diff --git a/_quarto.yml b/_quarto.yml index d676270a2..5d0b0949c 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -17,32 +17,32 @@ book: chapters: - index.md - intro_lec/introduction.qmd - # - pandas_1/pandas_1.qmd - # - pandas_2/pandas_2.qmd - # - pandas_3/pandas_3.qmd - # - eda/eda.qmd - # - regex/regex.qmd - # - visualization_1/visualization_1.qmd - # - visualization_2/visualization_2.qmd - # - sampling/sampling.qmd - # - intro_to_modeling/intro_to_modeling.qmd - # - constant_model_loss_transformations/loss_transformations.qmd - # - ols/ols.qmd - # - gradient_descent/gradient_descent.qmd - # - feature_engineering/feature_engineering.qmd - # - case_study_HCE/case_study_HCE.qmd - # - cv_regularization/cv_reg.qmd - # - probability_1/probability_1.qmd - # - probability_2/probability_2.qmd - # - inference_causality/inference_causality.qmd - # # - case_study_climate/case_study_climate.qmd - # - sql_I/sql_I.qmd - # - sql_II/sql_II.qmd - # - logistic_regression_1/logistic_reg_1.qmd - # - logistic_regression_2/logistic_reg_2.qmd - # - pca_1/pca_1.qmd - # - pca_2/pca_2.qmd - # - clustering/clustering.qmd + - pandas_1/pandas_1.qmd + - pandas_2/pandas_2.qmd + - pandas_3/pandas_3.qmd + - eda/eda.qmd + - regex/regex.qmd + - visualization_1/visualization_1.qmd + - visualization_2/visualization_2.qmd + - sampling/sampling.qmd + - intro_to_modeling/intro_to_modeling.qmd + - constant_model_loss_transformations/loss_transformations.qmd + - ols/ols.qmd + - gradient_descent/gradient_descent.qmd + - feature_engineering/feature_engineering.qmd + - case_study_HCE/case_study_HCE.qmd + - cv_regularization/cv_reg.qmd + - probability_1/probability_1.qmd + - probability_2/probability_2.qmd + - inference_causality/inference_causality.qmd + # - case_study_climate/case_study_climate.qmd + - sql_I/sql_I.qmd + - sql_II/sql_II.qmd + - logistic_regression_1/logistic_reg_1.qmd + - logistic_regression_2/logistic_reg_2.qmd + - pca_1/pca_1.qmd + - pca_2/pca_2.qmd + - clustering/clustering.qmd sidebar: logo: "data100_logo.png" diff --git a/docs/Principles-and-Techniques-of-Data-Science.pdf b/docs/Principles-and-Techniques-of-Data-Science.pdf deleted file mode 100644 index baa27bbe0..000000000 Binary files a/docs/Principles-and-Techniques-of-Data-Science.pdf and /dev/null differ diff --git a/docs/case_study_HCE/case_study_HCE.html b/docs/case_study_HCE/case_study_HCE.html new file mode 100644 index 000000000..5286221d5 --- /dev/null +++ b/docs/case_study_HCE/case_study_HCE.html @@ -0,0 +1,1513 @@ + + + + + + + + + +15  Case Study in Human Contexts and Ethics – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

15  Case Study in Human Contexts and Ethics


Note: Given the nuanced nature of some of the arguments made in the lecture, it is highly recommended that you view the lecture recording given by Professor Ari Edmundson to fully engage with and understand the material. The course notes will have the same broader structure but are by no means comprehensive.

+
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
  • Learn about the ethical dilemmas that data scientists face.
  • Examine the Cook County Assessor’s Office and Property Appraisal case study for fairness in housing appraisal.
  • Know how to critique models using contextual knowledge about data.
+
+
+
+
+

Disclaimer: The following note discusses issues of structural racism. Some of the items in this note may be sensitive and may or may not be the opinions, ideas, and beliefs of the students who collected the materials. The Data 100 course staff tries its best to only present information that is relevant for teaching the lessons at hand.

+
+

As data scientists, our goal is to wrangle data, recognize patterns, and use them to make predictions within a certain context. However, it is often easy to abstract data away from its original context. In previous lectures, we’ve explored datasets like elections, babynames, and world_bank to learn fundamental techniques for working with data, but rarely do we stop to ask questions like “How/when was this data collected?” or “Are there any inherent biases in the data that could affect results?” It turns out that inquiries like these profoundly affect how data scientists approach a task and convey their findings. This lecture explores these ethical dilemmas through the lens of a case study.

+

Let’s immerse ourselves in the real-world story of data scientists working for an organization called the Cook County Assessor’s Office (CCAO) located in Chicago, Illinois. Their job is to estimate the values of houses in order to assign property taxes. This is because the tax burden in this area is determined by the estimated value of a house rather than its price. Since value changes over time and has no obvious indicators, the CCAO created a model to estimate the values of houses. In this note, we will dig deep into biases that arose in the model, the consequences to human lives, and what we can learn from this example to avoid the same mistakes in the future.

+
+

15.1 The Problem

+

What prompted the formation of the CCAO and led to the development of this model? In 2017, an investigative report by the Chicago Tribune uncovered a major scandal in the property assessment system managed by the CCAO under the watch of former County Assessor Joseph Berrios. Working with experts from the University of Chicago, the Chicago Tribune journalists found that the CCAO’s model for estimating house value perpetuated a highly regressive tax system that disproportionately burdened African-American and Latinx homeowners in Cook County. How did the journalists demonstrate this disparity?

+
+ +
+

The image above shows two standard metrics to estimate the fairness of assessments: the coefficient of dispersion and price-related differential. How they’re calculated is out of scope for this class, but you can assume that these metrics have been rigorously tested by experts in the field and are a good indication of fairness. As we see above, calculating these metrics for the Cook County prices revealed that the pricing created by the CCAO did not fall in acceptable ranges. While this on its own is not the entire story, it was a good indicator that something fishy was going on.

+
+ +
+

This prompted journalists to investigate if the CCAO’s model itself was producing fair tax rates. When accounting for the homeowner’s income, they found that the model actually produced a regressive tax rate (see figure above). A tax rate is regressive if the percentage tax rate is higher for individuals with lower net income; it is progressive if the percentage tax rate is higher for individuals with higher net income.

+
+ +
+


+

Digging further, journalists found that the model was not only regressive and unfair to lower-income individuals, but it was also unfair to non-white homeowners (see figure above). The likelihood of a property being under- or over-assessed was highly dependent on the owner’s race, and that did not sit well with many homeowners.

+
+

15.1.1 Spotlight: Appeals

+

What was the cause of such a major issue? It might be easy to simply blame “biased” algorithms, but the main issue was not a faulty model. Instead, it was largely due to the appeals system, which enabled the wealthy and privileged to more easily and successfully challenge their assessments. Once given the CCAO model’s initial assessment of their home’s value, homeowners could choose to appeal to a board of elected officials to try to change the listed value of their home and, consequently, how much they are taxed. In theory, this sounds like a very fair system: a human being oversees the final pricing of houses rather than a computer algorithm. In reality, this ended up exacerbating the problem.

+
+

“Appeals are a good thing,” Thomas Jaconetty, deputy assessor for valuation and appeals, said in an interview. “The goal here is fairness. We made the numbers. We can change them.”

+
+
+ +
+


+

We can borrow lessons from Critical Race Theory —— on the surface, everyone has the legal right to try and appeal the value of their home. However, not everyone has an equal ability to do so. Those who have the money to hire tax lawyers to appeal for them have a drastically higher chance of trying and succeeding in their appeal (see above figure). Many homeowners who appealed were generally under-assessed compared to homeowners who did not (see figure below). Clearly, the model is part of a deeper institutional pattern rife with potential corruption.

+
+ +
+


+

In fact, Chicago boasts a large and thriving tax attorney industry dedicated precisely to appealing property assessments, reflected in the growing number of appeals in Cook County in the 21st century. Given wealthier, whiter neighborhoods typically have greater access to lawyers, they often appealed more and won reductions far more often than their less wealthy neighbors. In other words, those with higher incomes pay less in property tax, tax lawyers can grow their business due to their role in appeals, and politicians are socially connected to the aforementioned tax lawyers and wealthy homeowners. All these stakeholders have reasons to advertise the appeals system as an integral part of a fair system; after all, it serves to benefit them. Here lies the value in asking questions: a system that seems fair on the surface may, in reality, be unfair upon taking a closer look.

+
+
+

15.1.2 Human Impacts

+
+ +
+


+

What happened as a result of this corrupt system? As the Chicago Tribune reported, many African American and Latino homeowners purchased homes only to find their houses were later appraised at levels far higher than what they paid. As a result, homeowners were now responsible for paying significantly more in taxes every year than initially budgeted, putting them at risk of not being able to afford their homes and losing them.

+

The impact of the housing model extends beyond the realm of home ownership and taxation —— the issues of justice go much deeper. This model perpetuated much older patterns of racially discriminatory practices in Chicago and across the United States. Unfortunately, it is no accident that this happened in Chicago, one of the most segregated cities in the United States (source). These factors are central to informing us, as data scientists, about what is at stake.

+
+
+

15.1.3 Spotlight: Intersection of Real Estate and Race

+

Before we dive into how the CCAO used data science to “solve” this problem, let’s briefly go through the history of discriminatory housing practices in the United States to give more context on the gravity and urgency of this situation.

+

Housing and real estate, among other factors, have been some of the most significant and enduring drivers of structural racism and racial inequality in the United States since the Civil War. Housing is one of the main areas where inequalities are created and reproduced. In the early 20th century, Jim Crow laws were explicit in forbidding people of color from utilizing the same facilities —— such as buses, bathrooms, and pools —— as white individuals. This set of practices by government actors, in combination with overlapping practices driven by the private real estate industry, further served to make neighborhoods increasingly segregated.

+
+ +
+


+

Although advancements in civil rights have been made, the spirit of the laws is alive in many parts of the US. In the 1920s and 1930s, it was illegal for governments to actively segregate neighborhoods according to race, but other methods were available for achieving the same ends. One of the most notorious practices was redlining: the federal housing agencies’ process of distinguishing neighborhoods in a city in terms of relative risk. The goal was to increase access to homeownership for low-income Americans. In practice, however, it allowed real estate professionals to legally perpetuate segregation. The federal housing agencies deemed predominantly African American neighborhoods as high risk and colored them in red —— hence the name redlining —— making it nearly impossible for African Americans to own a home.

+

The origins of the data that made these maps possible lay in a kind of “racial data revolution” in the private real estate industry beginning in the 1920s. Segregation was established and reinforced in part through the work of real estate agents who were also very concerned with establishing reliable methods for predicting the value of a home. The effects of these practices continue to resonate today.

+
+ +
+Source: Colin Koopman, How We Became Our Data (2019) p. 137 +
+
+


+
+
+
+

15.2 The Response: Cook County Open Data Initiative

+

The response to this problem started in politics. A new assessor, Fritz Kaegi, was elected and created a new mandate with two goals:

+
    +
  1. Distributional equity in property taxation, meaning that properties of the same value are treated alike during assessments.
  2. +
  3. Creating a new Office of Data Science.
  4. +
+

He wanted to not only create a more accurate algorithmic model but also to design a new system to address the problems with the CCAO.

+
+ +
+


+

Let’s frame this problem through the lens of the data science lifecycle.

+
+ +
+
+

15.2.1 1. Question/Problem Formulation

+
+
+
+ +
+
+Driving Questions +
+
+
+
    +
  • What do we want to know?
  • +
  • What problems are we trying to solve?
  • +
  • What are the hypotheses we want to test?
  • +
  • What are our metrics for success?
  • +
+
+
+

The old system was unfair because it was systemically inaccurate; it made one kind of error for one group, and another kind of error for another. The new Office of Data Science’s goal was to “create a robust pipeline that accurately assesses property values at scale and is fair”, and in turn, they defined fairness as accuracy: “the ability of our pipeline to accurately assess all residential property values, accounting for disparities in geography, information, etc.” Thus, the plan —— make the system more fair —— was already framed in terms of a task appropriate to a data scientist: make the assessments more accurate (or, more precisely, minimize errors in a particular way).

+

The idea here is that if the model is more accurate, it will also (perhaps necessarily) become more fair, which is a big assumption. There are, in a sense, two different problems —— make accurate assessments, and make a fair system. Treating these two problems as one makes it a more straightforward issue that can be solved technically (with a good model), but it does raise the question of whether fairness and accuracy are one and the same.

+

For now, let’s just talk about the technical part of this —— accuracy. For you, the data scientist, this part might feel more comfortable. We can determine some metrics of success and frame a social problem as a data science problem.

+
+
+
+ +
+
+Definitions: Fairness and Transparency +
+
+
+

The definitions, as given by the Cook County Assessor’s Office, are given below:

+
    +
  • Fairness: The ability of our pipeline to accurately assess property values, accounting for disparities in geography, information, etc.
  • +
  • Transparency: The ability of the data science department to share and explain pipeline results and decisions to both internal and external stakeholders
  • +
+
+
+

The new Office of Data Science started by framing the problem and redefining their goals. They determined that they needed to:

+
    +
  1. Accurately, uniformly, and impartially assess the value of a home and accurately predict the sale price of a home within the next year by: +
      +
    • Following international standards (e.g., coefficient of dispersion)
    • +
    • Predicting the value of all homes with as little total error as possible
    • +
  2. +
  3. Create a robust pipeline that accurately assesses property values at scale and is fair to all people by: +
      +
    • Disrupting the circuit of corruption (Board of Review appeals process)
    • +
    • Eliminating regressivity
    • +
    • Engendering trust in the system among all stakeholders
    • +
  4. +
+

The goals defined above lead us to ask the question: what does it actually mean to accurately assess property values, and what role does “scale” play?

+
    +
  1. What is an assessment of a home’s value?
  2. +
  3. What makes one assessment more accurate than another?
  4. +
  5. What makes one batch of assessments more accurate than another batch?
  6. +
+

Each of the above questions leads to a slew of more questions. Considering just the first question, one answer could be that an assessment is an estimate of the value of a home. This leads to more inquiries: what is the value of a home? What determines it? How do we know? For this class, we take it to be the house’s market value, or how much it would sell for.

+

Unfortunately, if you are the county assessor, it becomes hard to determine property values with this definition. After all, you can’t make everyone sell their house every year. And as many properties haven’t been sold in decades, every year that passes makes that previous sale less reliable as an indicator.

+

So how would one generate reliable estimates? You’re probably thinking, well, with data about homes and their sale prices, you can probably predict the value of a property reliably. Even if you’re not a data scientist, you might know there are websites like Zillow and Redfin that estimate what properties would sell for and constantly update those estimates. They don’t know the values, but they estimate them. How do you think they do this? Let’s start with the data —— which is the next step in the lifecycle.

+
+
+

15.2.2 2. Data Acquisition and Cleaning

+
+
+
+ +
+
+Driving Questions +
+
+
+
    +
  • What data do we have, and what data do we need?
  • +
  • How will we sample more data?
  • +
  • Is our data representative of the population we want to study?
  • +
+
+
+

To generate estimates, the data scientists used two datasets. The first contained all recorded sales data from 2013 to 2019. The second contained property characteristics, including a property identification number and physical characteristics (e.g., age, bedroom, baths, square feet, neighborhood, site desirability, etc.).
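For illustration, here is how two such tables might be joined on a property identification number using pandas. This is only a sketch: the column names and values below are invented, not the CCAO’s actual schema.

```python
import pandas as pd

# Toy stand-ins for the two datasets; all names and numbers are hypothetical.
sales = pd.DataFrame({
    "pin": [101, 102, 103],                      # property identification number
    "sale_year": [2015, 2018, 2019],
    "sale_price": [180_000, 240_000, 320_000],
})
characteristics = pd.DataFrame({
    "pin": [101, 102, 103, 104],
    "age": [45, 12, 30, 80],
    "bedrooms": [2, 3, 4, 3],
    "neighborhood": ["A", "A", "B", "B"],
})

# A left join keeps every property; those without a recorded sale get NaN prices.
merged = characteristics.merge(sales, on="pin", how="left")
print(merged)
```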

+
+ +
+


+

As they examined the datasets, they asked the questions:

+
    +
  1. How was this data collected?
  2. +
  3. When was this data collected?
  4. +
  5. Who collected this data?
  6. +
  7. For what purposes was the data collected?
  8. +
  9. How and why were particular categories created?
  10. +
+

With so much data available, data scientists worked to see how all the different data points correlated with each other and with the sales prices. By discovering patterns in datasets containing known sale prices and characteristics of similar and nearby properties, training a model on this data, and applying that model to all the properties without sales data, it became possible to use a linear model to predict the sale price (“fair market value”) of unsold properties.
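As a rough sketch of that workflow (with toy data; the features, prices, and column names below are invented for illustration and are not the CCAO’s), one could fit a linear model on properties with recorded sales and apply it to the rest:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical properties with known sale prices.
sold = pd.DataFrame({
    "square_feet": [900, 1200, 1500, 2000],
    "bedrooms":    [2, 3, 3, 4],
    "sale_price":  [150_000, 210_000, 260_000, 340_000],
})
# Hypothetical properties with no recorded sale.
unsold = pd.DataFrame({
    "square_feet": [1100, 1800],
    "bedrooms":    [2, 4],
})

features = ["square_feet", "bedrooms"]
model = LinearRegression().fit(sold[features], sold["sale_price"])  # learn from known sales

# Apply the fitted model to estimate a "fair market value" for unsold properties.
unsold["estimated_value"] = model.predict(unsold[features])
print(unsold)
```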

+

Some other key questions data scientists asked about the data were:

+
    +
  1. Are any attributes of a house differentially reported? How might these attributes be differentially reported?
  2. +
  3. How are “improvements” like renovations tracked and updated?
  4. +
  5. Which data is missing, and for which neighborhoods or populations is it missing?
  6. +
  7. What other data sources or attributes might be valuable?
  8. +
+

Attributes can have different likelihoods of appearing in the data. For example, housing data from the floodplain geographic region of Chicago were less represented than data from other regions.

+

Features can also be reported at different rates. Improvements in homes, which tend to increase property value, were unlikely to be reported by the homeowners.

+

Additionally, they found that there was simply more missing data in lower-income neighborhoods.

+
+
+

15.2.3 3. Exploratory Data Analysis

+
+
+
+ +
+
+Driving Questions +
+
+
+
    +
  • How is our data organized, and what does it contain?
  • +
  • Do we already have relevant data?
  • +
  • What are the biases, anomalies, or other issues with the data?
  • +
  • How do we transform the data to enable effective analysis?
  • +
+
+
+

Before the modeling step, they investigated a multitude of crucial questions:

+
    +
  1. Which attributes are most predictive of sales price?
  2. +
  3. Is the data uniformly distributed?
  4. +
  5. Do all neighborhoods have recent data? Do all neighborhoods have the same granularity?
    +
  6. +
  7. Do some neighborhoods have missing or outdated data?
  8. +
+

They found that certain features, such as bedroom number, were much more useful in determining house value for certain neighborhoods than for others. This informed them that different models should be used depending on the neighborhood.

+

They also noticed that low-income neighborhoods had disproportionately spottier data. This informed them that they needed to develop new data collection practices, including finding new sources of data.

+
+
+

15.2.4 4. Prediction and Inference

+
+
+
+ +
+
+Driving Questions +
+
+
+
    +
  • What does the data say about the world?
  • +
  • Does it answer our questions or accurately solve the problem?
  • +
  • How robust are our conclusions, and can we trust the predictions?
  • +
+
+
+

Rather than using a singular model to predict sale prices (“fair market value”) of unsold properties, the CCAO predicts sale prices using machine learning models that discover patterns in data sets containing known sale prices and characteristics of similar and nearby properties. It uses different model weights for each neighborhood.

+

Compared to traditional mass appraisal, the CCAO’s new approach is more granular and more sensitive to neighborhood variations.

+

But how do we know if an assessment is accurate? We can see how our model performs when predicting the sales prices of properties it wasn’t trained on! We can then evaluate how “close” our estimate was to the actual sales price, using Root Mean Square Error (RMSE). However, is RMSE a good proxy for fairness in this context?

+

Broad metrics of error like RMSE can be limiting when evaluating the “fairness” of a property appraisal system. RMSE does not tell us anything about the distribution of errors, whether the errors are positive or negative, or the relative sizes of the errors. It does not tell us anything about the regressivity of the model; it just gives a rough measure of our model’s overall error.

+

Even with a low RMSE, we can’t guarantee a fair model. The error we see (no matter how small) may be a result of our model overvaluing less expensive homes and undervaluing more expensive homes.
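To make this concrete, here is a small illustrative calculation with made-up numbers. The assessments below track sale prices fairly closely, yet the assessment-to-sale ratios fall as prices rise, which is exactly the regressive pattern that RMSE alone does not expose.

```python
import numpy as np

# Hypothetical sale prices and model assessments (not real Cook County data).
sale_price = np.array([100_000, 150_000, 300_000, 600_000, 900_000])
assessed   = np.array([115_000, 165_000, 305_000, 580_000, 860_000])

rmse = np.sqrt(np.mean((assessed - sale_price) ** 2))

# One simple regressivity check: do assessment-to-sale ratios fall as prices rise?
ratios = assessed / sale_price
trend = np.corrcoef(sale_price, ratios)[0, 1]

print(f"RMSE: {rmse:,.0f}")
print(f"Correlation between sale price and assessment ratio: {trend:.2f}")  # negative => regressive pattern
```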

+

Regarding accuracy, it’s important to ask what makes a batch of assessments better or more accurate than another batch of assessments. The value of a home that a model predicts is relational: it’s a product of the interaction of social and technical elements, so property assessment involves social trust.

+

Why should any particular individual believe that the model is accurate for their property? Why should any individual trust the model?

+

To foster public trust, the CCAO focuses on “transparency”, putting data, models, and the pipeline onto GitLab. By doing so, they can better equate the production of “accurate assessments” with “fairness”.

+

There’s a lot more to be said here on the relationship between accuracy, fairness, and metrics we tend to use when evaluating our models. Given the nuanced nature of the argument, it is recommended you view the corresponding lecture as the course notes are not as comprehensive for this portion of the lecture.

+
+
+

15.2.5 5. Results and Conclusions

+
+
+
+ +
+
+Driving Questions +
+
+
+
    +
  • How successful is the system for each goal? +
      +
    • Accuracy/uniformity of the model
    • +
    • Fairness and transparency that eliminates regressivity and engenders trust
    • +
  • +
  • How do you know?
  • +
+
+
+

Unfortunately, it may be naive to hope that a more accurate and transparent algorithm will translate into more fair outcomes in practice. Even if our model is perfectly optimized according to the standards of fairness we’ve set, there is no guarantee that people will actually pay their expected share of taxes as determined by the model. While it is a good step in the right direction, maintaining a level of social trust is key to ensuring people pay their fair share.

+

Despite all their best efforts, the CCAO is still struggling to create fair assessments and engender trust.

+

Stories like this one show that total taxes for residential properties went up overall (because commercial taxes went down). But looking at the distribution, we can see that the biggest increases occurred in wealthy neighborhoods, and the biggest decreases occurred in poorer, predominantly Black neighborhoods. So maybe there was some success after all?

+

However, it’ll ultimately be hard to overcome the propensity of the board of review to reduce the tax burden of the rich, preventing the CCAO from creating a truly fair system. This is in part because there are many cases where the model makes big, frustrating mistakes. In some cases like this one, it is due to spotty data.

+
+
+
+

15.3 Summary: Questions to Consider

+
    +
  1. Question/Problem Formulation

    +
      +
    • Who is responsible for framing the problem?
    • +
    • Who are the stakeholders? How are they involved in the problem framing?
    • +
    • What do you bring to the table? How does your positionality affect your understanding of the problem?
    • +
    • What are the narratives that you’re tapping into?
    • +
  2. +
  3. Data Acquisition and Cleaning

    +
      +
    • Where does the data come from?
    • +
    • Who collected it? For what purpose?
    • +
    • What kinds of collecting and recording systems and techniques were used?
    • +
    • How has this data been used in the past?
    • +
    • What restrictions are there on access to the data, and what enables you to have access?
    • +
  4. +
  5. Exploratory Data Analysis & Visualization

    +
      +
    • What kind of personal or group identities have become salient in this data?
    • +
    • Which variables became salient, and what kinds of relationships do we see between them?
    • +
    • Do any of the relationships made visible lend themselves to arguments that might be potentially harmful to a particular community?
    • +
  6. +
  7. Prediction and Inference

    +
      +
    • What does the prediction or inference do in the world?
    • +
    • Are the results useful for the intended purposes?
    • +
    • Are there benchmarks to compare the results?
    • +
    • How are your predictions and inferences dependent upon the larger system in which your model works?
    • +
  8. +
+
+
+

15.4 Key Takeaways

+
    +
  1. Accuracy is a necessary, but not sufficient, condition of a fair system.
  2. +
  3. Fairness and transparency are context-dependent and sociotechnical concepts.
  4. +
  5. Learn to work with contexts, and consider how your data analysis will reshape them.
  6. +
  7. Keep in mind the power, and limits, of data analysis.
  8. +
+ + + + + \ No newline at end of file diff --git a/docs/case_study_HCE/images/data_life_cycle.PNG b/docs/case_study_HCE/images/data_life_cycle.PNG new file mode 100644 index 000000000..aef5d21de Binary files /dev/null and b/docs/case_study_HCE/images/data_life_cycle.PNG differ diff --git a/docs/case_study_HCE/images/vis_1.png b/docs/case_study_HCE/images/vis_1.png new file mode 100644 index 000000000..a9ecac7b3 Binary files /dev/null and b/docs/case_study_HCE/images/vis_1.png differ diff --git a/docs/case_study_HCE/images/vis_10.png b/docs/case_study_HCE/images/vis_10.png new file mode 100644 index 000000000..61daefb9d Binary files /dev/null and b/docs/case_study_HCE/images/vis_10.png differ diff --git a/docs/case_study_HCE/images/vis_2.png b/docs/case_study_HCE/images/vis_2.png new file mode 100644 index 000000000..db39da9e0 Binary files /dev/null and b/docs/case_study_HCE/images/vis_2.png differ diff --git a/docs/case_study_HCE/images/vis_3.jpg b/docs/case_study_HCE/images/vis_3.jpg new file mode 100644 index 000000000..72e645396 Binary files /dev/null and b/docs/case_study_HCE/images/vis_3.jpg differ diff --git a/docs/case_study_HCE/images/vis_4.png b/docs/case_study_HCE/images/vis_4.png new file mode 100644 index 000000000..472809dfc Binary files /dev/null and b/docs/case_study_HCE/images/vis_4.png differ diff --git a/docs/case_study_HCE/images/vis_5.png b/docs/case_study_HCE/images/vis_5.png new file mode 100644 index 000000000..74853eb27 Binary files /dev/null and b/docs/case_study_HCE/images/vis_5.png differ diff --git a/docs/case_study_HCE/images/vis_6.png b/docs/case_study_HCE/images/vis_6.png new file mode 100644 index 000000000..60d63cfb5 Binary files /dev/null and b/docs/case_study_HCE/images/vis_6.png differ diff --git a/docs/case_study_HCE/images/vis_7.png b/docs/case_study_HCE/images/vis_7.png new file mode 100644 index 000000000..ed490433d Binary files /dev/null and b/docs/case_study_HCE/images/vis_7.png differ diff --git a/docs/case_study_HCE/images/vis_8.png b/docs/case_study_HCE/images/vis_8.png new file mode 100644 index 000000000..e2ebc46b4 Binary files /dev/null and b/docs/case_study_HCE/images/vis_8.png differ diff --git a/docs/case_study_HCE/images/vis_9.png b/docs/case_study_HCE/images/vis_9.png new file mode 100644 index 000000000..aab375803 Binary files /dev/null and b/docs/case_study_HCE/images/vis_9.png differ diff --git a/docs/clustering/clustering.html b/docs/clustering/clustering.html new file mode 100644 index 000000000..c1feae485 --- /dev/null +++ b/docs/clustering/clustering.html @@ -0,0 +1,1116 @@ + + + + + + + + + +26  Clustering – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

26  Clustering

+
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
  • Introduction to clustering
  • Assessing the taxonomy of clustering approaches
  • K-Means clustering
  • Clustering with no explicit loss function: minimizing inertia
  • Hierarchical Agglomerative Clustering
  • Picking K: a hyperparameter
+
+
+
+

Last time, we began our journey into unsupervised learning by discussing Principal Component Analysis (PCA).

+

In this lecture, we will explore another very popular unsupervised learning concept: clustering. Clustering allows us to “group” similar datapoints together without being given labels of what “class” or where each point explicitly comes from. We will discuss two clustering algorithms: K-Means clustering and hierarchical agglomerative clustering, and we’ll examine the assumptions, strengths, and drawbacks of each one.

+
+

26.1 Review: Taxonomy of Machine Learning

+
+

26.1.1 Supervised Learning

+

In supervised learning, our goal is to create a function that maps inputs to outputs. Each model is learned from example input/output pairs (training set), validated using input/output pairs, and eventually tested on more input/output pairs. Each pair consists of:

+
    +
  • Input vector
  • +
  • Output value (label)
  • +
+

In regression, our output value is quantitative, and in classification, our output value is categorical.

+
+
+

+
ML taxonomy
+
+
+
+
+

26.1.2 Unsupervised Learning

+

In unsupervised learning, our goal is to identify patterns in unlabeled data. In this type of learning, we do not have input/output pairs. Sometimes, we may have labels but choose to ignore them (e.g. PCA on labeled data). Instead, we are more interested in the inherent structure of the data we have rather than trying to simply predict a label using that structure of data. For example, if we are interested in dimensionality reduction, we can use PCA to reduce our data to a lower dimension.

+

Now, let’s consider a new problem: clustering.

+
+
+

26.1.3 Clustering Examples

+
+

26.1.3.1 Example 1

+

Consider this figure from Fall 2019 Midterm 2. The original dataset had 8 dimensions, but we have used PCA to reduce our data down to 2 dimensions.

+
+blobs +
+

Each point represents the 1st and 2nd principal component of how much time patrons spent at 8 different zoo exhibits. Visually and intuitively, we could potentially guess that this data belongs to 3 groups: one for each cluster. The goal of clustering is now to assign each point (in the 2 dimensional PCA representation) to a cluster.

+
+clusters_ex1 +
+

This is an unsupervised task, as:

+
    +
  • We don’t have labels for each visitor.
  • +
  • We want to infer patterns even without labels.
  • +
+
+
+

26.1.3.2 Example 2: Netflix

+

Now suppose you’re Netflix and are looking at information on customer viewing habits. Clustering can come in handy here. We can assign each person or show to a “cluster.” (Note: while we don’t know for sure that Netflix actually uses ML clustering to identify these categories, they could, in principle.)

+

Keep in mind that with clustering, we don’t need to define clusters in advance; it discovers groups automatically. On the other hand, with classification, we have to decide labels in advance. This marks one of the key differences between clustering and classification.

+
+
+

26.1.3.3 Example 3: Education

+

Let’s say we’re working with student-generated materials and pass them into the S-BERT module to extract sentence embeddings. Features from clusters are extracted to:

+
    +
  1. Detect anomalies in group activities
  2. +
  3. Predict the group’s median quiz grade
  4. +
+
+outline-ex3 +
+

Here we can see the outline of the anomaly detection module. It consists of:

+
    +
  • S-BERT feature extraction
  • +
  • Topic extraction
  • +
  • Feature extraction
  • +
  • 16D \(\rightarrow\) 2D PCA dimensionality reduction and 2D \(\rightarrow\) 16D reconstruction
  • +
  • Anomaly detection based on reconstruction error
  • +
+

Looking more closely at our clustering, we can better understand the different components, which are represented by the centers. Below we have two examples.

+
+components +
+

Note that the details for this example are not in scope.

+
+
+

26.1.3.4 Example 4: Reverse Engineering Biology

+

Now, consider the plot below:

+
+genes +
+

The rows of this plot are conditions (e.g., a row might be: “poured acid on the cells”), and the columns are genes. The green coloration indicates that the gene was “off” whereas red indicates the gene was “on”. For example, the ~9 genes in the top left corner of the plot were all turned off by the 6 experiments (rows) at the top.

+

In a clustering lens, we might be interested in clustering similar observations together based on the reactions (on/off) to certain experiments.

+

For example, here is a look at our data before and after clustering.

+
+beforeandafter4 +
+

Note: apologies if you can’t differentiate red from green by eye! Historical visualizations are not always the best.

+
+
+
+
+

26.2 Taxonomy of Clustering Approaches

+
+taxonomy +
+

There are many types of clustering algorithms, and they all have strengths, inherent weaknesses, and different use cases. We will first focus on a partitional approach: K-Means clustering.

+
+
+

26.3 K-Means Clustering

+

The most popular clustering approach is K-Means. The algorithm itself entails the following:

+
    +
  1. Pick an arbitrary \(k\), and randomly place \(k\) “centers”, each a different color.

  2. +
  3. Repeat until convergence:

    +
      +
    1. Color points according to the closest center.
    2. +
    3. Move the center for each color to the center of points with that color.
    4. +
  4. +
+

Consider the following data with an arbitrary \(k = 2\) and randomly placed “centers” denoted by the different colors (blue, orange):

+
+init_cluster +
+

Now, we will follow the rest of the algorithm. First, let us color each point according to the closest center:

+
+cluster_class +
+

Next, we will move the center for each color to the center of points with that color. Notice how the centers are generally well-centered amongst the data that shares its color.

+
+cluster_iter1 +
+

Assume this process (re-color and re-set centers) repeats for a few more iterations. We eventually reach this state.

+
+cluster_iter5 +
+

After this iteration, the center stays still and does not move at all. Thus, we have converged, and the clustering is complete!
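Below is a minimal NumPy sketch of the two alternating steps described above (color points by the nearest center, then move each center to the mean of its points). It assumes Euclidean distance and, for simplicity, that no cluster ever ends up empty, which is not guaranteed in general.

```python
import numpy as np

def k_means(X, k, n_iters=100, seed=0):
    """Bare-bones K-Means sketch: alternate coloring and re-centering."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]       # random initial centers
    for _ in range(n_iters):
        # (a) Color each point according to the closest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        colors = dists.argmin(axis=1)
        # (b) Move each center to the mean of the points with its color.
        new_centers = np.array([X[colors == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):                     # centers stopped moving: converged
            break
        centers = new_centers
    return centers, colors

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
centers, colors = k_means(X, k=2)
```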

+
+

26.3.0.1 A Quick Note

+

K-Means is a completely different algorithm than K-Nearest Neighbors. K-means is used for clustering, where each point is assigned to one of \(K\) clusters. On the other hand, K-Nearest Neighbors is used for classification (or, less often, regression), and the predicted value is typically the most common class among the \(K\)-nearest data points in the training set. The names may be similar, but there isn’t really anything in common.

+
+
+
+

26.4 Minimizing Inertia

+

Consider the following example where \(K = 4\):

+
+four_cluster +
+

Due to the randomness of where the \(K\) centers initialize/start, you will get a different output/clustering every time you run K-Means. Consider three possible K-Means outputs; the algorithm has converged, and the colors denote the final cluster each point is assigned to.

+
+random_outputs +
+


Which clustering output is the best? To evaluate different clustering results, we need a loss function.

+

The two common loss functions are:

+
    +
  • Inertia: Sum of squared distances from each data point to its center.
  • +
  • Distortion: Weighted sum of squared distances from each data point to its center.
  • +
+
+inertia_distortion +
+

In the example above:

+
    +
  • Calculated inertia: \(0.47^2 + 0.19^2 + 0.34^2 + 0.25^2 + 0.58^2 + 0.36^2 + 0.44^2\)
  • +
  • Calculated distortion: \(\frac{0.47^2 + 0.19^2 + 0.34^2}{3} + \frac{0.25^2 + 0.58^2 + 0.36^2 + 0.44^2}{4}\)
  • +
+

Switching back to the four-cluster example at the beginning of this section, random.seed(25) had an inertia of 44.96, random.seed(29) had an inertia of 45.95, and random.seed(40) had an inertia of 54.35. It seems that the best clustering output was random.seed(25) with an inertia of 44.96!
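The inertia values above come from the lecture’s dataset, but the effect is easy to reproduce on any data: with a single initialization per run, different seeds can converge to different local optima and therefore different inertias. Here is a sketch using scikit-learn’s KMeans (its inertia_ attribute is exactly the quantity discussed here); the synthetic data and printed numbers will not match the figures above.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic, overlapping blobs so that different initializations can disagree.
X, _ = make_blobs(n_samples=200, centers=4, cluster_std=2.0, random_state=42)

for seed in [25, 29, 40]:
    km = KMeans(n_clusters=4, n_init=1, random_state=seed).fit(X)   # one initialization per run
    print(f"seed {seed}: inertia = {km.inertia_:.2f}")
```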

+

It turns out that the function K-Means is trying to minimize is inertia, but it often fails to find the global optimum. Why does this happen? We can think of K-Means as a pair of optimizers that take turns. The first optimizer holds center positions constant and optimizes data colors. The second optimizer holds data colors constant and optimizes center positions. Neither optimizer gets full control!

+

This is a hard problem: give an algorithm that exactly optimizes inertia FOR A GIVEN \(K\), where \(K\) is picked in advance. Your algorithm should return the EXACT best centers and colors, but you don’t need to worry about runtime.

+

Note: This is a bit of a CS61B/CS70/CS170 problem, so do not worry about completely understanding the tricky predicament we are in too much!

+

A potential algorithm:

+
    +
  • For all possible \(k^n\) colorings: +
      +
    • Compute the \(k\) centers for that coloring.
    • +
    • Compute the inertia for the \(k\) centers. +
        +
      • If current inertia is better than best known, write down the current centers and coloring and call that the new best known.
      • +
    • +
  • +
+

No better algorithm has been found for solving the problem of minimizing inertia exactly.
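A sketch of the brute-force search described above; it enumerates all \(k^n\) colorings, so it is only feasible for tiny inputs.

```python
import numpy as np
from itertools import product

def exact_min_inertia(X, k):
    """Exhaustively search every coloring; exponential in len(X), tiny inputs only."""
    best_inertia, best_centers, best_coloring = np.inf, None, None
    for coloring in product(range(k), repeat=len(X)):
        coloring = np.array(coloring)
        if len(np.unique(coloring)) < k:          # skip colorings that leave a cluster empty
            continue
        centers = np.array([X[coloring == j].mean(axis=0) for j in range(k)])
        inertia = ((X - centers[coloring]) ** 2).sum()   # sum of squared distances to centers
        if inertia < best_inertia:
            best_inertia, best_centers, best_coloring = inertia, centers, coloring
    return best_inertia, best_centers, best_coloring

X = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.0], [3.1, 2.9]])
inertia, centers, coloring = exact_min_inertia(X, k=2)
print(inertia, coloring)
```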

+
+
+

26.5 Hierarchical Agglomerative Clustering

+

Now, let us consider hierarchical agglomerative clustering.

+
+hierarchical_approach +
+


Consider the following results of two K-Means clustering outputs:

+
+clustering_comparison +
+


Which clustering result do you like better? It seems K-Means likes the one on the right better because it has lower inertia (the sum of squared distances from each data point to its center), but this raises some questions:

+
    +
  • Why is the inertia on the right lower? K-Means optimizes for distance, not “blobbiness”.
  • +
  • Is clustering on the right “wrong”? Good question!
  • +
+

Now, let us introduce Hierarchical Agglomerative Clustering! We start with every data point in a separate cluster, and we’ll keep merging the most similar pairs of data points/clusters until we have one big cluster left. This is called a bottom-up or agglomerative method.

+

There are various ways to decide the order of combining clusters called Linkage Criterion:

+
    +
  • Single linkage (similarity of the most similar): the distance between two clusters is the minimum distance between a point in the first cluster and a point in the second.
  • +
  • Complete linkage (similarity of the least similar): the distance between two clusters is the maximum distance between a point in the first cluster and a point in the second.
  • +
  • Average linkage: average similarity of pairs of points in clusters.
  • +
+

The linkage criterion decides how we measure the “distance” between two clusters. Regardless of the criterion we choose, the aim is to combine the two clusters that have the minimum “distance” between them, with the distance computed as per that criterion. In the case of complete linkage, for example, that means picking the two clusters that minimize the maximum distance between a point in the first cluster and a point in the second.
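As a small illustration (using scikit-learn’s AgglomerativeClustering on synthetic blobs, which is one common implementation rather than anything specific to this lecture), the linkage argument selects the criterion:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=30, centers=2, random_state=0)

# The same data can be grouped differently depending on the linkage criterion.
for linkage in ["single", "complete", "average"]:
    labels = AgglomerativeClustering(n_clusters=2, linkage=linkage).fit_predict(X)
    print(linkage, labels)
```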

+
+linkage +
+

When the algorithm starts, every data point is in its own cluster. In the plot below, there are 12 data points, so the algorithm starts with 12 clusters. As the clustering begins, it assesses which clusters are the closest together.

+
+agg1 +
+

The closest clusters are 10 and 11, so they are merged together.

+
+agg2 +
+

Next, points 0 and 4 are merged together because they are closest.

+
+agg3 +
+

At this point, we have 10 clusters: 8 with a single point (clusters 1, 2, 3, 4, 5, 6, 7, 8, and 9) and 2 with 2 points (clusters 0 and 10).

+

Although clusters 0 and 3 are not the closest, let us consider if we were trying to merge them. A tricky question arises: what is the “distance” between clusters 0 and 3? We can use the complete linkage approach, which uses the maximum distance among all pairs of points between the two groups, to decide which pair of clusters has the smaller “distance”.

+
+agg4 +
+

Let us assume the algorithm runs a little longer, and we have reached the following state. Clusters 0 and 7 are up next, but why? The max line between any member of 0 and 6 is longer than the max line between any member of 0 and 7.

+
+agg5 +
+

Thus, 0 and 7 are merged into 0 as they are closer under the complete linkage criterion.

+

After more iterations, we finally converge to the plot on the left. There are two clusters (0, 1), and the agglomerative algorithm has converged.

+
+agg6 +
+


Notice that on the full dataset, our agglomerative clustering algorithm achieves the more “correct” output.

+
+

26.5.1 Clustering, Dendrograms, and Intuition

Agglomerative clustering is one form of “hierarchical clustering.” It is interpretable because we can keep track of when two clusters got merged (each cluster is a tree), and we can visualize the merging hierarchy, resulting in a “dendrogram.” We won’t discuss dendrograms any further in this course, but you might see them in the wild. Here are some examples:

+dendro_1 dendro_2 +

+
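If you want to produce a dendrogram like the ones above yourself, SciPy’s hierarchy module is one common option. The sketch below uses synthetic data rather than the lecture’s.

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=12, centers=2, random_state=1)

Z = linkage(X, method="complete")   # each row of Z records one merge and its distance
dendrogram(Z)
plt.ylabel("merge distance")
plt.show()
```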

Some professors use agglomerative clustering for grading bins; if there is a big gap between two people, draw a grading threshold there. The idea is that grade clustering should be more like the figure below on the left, not the right.

+
+grading +
+
+
+
+

26.6 Picking K

+

The algorithms we’ve discussed require us to pick a \(K\) before we start. But how do we pick \(K\)? Often, the best \(K\) is subjective. For example, consider the state plot below.

+
+states +
+

How many clusters are there here? For K-Means, one approach to determine this is to plot inertia versus many different \(K\) values. We’d pick the \(K\) in the elbow, where we get diminishing returns afterward. Note that big, complicated data often lacks an elbow, so this method is not foolproof. Here, we would likely select \(K = 2\).
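A sketch of how such an elbow plot might be generated with scikit-learn; the synthetic dataset here is not the one pictured above.

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=2, random_state=0)

ks = range(1, 9)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(ks, inertias, marker="o")
plt.xlabel("K")
plt.ylabel("inertia")
plt.show()   # look for the "elbow" where adding clusters stops helping much
```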

+
+elbow +
+
+

26.6.1 Silhouette Scores

+

To evaluate how “well-clustered” a specific data point is, we can use the silhouette score, also termed the silhouette width. A high silhouette score indicates that a point is near the other points in its cluster; a low score means that it’s far from the other points in its cluster.

+
+high_low +
+

For a data point \(X\), score \(S\) is: \[S =\frac{B - A}{\max(A, B)}\] where \(A\) is the average distance to other points in the cluster, and \(B\) is the average distance to points in the closest cluster.

+

Consider what the highest possible value of \(S\) is and how that value can occur. The highest possible value of \(S\) is 1, which happens if every point in \(X\)’s cluster is right on top of \(X\); the average distance to other points in \(X\)’s cluster is \(0\), so \(A = 0\). Thus, \(S = \frac{B}{\max(0, B)} = \frac{B}{B} = 1\). \(S\) also approaches 1 when \(B\) is much greater than \(A\) (we denote this as \(B \gg A\)).

+

Can \(S\) be negative? The answer is yes. If the average distance to X’s clustermates is larger than the distance to the closest cluster, then this is possible. For example, the “low score” point on the right of the image above has \(S = -0.13\).
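In practice, silhouette scores are usually computed with a library rather than by hand. Here is a sketch using scikit-learn on synthetic data (not the dataset pictured in these notes):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_samples, silhouette_score

X, _ = make_blobs(n_samples=200, centers=2, random_state=0)

for k in [2, 3]:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))   # average S over all points

# Per-point scores S (the values a silhouette plot visualizes):
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
per_point = silhouette_samples(X, labels)
```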

+
+
+

26.6.2 Silhouette Plot

+

We can plot the silhouette scores for all of our datapoints. The x-axis represents the silhouette coefficient value or silhouette score. The y-axis tells us which cluster label the points belong to, as well as the number of points within a particular cluster. Points with large silhouette widths are deeply embedded in their cluster; the red dotted line shows the average. Below, we plot the silhouette score for our plot with \(K=2\).

+

silhouette plot for \(K = 2\)

+

Similarly, we can plot the silhouette score for the same dataset but with \(K=3\):

+

silhouette plot for \(K = 3\)

+

The average silhouette score is lower with 3 clusters, so \(K=2\) is a better choice. This aligns with our visual intuition as well.

+
+
+

26.6.3 Picking K: Real World Metrics

+

Sometimes you can rely on real-world metrics to guide your choice of \(K\). For t-shirts, we can either:

+
    +
  • Cluster heights and weights of customers with \(K = 3\) to design Small, Medium, and Large shirts
  • +
  • Cluster heights and weights of customers with \(K = 5\) to design XS, S, M, L, and XL shirts
  • +
+

To choose \(K\), consider projected costs and sales for the 2 different \(K\)s and select the one that maximizes profit.

+
+
+
+

26.7 Conclusion

+

We’ve now discussed a new machine learning goal —— clustering —— and explored two solutions:

+
    +
  • K-Means Clustering tries to optimize a loss function called inertia (no known algorithm to find the optimal answer in an efficient manner)
  • +
  • Hierarchical Agglomerative Clustering builds clusters bottom-up by merging clusters “close” to each other, depending on the choice of linkage.
  • +
+

Our version of these algorithms required a hyperparameter \(K\). There are several ways to pick \(K\), including the elbow method, silhouette scores, and harnessing real-world metrics.

+

There are many machine learning problems, each of which can be addressed by many different solution techniques and evaluated with many different metrics for success or loss. Many techniques can be used to solve different problem types; for example, linear models can be used for both regression and classification.

+

We’ve only scratched the surface and haven’t discussed many important ideas, such as neural networks and deep learning. In the last lecture, we’ll provide some specific course recommendations on how to explore these topics further.

+ + + + + \ No newline at end of file diff --git a/docs/clustering/images/agg1.png b/docs/clustering/images/agg1.png new file mode 100644 index 000000000..d408c6745 Binary files /dev/null and b/docs/clustering/images/agg1.png differ diff --git a/docs/clustering/images/agg2.png b/docs/clustering/images/agg2.png new file mode 100644 index 000000000..f4c4b1f15 Binary files /dev/null and b/docs/clustering/images/agg2.png differ diff --git a/docs/clustering/images/agg3.png b/docs/clustering/images/agg3.png new file mode 100644 index 000000000..34c3a916f Binary files /dev/null and b/docs/clustering/images/agg3.png differ diff --git a/docs/clustering/images/agg4.png b/docs/clustering/images/agg4.png new file mode 100644 index 000000000..b29437491 Binary files /dev/null and b/docs/clustering/images/agg4.png differ diff --git a/docs/clustering/images/agg5.png b/docs/clustering/images/agg5.png new file mode 100644 index 000000000..cef3813ba Binary files /dev/null and b/docs/clustering/images/agg5.png differ diff --git a/docs/clustering/images/agg6.png b/docs/clustering/images/agg6.png new file mode 100644 index 000000000..e6bd3539c Binary files /dev/null and b/docs/clustering/images/agg6.png differ diff --git a/docs/clustering/images/beforeandafter4.png b/docs/clustering/images/beforeandafter4.png new file mode 100644 index 000000000..fbd332a57 Binary files /dev/null and b/docs/clustering/images/beforeandafter4.png differ diff --git a/docs/clustering/images/blobs.png b/docs/clustering/images/blobs.png new file mode 100644 index 000000000..1fbc92303 Binary files /dev/null and b/docs/clustering/images/blobs.png differ diff --git a/docs/clustering/images/cluster_3.png b/docs/clustering/images/cluster_3.png new file mode 100644 index 000000000..2b07225c6 Binary files /dev/null and b/docs/clustering/images/cluster_3.png differ diff --git a/docs/clustering/images/cluster_class.png b/docs/clustering/images/cluster_class.png new file mode 100644 index 000000000..abcee0170 Binary files /dev/null and b/docs/clustering/images/cluster_class.png differ diff --git a/docs/clustering/images/cluster_iter1.png b/docs/clustering/images/cluster_iter1.png new file mode 100644 index 000000000..850d774a9 Binary files /dev/null and b/docs/clustering/images/cluster_iter1.png differ diff --git a/docs/clustering/images/cluster_iter5.png b/docs/clustering/images/cluster_iter5.png new file mode 100644 index 000000000..4da97d5ac Binary files /dev/null and b/docs/clustering/images/cluster_iter5.png differ diff --git a/docs/clustering/images/clustering_comparison.png b/docs/clustering/images/clustering_comparison.png new file mode 100644 index 000000000..2f824a073 Binary files /dev/null and b/docs/clustering/images/clustering_comparison.png differ diff --git a/docs/clustering/images/clusters_ex1.png b/docs/clustering/images/clusters_ex1.png new file mode 100644 index 000000000..b5579846b Binary files /dev/null and b/docs/clustering/images/clusters_ex1.png differ diff --git a/docs/clustering/images/components.png b/docs/clustering/images/components.png new file mode 100644 index 000000000..0ee3ae35f Binary files /dev/null and b/docs/clustering/images/components.png differ diff --git a/docs/clustering/images/dendro_1.png b/docs/clustering/images/dendro_1.png new file mode 100644 index 000000000..6daabecab Binary files /dev/null and b/docs/clustering/images/dendro_1.png differ diff --git a/docs/clustering/images/dendro_2.png b/docs/clustering/images/dendro_2.png new file mode 100644 index 000000000..8c9ef8089 Binary files /dev/null 
and b/docs/clustering/images/dendro_2.png differ diff --git a/docs/clustering/images/elbow.png b/docs/clustering/images/elbow.png new file mode 100644 index 000000000..a73b38667 Binary files /dev/null and b/docs/clustering/images/elbow.png differ diff --git a/docs/clustering/images/four_cluster.png b/docs/clustering/images/four_cluster.png new file mode 100644 index 000000000..adff35d06 Binary files /dev/null and b/docs/clustering/images/four_cluster.png differ diff --git a/docs/clustering/images/genes.png b/docs/clustering/images/genes.png new file mode 100644 index 000000000..f948b6009 Binary files /dev/null and b/docs/clustering/images/genes.png differ diff --git a/docs/clustering/images/grading.png b/docs/clustering/images/grading.png new file mode 100644 index 000000000..1eab30ae1 Binary files /dev/null and b/docs/clustering/images/grading.png differ diff --git a/docs/clustering/images/hierarchical_approach.png b/docs/clustering/images/hierarchical_approach.png new file mode 100644 index 000000000..f0c915a4f Binary files /dev/null and b/docs/clustering/images/hierarchical_approach.png differ diff --git a/docs/clustering/images/high_low.png b/docs/clustering/images/high_low.png new file mode 100644 index 000000000..346d62f14 Binary files /dev/null and b/docs/clustering/images/high_low.png differ diff --git a/docs/clustering/images/inertia_distortion.png b/docs/clustering/images/inertia_distortion.png new file mode 100644 index 000000000..9d0b750d0 Binary files /dev/null and b/docs/clustering/images/inertia_distortion.png differ diff --git a/docs/clustering/images/init_cluster.png b/docs/clustering/images/init_cluster.png new file mode 100644 index 000000000..abf040b97 Binary files /dev/null and b/docs/clustering/images/init_cluster.png differ diff --git a/docs/clustering/images/linkage.png b/docs/clustering/images/linkage.png new file mode 100644 index 000000000..a0e972586 Binary files /dev/null and b/docs/clustering/images/linkage.png differ diff --git a/docs/clustering/images/ml_taxonomy.png b/docs/clustering/images/ml_taxonomy.png new file mode 100644 index 000000000..bbcaf3a2d Binary files /dev/null and b/docs/clustering/images/ml_taxonomy.png differ diff --git a/docs/clustering/images/outline-ex3.png b/docs/clustering/images/outline-ex3.png new file mode 100644 index 000000000..06e914d40 Binary files /dev/null and b/docs/clustering/images/outline-ex3.png differ diff --git a/docs/clustering/images/random_outputs.png b/docs/clustering/images/random_outputs.png new file mode 100644 index 000000000..0d133b46b Binary files /dev/null and b/docs/clustering/images/random_outputs.png differ diff --git a/docs/clustering/images/silhouette_2.png b/docs/clustering/images/silhouette_2.png new file mode 100644 index 000000000..3aa2d6bab Binary files /dev/null and b/docs/clustering/images/silhouette_2.png differ diff --git a/docs/clustering/images/silhouette_scores.png b/docs/clustering/images/silhouette_scores.png new file mode 100644 index 000000000..fc54a4d5d Binary files /dev/null and b/docs/clustering/images/silhouette_scores.png differ diff --git a/docs/clustering/images/states.png b/docs/clustering/images/states.png new file mode 100644 index 000000000..20e60c9e9 Binary files /dev/null and b/docs/clustering/images/states.png differ diff --git a/docs/clustering/images/taxonomy.png b/docs/clustering/images/taxonomy.png new file mode 100644 index 000000000..9f0ae5903 Binary files /dev/null and b/docs/clustering/images/taxonomy.png differ diff --git 
a/docs/constant_model_loss_transformations/images/bulge.png b/docs/constant_model_loss_transformations/images/bulge.png new file mode 100644 index 000000000..aee1d745e Binary files /dev/null and b/docs/constant_model_loss_transformations/images/bulge.png differ diff --git a/docs/constant_model_loss_transformations/images/constant_loss_surface.png b/docs/constant_model_loss_transformations/images/constant_loss_surface.png new file mode 100644 index 000000000..1cd733bd8 Binary files /dev/null and b/docs/constant_model_loss_transformations/images/constant_loss_surface.png differ diff --git a/docs/constant_model_loss_transformations/images/dugong_rug.png b/docs/constant_model_loss_transformations/images/dugong_rug.png new file mode 100644 index 000000000..9c5e9df67 Binary files /dev/null and b/docs/constant_model_loss_transformations/images/dugong_rug.png differ diff --git a/docs/constant_model_loss_transformations/images/dugong_scatter.png b/docs/constant_model_loss_transformations/images/dugong_scatter.png new file mode 100644 index 000000000..4bf3a8b06 Binary files /dev/null and b/docs/constant_model_loss_transformations/images/dugong_scatter.png differ diff --git a/docs/constant_model_loss_transformations/images/error.png b/docs/constant_model_loss_transformations/images/error.png new file mode 100644 index 000000000..f37677abb Binary files /dev/null and b/docs/constant_model_loss_transformations/images/error.png differ diff --git a/docs/constant_model_loss_transformations/images/mae_loss_infinite.png b/docs/constant_model_loss_transformations/images/mae_loss_infinite.png new file mode 100644 index 000000000..2bd5e9e07 Binary files /dev/null and b/docs/constant_model_loss_transformations/images/mae_loss_infinite.png differ diff --git a/docs/constant_model_loss_transformations/images/mse_loss_26.png b/docs/constant_model_loss_transformations/images/mse_loss_26.png new file mode 100644 index 000000000..7c39cc767 Binary files /dev/null and b/docs/constant_model_loss_transformations/images/mse_loss_26.png differ diff --git a/docs/constant_model_loss_transformations/images/outliers.png b/docs/constant_model_loss_transformations/images/outliers.png new file mode 100644 index 000000000..61f295ddb Binary files /dev/null and b/docs/constant_model_loss_transformations/images/outliers.png differ diff --git a/docs/constant_model_loss_transformations/images/slr_loss_surface.png b/docs/constant_model_loss_transformations/images/slr_loss_surface.png new file mode 100644 index 000000000..66320e5d9 Binary files /dev/null and b/docs/constant_model_loss_transformations/images/slr_loss_surface.png differ diff --git a/docs/constant_model_loss_transformations/images/slr_modeling.png b/docs/constant_model_loss_transformations/images/slr_modeling.png new file mode 100644 index 000000000..c51158f5f Binary files /dev/null and b/docs/constant_model_loss_transformations/images/slr_modeling.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations.html b/docs/constant_model_loss_transformations/loss_transformations.html new file mode 100644 index 000000000..27be73416 --- /dev/null +++ b/docs/constant_model_loss_transformations/loss_transformations.html @@ -0,0 +1,2620 @@ + + + + + + + + + +11  Constant Model, Loss, and Transformations – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

11  Constant Model, Loss, and Transformations

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
  • Derive the optimal model parameters for the constant model under MSE and MAE cost functions.
  • Evaluate the differences between MSE and MAE risk.
  • Understand the need for linearization of variables and apply the Tukey-Mosteller bulge diagram for transformations.
+
+
+
+

Last time, we introduced the modeling process. We set up a framework to predict target variables as functions of our features, following a set workflow:

+
  1. Choose a model - how should we represent the world?
  2. Choose a loss function - how do we quantify prediction error?
  3. Fit the model - how do we choose the best parameters of our model given our data?
  4. Evaluate model performance - how do we evaluate whether this process gave rise to a good model?
+

To illustrate this process, we derived the optimal model parameters under simple linear regression (SLR) with mean squared error (MSE) as the cost function. A summary of the SLR modeling process is shown below:

+
+

modeling

+
+

In this lecture, we’ll dive deeper into step 4 - evaluating model performance - using SLR as an example. We’ll also continue familiarizing ourselves with the modeling process by finding the best model parameters under a new model, the constant model, and we’ll test out two different loss functions to understand how our choice of loss influences model design. Later on, we’ll consider what happens when a linear model isn’t the best choice to capture trends in our data and what solutions there are to create better models.

+

Before we get into Step 4, let’s quickly review some important terminology.

+
+

11.0.1 Prediction vs. Estimation

+

The terms prediction and estimation are often used somewhat interchangeably, but there is a subtle difference between them. Estimation is the task of using data to calculate model parameters. Prediction is the task of using a model to predict outputs for unseen data. In our simple linear regression model,

+

\[\hat{y} = \hat{\theta_0} + \hat{\theta_1} x\]

+

we estimate the parameters by minimizing average loss; then, we predict using these estimations. Least Squares Estimation is when we choose the parameters that minimize MSE.

+
+
+

11.1 Step 4: Evaluating the SLR Model

+

Now that we’ve explored the mathematics behind (1) choosing a model, (2) choosing a loss function, and (3) fitting the model, we’re left with one final question – how “good” are the predictions made by this “best” fitted model? To determine this, we can:

+
  1. Visualize data and compute statistics:
    • Plot the original data.
    • Compute each column’s mean and standard deviation. If the mean and standard deviation of our predictions are close to those of the original observed \(y_i\)’s, we might be inclined to say that our model has done well.
    • (If we’re fitting a linear model) Compute the correlation \(r\). A large magnitude for the correlation coefficient between the feature and response variables could also indicate that our model has done well.
  2. Performance metrics:
    • We can take the Root Mean Squared Error (RMSE).
      • It’s the square root of the mean squared error (MSE), which is the average loss that we’ve been minimizing to determine optimal model parameters.
      • RMSE is in the same units as \(y\).
      • A lower RMSE indicates more “accurate” predictions, as we have a lower “average loss” across the data.

    \[\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2}\]

  3. Visualization:
    • Look at the residual plot of \(e_i = y_i - \hat{y_i}\) to visualize the difference between actual and predicted values. A good residual plot should show no pattern between the input/features \(x_i\) and the residual values \(e_i\).
+

To illustrate this process, let’s take a look at Anscombe’s quartet.

+
+

11.1.1 Four Mysterious Datasets (Anscombe’s quartet)

+

Let’s take a look at four different datasets.

+
+
+Code +
import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+%matplotlib inline
+import seaborn as sns
+import itertools
+from mpl_toolkits.mplot3d import Axes3D
+
+
+
+
+Code +
# Big font helper
+def adjust_fontsize(size=None):
+    SMALL_SIZE = 8
+    MEDIUM_SIZE = 10
+    BIGGER_SIZE = 12
+    if size != None:
+        SMALL_SIZE = MEDIUM_SIZE = BIGGER_SIZE = size
+
+    plt.rc("font", size=SMALL_SIZE)  # controls default text sizes
+    plt.rc("axes", titlesize=SMALL_SIZE)  # fontsize of the axes title
+    plt.rc("axes", labelsize=MEDIUM_SIZE)  # fontsize of the x and y labels
+    plt.rc("xtick", labelsize=SMALL_SIZE)  # fontsize of the tick labels
+    plt.rc("ytick", labelsize=SMALL_SIZE)  # fontsize of the tick labels
+    plt.rc("legend", fontsize=SMALL_SIZE)  # legend fontsize
+    plt.rc("figure", titlesize=BIGGER_SIZE)  # fontsize of the figure title
+
+
+# Helper functions
+def standard_units(x):
+    return (x - np.mean(x)) / np.std(x)
+
+
+def correlation(x, y):
+    return np.mean(standard_units(x) * standard_units(y))
+
+
+def slope(x, y):
+    return correlation(x, y) * np.std(y) / np.std(x)
+
+
+def intercept(x, y):
+    return np.mean(y) - slope(x, y) * np.mean(x)
+
+
+def fit_least_squares(x, y):
+    theta_0 = intercept(x, y)
+    theta_1 = slope(x, y)
+    return theta_0, theta_1
+
+
+def predict(x, theta_0, theta_1):
+    return theta_0 + theta_1 * x
+
+
+def compute_mse(y, yhat):
+    return np.mean((y - yhat) ** 2)
+
+
+plt.style.use("default")  # Revert style to default mpl
+
+
+
+
+Code +
plt.style.use("default")  # Revert style to default mpl
+NO_VIZ, RESID, RESID_SCATTER = range(3)
+
+
+def least_squares_evaluation(x, y, visualize=NO_VIZ):
+    # statistics
+    print(f"x_mean : {np.mean(x):.2f}, y_mean : {np.mean(y):.2f}")
+    print(f"x_stdev: {np.std(x):.2f}, y_stdev: {np.std(y):.2f}")
+    print(f"r = Correlation(x, y): {correlation(x, y):.3f}")
+
+    # Performance metrics
+    ahat, bhat = fit_least_squares(x, y)
+    yhat = predict(x, ahat, bhat)
+    print(f"theta_0: {ahat:.2f}, theta_1: {bhat:.2f}")  # note: "\t" inside an f-string is a tab escape, so we avoid it here
+    print(f"RMSE: {np.sqrt(compute_mse(y, yhat)):.3f}")
+
+    # visualization
+    fig, ax_resid = None, None
+    if visualize == RESID_SCATTER:
+        fig, axs = plt.subplots(1, 2, figsize=(8, 3))
+        axs[0].scatter(x, y)
+        axs[0].plot(x, yhat)
+        axs[0].set_title("LS fit")
+        ax_resid = axs[1]
+    elif visualize == RESID:
+        fig = plt.figure(figsize=(4, 3))
+        ax_resid = plt.gca()
+
+    if ax_resid is not None:
+        ax_resid.scatter(x, y - yhat, color="red")
+        ax_resid.plot([4, 14], [0, 0], color="black")
+        ax_resid.set_title("Residuals")
+
+    return fig
+
+
+
+
+Code +
# Load in four different datasets: I, II, III, IV
+x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
+y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
+y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]
+y3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]
+x4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]
+y4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]
+
+anscombe = {
+    "I": pd.DataFrame(list(zip(x, y1)), columns=["x", "y"]),
+    "II": pd.DataFrame(list(zip(x, y2)), columns=["x", "y"]),
+    "III": pd.DataFrame(list(zip(x, y3)), columns=["x", "y"]),
+    "IV": pd.DataFrame(list(zip(x4, y4)), columns=["x", "y"]),
+}
+
+# Plot the scatter plot and line of best fit
+fig, axs = plt.subplots(2, 2, figsize=(10, 10))
+
+for i, dataset in enumerate(["I", "II", "III", "IV"]):
+    ans = anscombe[dataset]
+    x, y = ans["x"], ans["y"]
+    ahat, bhat = fit_least_squares(x, y)
+    yhat = predict(x, ahat, bhat)
+    axs[i // 2, i % 2].scatter(x, y, alpha=0.6, color="red")  # plot the x, y points
+    axs[i // 2, i % 2].plot(x, yhat)  # plot the line of best fit
+    axs[i // 2, i % 2].set_xlabel(f"$x_{i+1}$")
+    axs[i // 2, i % 2].set_ylabel(f"$y_{i+1}$")
+    axs[i // 2, i % 2].set_title(f"Dataset {dataset}")
+
+plt.show()
+
+
+
+
+

+
+
+
+
+

While these four sets of datapoints look very different, they actually all have identical means \(\bar x\), \(\bar y\), standard deviations \(\sigma_x\), \(\sigma_y\), correlation \(r\), and RMSE! If we only look at these statistics, we would probably be inclined to say that these datasets are similar.

+
+
+Code +
for dataset in ["I", "II", "III", "IV"]:
+    print(f">>> Dataset {dataset}:")
+    ans = anscombe[dataset]
+    fig = least_squares_evaluation(ans["x"], ans["y"], visualize=NO_VIZ)
+    print()
+    print()
+
+
+
>>> Dataset I:
+x_mean : 9.00, y_mean : 7.50
+x_stdev: 3.16, y_stdev: 1.94
+r = Correlation(x, y): 0.816
+theta_0: 3.00, theta_1: 0.50
+RMSE: 1.119
+
+
+>>> Dataset II:
+x_mean : 9.00, y_mean : 7.50
+x_stdev: 3.16, y_stdev: 1.94
+r = Correlation(x, y): 0.816
+theta_0: 3.00, theta_1: 0.50
+RMSE: 1.119
+
+
+>>> Dataset III:
+x_mean : 9.00, y_mean : 7.50
+x_stdev: 3.16, y_stdev: 1.94
+r = Correlation(x, y): 0.816
+theta_0: 3.00, theta_1: 0.50
+RMSE: 1.118
+
+
+>>> Dataset IV:
+x_mean : 9.00, y_mean : 7.50
+x_stdev: 3.16, y_stdev: 1.94
+r = Correlation(x, y): 0.817
+theta_0: 3.00, theta_1: 0.50
+RMSE: 1.118
+
+
+
+
+

We may also wish to visualize the model’s residuals, defined as the difference between the observed and predicted \(y_i\) value (\(e_i = y_i - \hat{y}_i\)). This gives a high-level view of how “off” each prediction is from the true observed value. Recall that you explored this concept in Data 8: a good regression fit should display no clear pattern in its plot of residuals. The residual plots for Anscombe’s quartet are displayed below. Note how only the first plot shows no clear pattern to the magnitude of residuals. This is an indication that SLR is not the best choice of model for the remaining three sets of points.

+ +
+
+Code +
# Residual visualization
+fig, axs = plt.subplots(2, 2, figsize=(10, 10))
+
+for i, dataset in enumerate(["I", "II", "III", "IV"]):
+    ans = anscombe[dataset]
+    x, y = ans["x"], ans["y"]
+    ahat, bhat = fit_least_squares(x, y)
+    yhat = predict(x, ahat, bhat)
+    axs[i // 2, i % 2].scatter(
+        x, y - yhat, alpha=0.6, color="red"
+    )  # plot the x, y points
+    axs[i // 2, i % 2].plot(
+        x, np.zeros_like(x), color="black"
+    )  # plot the residual line
+    axs[i // 2, i % 2].set_xlabel(f"$x_{i+1}$")
+    axs[i // 2, i % 2].set_ylabel(f"$e_{i+1}$")
+    axs[i // 2, i % 2].set_title(f"Dataset {dataset} Residuals")
+
+plt.show()
+
+
+
+
+

+
+
+
+
+
+
+
+

11.2 Constant Model + MSE

+

Now, we’ll shift from the SLR model to the constant model, also known as a summary statistic. The constant model is slightly different from the simple linear regression model we’ve explored previously. Rather than generating predictions from an inputted feature variable, the constant model always predicts the same constant number. This ignores any relationships between variables. For example, let’s say we want to predict the number of drinks a boba shop sells in a day. Boba tea sales likely depend on the time of year, the weather, how the customers feel, whether school is in session, etc., but the constant model ignores these factors in favor of a simpler model. In other words, the constant model employs a simplifying assumption.

+

It is also a parametric, statistical model:

+

\[\hat{y} = \theta_0\]

+

\(\theta_0\) is the parameter of the constant model, just as \(\theta_0\) and \(\theta_1\) were the parameters in SLR. Since our parameter \(\theta_0\) is 1-dimensional (\(\theta_0 \in \mathbb{R}\)), we now have no input to our model and will always predict \(\hat{y} = \theta_0\).
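As a toy sketch (not part of the original notes, names are illustrative), the constant model’s prediction function ignores its input entirely:

import numpy as np

def predict_constant(theta_0, n):
    # Return the same prediction, theta_0, for each of the n observations
    return np.full(n, theta_0)

predict_constant(25.0, 5)   # array([25., 25., 25., 25., 25.])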

+
+

11.2.1 Deriving the optimal \(\theta_0\)

+

Our task now is to determine what value of \(\theta_0\) best represents the optimal model – in other words, what number should we guess each time to have the lowest possible average loss on our data?

+

Like before, we’ll use Mean Squared Error (MSE). Recall that the MSE is average squared loss (L2 loss) over the data \(D = \{y_1, y_2, ..., y_n\}\).

+

\[\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \hat{y_i})^2 \]

+

Our modeling process now looks like this:

+
  1. Choose a model: constant model
  2. Choose a loss function: L2 loss
  3. Fit the model
  4. Evaluate model performance
+

Given the constant model \(\hat{y} = \theta_0\), we can rewrite the MSE equation as

+

\[\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2 \]

+

We can fit the model by finding the optimal \(\hat{\theta_0}\) that minimizes the MSE using a calculus approach.

+
  1. Differentiate with respect to \(\theta_0\):
+

\[ +\begin{align} +\frac{d}{d\theta_0}\text{R}(\theta) & = \frac{d}{d\theta_0}(\frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2) +\\ &= \frac{1}{n}\sum^{n}_{i=1} \frac{d}{d\theta_0} (y_i - \theta_0)^2 \quad \quad \text{a derivative of sums is a sum of derivatives} +\\ &= \frac{1}{n}\sum^{n}_{i=1} 2 (y_i - \theta_0) (-1) \quad \quad \text{chain rule} +\\ &= {\frac{-2}{n}}\sum^{n}_{i=1} (y_i - \theta_0) \quad \quad \text{simplify constants} +\end{align} +\]

+
  2. Set the derivative equation equal to 0:

    \[ 0 = {\frac{-2}{n}}\sum^{n}_{i=1} (y_i - \hat{\theta_0}) \]

  3. Solve for \(\hat{\theta_0}\):
+

\[ +\begin{align} +0 &= {\frac{-2}{n}}\sum^{n}_{i=1} (y_i - \hat{\theta_0}) +\\ &= \sum^{n}_{i=1} (y_i - \hat{\theta_0}) \quad \quad \text{divide both sides by } \frac{-2}{n} +\\ &= \left(\sum^{n}_{i=1} y_i\right) - \left(\sum^{n}_{i=1} \hat{\theta_0}\right) \quad \quad \text{separate sums} +\\ &= \left(\sum^{n}_{i=1} y_i\right) - (n \cdot \hat{\theta_0}) \quad \quad \text{c + c + … + c = nc} +\\ n \cdot \hat{\theta_0} &= \sum^{n}_{i=1} y_i +\\ \hat{\theta_0} &= \frac{1}{n} \sum^{n}_{i=1} y_i +\\ \hat{\theta_0} &= \bar{y} +\end{align} +\]

+

Let’s take a moment to interpret this result. \(\hat{\theta_0} = \bar{y}\) is the optimal parameter for constant model + MSE. It holds true regardless of what data sample you have, and it provides some formal reasoning as to why the mean is such a common summary statistic.

+

Our optimal model parameter is the value of the parameter that minimizes the cost function. This minimum value of the cost function can be expressed:

+

\[R(\hat{\theta_0}) = \min_{\theta_0} R(\theta_0)\]

+

To restate the above in plain English: we are looking at the value of the cost function when it takes the best parameter as input. This optimal model parameter, \(\hat{\theta_0}\), is the value of \(\theta_0\) that minimizes the cost \(R\).

+

For modeling purposes, we care less about the minimum value of cost, \(R(\hat{\theta_0})\), and more about the value of \(\theta\) that results in this lowest average loss. In other words, we concern ourselves with finding the best parameter value such that:

+

\[\hat{\theta} = \underset{\theta}{\operatorname{\arg\min}}\:R(\theta)\]

+

That is, we want to find the argument \(\theta\) that minimizes the cost function.

+
+
+

11.2.2 Comparing Two Different Models, Both Fit with MSE

+

Now that we’ve explored the constant model with an L2 loss, we can compare it to the SLR model that we learned last lecture. Consider the dataset below, which contains information about the ages and lengths of dugongs. Suppose we wanted to predict dugong ages:

+ +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Constant ModelSimple Linear Regression
model\(\hat{y} = \theta_0\)\(\hat{y} = \theta_0 + \theta_1 x\)
data sample of ages \(D = \{y_1, y_2, ..., y_n\}\) sample of lengths and ages \(D = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}\)
dimensions\(\hat{\theta_0}\) is 1-D\(\hat{\theta} = [\hat{\theta_0}, \hat{\theta_1}]\) is 2-D
loss surface2-D 3-D
loss model \(\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2\) \(\hat{R}(\theta_0, \theta_1) = \frac{1}{n}\sum^{n}_{i=1} (y_i - (\theta_0 + \theta_1 x_i))^2\)
RMSE7.724.31
predictions visualizedrug plot scatter plot
+

(Notice how the points for our SLR scatter plot are visually not a great linear fit. We’ll come back to this).

+

The code for generating the graphs and models is included below, but we won’t go over it in too much depth.

+
+
+Code +
dugongs = pd.read_csv("data/dugongs.csv")
+data_constant = dugongs["Age"]
+data_linear = dugongs[["Length", "Age"]]
+
+
+
+
+Code +
# Constant Model + MSE
+plt.style.use('default') # Revert style to default mpl
+adjust_fontsize(size=16)
+%matplotlib inline
+
+def mse_constant(theta, data):
+    return np.mean(np.array([(y_obs - theta) ** 2 for y_obs in data]), axis=0)
+
+thetas = np.linspace(-20, 42, 1000)
+l2_loss_thetas = mse_constant(thetas, data_constant)
+
+# Plotting the loss surface
+plt.plot(thetas, l2_loss_thetas)
+plt.xlabel(r'$\theta_0$')
+plt.ylabel(r'MSE')
+
+# Optimal point
+thetahat = np.mean(data_constant)
+plt.scatter([thetahat], [mse_constant(thetahat, data_constant)], s=50, label = r"$\hat{\theta}_0$")
+plt.legend();
+# plt.show()
+
+
+
+
+

+
+
+
+
+
+
+Code +
# SLR + MSE
+def mse_linear(theta_0, theta_1, data_linear):
+    data_x, data_y = data_linear.iloc[:, 0], data_linear.iloc[:, 1]
+    return np.mean(
+        np.array([(y - (theta_0 + theta_1 * x)) ** 2 for x, y in zip(data_x, data_y)]),
+        axis=0,
+    )
+
+
+# plotting the loss surface
+theta_0_values = np.linspace(-80, 20, 80)
+theta_1_values = np.linspace(-10, 30, 80)
+mse_values = np.array(
+    [[mse_linear(x, y, data_linear) for x in theta_0_values] for y in theta_1_values]
+)
+
+# Optimal point
+data_x, data_y = data_linear.iloc[:, 0], data_linear.iloc[:, 1]
+theta_1_hat = np.corrcoef(data_x, data_y)[0, 1] * np.std(data_y) / np.std(data_x)
+theta_0_hat = np.mean(data_y) - theta_1_hat * np.mean(data_x)
+
+# Create the 3D plot
+fig = plt.figure(figsize=(7, 5))
+ax = fig.add_subplot(111, projection="3d")
+
+X, Y = np.meshgrid(theta_0_values, theta_1_values)
+surf = ax.plot_surface(
+    X, Y, mse_values, cmap="viridis", alpha=0.6
+)  # Use alpha to make it slightly transparent
+
+# Scatter point using matplotlib
+sc = ax.scatter(
+    [theta_0_hat],
+    [theta_1_hat],
+    [mse_linear(theta_0_hat, theta_1_hat, data_linear)],
+    marker="o",
+    color="red",
+    s=100,
+    label="theta hat",
+)
+
+# Create a colorbar
+cbar = fig.colorbar(surf, ax=ax, shrink=0.5, aspect=10)
+cbar.set_label("Cost Value")
+
+ax.set_title("MSE for different $\\theta_0, \\theta_1$")
+ax.set_xlabel("$\\theta_0$")
+ax.set_ylabel("$\\theta_1$")
+ax.set_zlabel("MSE")
+
+# plt.show()
+
+
+
Text(0.5, 0, 'MSE')
+
+
+
+
+

+
+
+
+
+
+
+Code +
# Predictions
+yobs = data_linear["Age"]  # The true observations y
+xs = data_linear["Length"]  # Needed for linear predictions
+n = len(yobs)  # Predictions
+
+yhats_constant = [thetahat for i in range(n)]  # Not used, but food for thought
+yhats_linear = [theta_0_hat + theta_1_hat * x for x in xs]
+
+
+
+
+Code +
# Constant Model Rug Plot
+# In case we're in a weird style state
+sns.set_theme()
+adjust_fontsize(size=16)
+%matplotlib inline
+
+fig = plt.figure(figsize=(8, 1.5))
+sns.rugplot(yobs, height=0.25, lw=2) ;
+plt.axvline(thetahat, color='red', lw=4, label=r"$\hat{\theta}_0$");
+plt.legend()
+plt.yticks([]);
+# plt.show()
+
+
+
+
+

+
+
+
+
+
+
+Code +
# SLR model scatter plot 
+# In case we're in a weird style state
+sns.set_theme()
+adjust_fontsize(size=16)
+%matplotlib inline
+
+sns.scatterplot(x=xs, y=yobs)
+plt.plot(xs, yhats_linear, color='red', lw=4);
+# plt.savefig('dugong_line.png', bbox_inches = 'tight');
+# plt.show()
+
+
+
+
+

+
+
+
+
+

Interpreting the RMSE (Root Mean Squared Error):

+
Because the constant model’s error is HIGHER than the linear model’s error, the constant model is WORSE than the linear model (at least for this metric).
+
+
+
+

11.3 Constant Model + MAE

+

We see now that changing the model used for prediction leads to a wildly different result for the optimal model parameter. What happens if we instead change the loss function used in model evaluation?

+

This time, we will consider the constant model with L1 (absolute loss) as the loss function. This means that the average loss will be expressed as the Mean Absolute Error (MAE).

+
  1. Choose a model: constant model
  2. Choose a loss function: L1 loss
  3. Fit the model
  4. Evaluate model performance
+
+

11.3.1 Deriving the optimal \(\theta_0\)

+

Recall that the MAE is average absolute loss (L1 loss) over the data \(D = \{y_1, y_2, ..., y_n\}\).

+

\[\hat{R}(\theta_0) = \frac{1}{n}\sum^{n}_{i=1} |y_i - \hat{y_i}| \]

+

Given the constant model \(\hat{y} = \theta_0\), we can write the MAE as:

+

\[\hat{R}(\theta_0) = \frac{1}{n}\sum^{n}_{i=1} |y_i - \theta_0| \]

+

To fit the model, we find the optimal parameter value \(\hat{\theta_0}\) that minimizes the MAE by differentiating using a calculus approach:

+
  1. Differentiate with respect to \(\theta_0\):
+

\[ +\begin{align} +\hat{R}(\theta_0) &= \frac{1}{n}\sum^{n}_{i=1} |y_i - \theta_0| \\ +\frac{d}{d\theta_0} R(\theta_0) &= \frac{d}{d\theta_0} \left(\frac{1}{n} \sum^{n}_{i=1} |y_i - \theta_0| \right) \\ +&= \frac{1}{n} \sum^{n}_{i=1} \frac{d}{d\theta_0} |y_i - \theta_0| +\end{align} +\]

+
  • Here, we seem to have run into a problem: the derivative of an absolute value is undefined when the argument is 0 (i.e., when \(y_i = \theta_0\)). For now, we’ll ignore this issue. It turns out that disregarding this case doesn’t influence our final result.
  • To take the derivative, consider two cases. When \(\theta_0\) is less than or equal to \(y_i\), the term \(y_i - \theta_0\) will be positive and the absolute value has no impact. When \(\theta_0\) is greater than \(y_i\), the term \(y_i - \theta_0\) will be negative. Applying the absolute value will convert this to a positive value, which we can express by saying \(-(y_i - \theta_0) = \theta_0 - y_i\).
+

\[|y_i - \theta_0| = \begin{cases} y_i - \theta_0 \quad \text{ if } \theta_0 \le y_i \\ \theta_0 - y_i \quad \text{if }\theta_0 > y_i \end{cases}\]

+
  • Taking derivatives:
+

\[\frac{d}{d\theta_0} |y_i - \theta_0| = \begin{cases} \frac{d}{d\theta_0} (y_i - \theta_0) = -1 \quad \text{if }\theta_0 < y_i \\ \frac{d}{d\theta_0} (\theta_0 - y_i) = 1 \quad \text{if }\theta_0 > y_i \end{cases}\]

+
  • This means that we obtain a different value for the derivative for data points where \(\theta_0 < y_i\) and where \(\theta_0 > y_i\). We can summarize this by saying:
+

\[ +\frac{d}{d\theta_0} R(\theta_0) = \frac{1}{n} \sum^{n}_{i=1} \frac{d}{d\theta_0} |y_i - \theta_0| \\ += \frac{1}{n} \left[\sum_{\theta_0 < y_i} (-1) + \sum_{\theta_0 > y_i} (+1) \right] +\]

+
  • In other words, we take the sum of values for \(i = 1, 2, ..., n\):
    • \(-1\) if our observation \(y_i\) is greater than our prediction \(\hat{\theta_0}\)
    • \(+1\) if our observation \(y_i\) is smaller than our prediction \(\hat{\theta_0}\)
+
  2. Set the derivative equation equal to 0: \[ 0 = \frac{1}{n}\sum_{\hat{\theta_0} < y_i} (-1) + \frac{1}{n}\sum_{\hat{\theta_0} > y_i} (+1) \]

  3. Solve for \(\hat{\theta_0}\): \[ 0 = -\frac{1}{n}\sum_{\hat{\theta_0} < y_i} (1) + \frac{1}{n}\sum_{\hat{\theta_0} > y_i} (1)\]
+

\[\sum_{\hat{\theta_0} < y_i} (1) = \sum_{\hat{\theta_0} > y_i} (1) \]

+

Thus, the constant model parameter \(\theta = \hat{\theta_0}\) that minimizes MAE must satisfy:

+

\[ \sum_{\hat{\theta_0} < y_i} (1) = \sum_{\hat{\theta_0} > y_i} (1) \]

+

In other words, the number of observations greater than \(\hat{\theta_0}\) must be equal to the number of observations less than \(\hat{\theta_0}\); there must be an equal number of points on either side of \(\hat{\theta_0}\). This is the definition of the median, so our optimal value is \[ \hat{\theta_0} = \text{median}(y) \]

+
+
+
+

11.4 Summary: Loss Optimization, Calculus, and Critical Points

+

First, define the objective function as average loss.

+
  • Plug in L1 or L2 loss.
  • Plug in the model so that the resulting expression is a function of \(\theta\).
+

Then, find the minimum of the objective function:

+
  1. Differentiate with respect to \(\theta\).
  2. Set equal to 0.
  3. Solve for \(\hat{\theta}\).
  4. (If we have multiple parameters) repeat steps 1-3 with partial derivatives.
+

Recall critical points from calculus: \(R(\hat{\theta})\) could be a minimum, maximum, or saddle point!

+
  • We should technically also perform the second derivative test, i.e., show \(R''(\hat{\theta}) > 0\).
  • MSE has a property, convexity, that guarantees that \(R(\hat{\theta})\) is a global minimum.
  • The proof of convexity for MAE is beyond the scope of this course.
+
+
+

11.5 Comparing Loss Functions

+

We’ve now tried our hand at fitting a model under both MSE and MAE cost functions. How do the two results compare?

+

Let’s consider a dataset where each entry represents the number of drinks sold at a bubble tea store each day. We’ll fit a constant model to predict the number of drinks that will be sold tomorrow.

+
+
drinks = np.array([20, 21, 22, 29, 33])
+drinks
+
+
array([20, 21, 22, 29, 33])
+
+
+

From our derivations above, we know that the optimal model parameter under MSE cost is the mean of the dataset. Under MAE cost, the optimal parameter is the median of the dataset.

+
+
np.mean(drinks), np.median(drinks)
+
+
(np.float64(25.0), np.float64(22.0))
+
+
+

If we plot each empirical risk function across several possible values of \(\theta\), we find that each \(\hat{\theta}\) does indeed correspond to the lowest value of error:

+

error

+
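The code below is a hedged sketch of how such a comparison could be generated (it is not the notebook cell that produced the figure above, and the range of candidate \(\theta\) values is arbitrary):

# Plot MSE and MAE for the drinks data across candidate values of theta
theta_grid = np.linspace(10, 45, 500)
mse_vals = [np.mean((drinks - t) ** 2) for t in theta_grid]
mae_vals = [np.mean(np.abs(drinks - t)) for t in theta_grid]

fig, axs = plt.subplots(1, 2, figsize=(8, 3))
axs[0].plot(theta_grid, mse_vals)
axs[0].axvline(np.mean(drinks), color="red", linestyle="--", label=r"$\bar{y}$")
axs[0].set_title("MSE")
axs[0].legend()
axs[1].plot(theta_grid, mae_vals)
axs[1].axvline(np.median(drinks), color="red", linestyle="--", label="median")
axs[1].set_title("MAE")
axs[1].legend()
plt.show()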

Notice that the MSE above is a smooth function – it is differentiable at all points, making it easy to minimize using numerical methods. The MAE, in contrast, is not differentiable at each of its “kinks.” We’ll explore how the smoothness of the cost function can impact our ability to apply numerical optimization in a few weeks.

+

How do outliers affect each cost function? Imagine we add an outlying value of 1033 to the dataset, as in the cell below. The mean of the data increases substantially, while the median is nearly unaffected.

+
+
drinks_with_outlier = np.append(drinks, 1033)
+display(drinks_with_outlier)
+np.mean(drinks_with_outlier), np.median(drinks_with_outlier)
+
+
array([  20,   21,   22,   29,   33, 1033])
+
+
+
(np.float64(193.0), np.float64(25.5))
+
+
+

outliers

+

This means that under the MSE, the optimal model parameter \(\hat{\theta}\) is strongly affected by the presence of outliers. Under the MAE, the optimal parameter is not as influenced by outlying data. We can generalize this by saying that the MSE is sensitive to outliers, while the MAE is robust to outliers.

+

Let’s try another experiment. This time, we’ll add an additional, non-outlying datapoint to the data.

+
+
drinks_with_additional_observation = np.append(drinks, 35)
+drinks_with_additional_observation
+
+
array([20, 21, 22, 29, 33, 35])
+
+
+

When we again visualize the cost functions, we find that the MAE now plots a horizontal line between 22 and 29. This means that there are infinitely many optimal values for the model parameter: any value \(\hat{\theta} \in [22, 29]\) will minimize the MAE. In contrast, the MSE still has a single best value for \(\hat{\theta}\). In other words, the MSE has a unique solution for \(\hat{\theta}\); the MAE is not guaranteed to have a single unique solution.

+

+

To summarize our example,

+ +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
MSE (Mean Squared Loss)MAE (Mean Absolute Loss)
Loss Function\(\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2\)\(\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} |y_i - \theta_0|\)
Optimal \(\hat{\theta_0}\)\(\hat{\theta_0} = mean(y) = \bar{y}\)\(\hat{\theta_0} = median(y)\)
Loss Surface
ShapeSmooth - easy to minimize using numerical methods (in a few weeks)Piecewise - at each of the “kinks,” it’s not differentiable. Harder to minimize.
OutliersSensitive to outliers (since they change mean substantially). Sensitivity also depends on the dataset size.More robust to outliers.
\(\hat{\theta_0}\) UniquenessUnique \(\hat{\theta_0}\)Infinitely many \(\hat{\theta_0}\)s
+
+
+

11.6 Transformations to fit Linear Models

+

At this point, we have an effective method of fitting models to predict linear relationships. Given a feature variable and target, we can apply our four-step process to find the optimal model parameters.

+

A key word above is linear. When we computed parameter estimates earlier, we assumed that \(x_i\) and \(y_i\) shared a roughly linear relationship. Data in the real world isn’t always so straightforward, but we can transform the data to try and obtain linearity.

+

The Tukey-Mosteller Bulge Diagram is a useful tool for summarizing what transformations can linearize the relationship between two variables. To determine what transformations might be appropriate, trace the shape of the “bulge” made by your data. Find the quadrant of the diagram that matches this bulge. The transformations shown on the vertical and horizontal axes of this quadrant can help improve the fit between the variables.

+

bulge

+

Note that:

+
  • There are multiple solutions. Some will fit better than others.
  • sqrt and log make a value “smaller.”
  • Raising to a power makes a value “bigger.”
  • Each of these transformations equates to increasing or decreasing the scale of an axis.
+

Other goals in addition to linearity are possible, for example, making data appear more symmetric. Linearity allows us to fit lines to the transformed data.

+

Let’s revisit our dugongs example. The lengths and ages are plotted below:

+
+
+Code +
# `corrcoef` computes the correlation coefficient between two variables
+# `std` finds the standard deviation
+x = dugongs["Length"]
+y = dugongs["Age"]
+r = np.corrcoef(x, y)[0, 1]
+theta_1 = r * np.std(y) / np.std(x)
+theta_0 = np.mean(y) - theta_1 * np.mean(x)
+
+fig, ax = plt.subplots(1, 2, dpi=200, figsize=(8, 3))
+ax[0].scatter(x, y)
+ax[0].set_xlabel("Length")
+ax[0].set_ylabel("Age")
+
+ax[1].scatter(x, y)
+ax[1].plot(x, theta_0 + theta_1 * x, "tab:red")
+ax[1].set_xlabel("Length")
+ax[1].set_ylabel("Age")
+
+
+
Text(0, 0.5, 'Age')
+
+
+
+
+

+
+
+
+
+

Looking at the plot on the left, we see that there is a slight curvature to the data points. Plotting the SLR curve on the right results in a poor fit.

+

For SLR to perform well, we’d like there to be a rough linear trend relating "Age" and "Length". What is making the raw data deviate from a linear relationship? Notice that the data points with "Length" greater than 2.6 have disproportionately high values of "Age" relative to the rest of the data. If we could manipulate these data points to have lower "Age" values, we’d “shift” these points downwards and reduce the curvature in the data. Applying a logarithmic transformation to \(y_i\) (that is, taking \(\log(\) "Age" \()\) ) would achieve just that.

+

An important word on \(\log\): in Data 100 (and most upper-division STEM courses), \(\log\) denotes the natural logarithm with base \(e\). The base-10 logarithm, where relevant, is indicated by \(\log_{10}\).

+
+
+Code +
z = np.log(y)
+
+r = np.corrcoef(x, z)[0, 1]
+theta_1 = r * np.std(z) / np.std(x)
+theta_0 = np.mean(z) - theta_1 * np.mean(x)
+
+fig, ax = plt.subplots(1, 2, dpi=200, figsize=(8, 3))
+ax[0].scatter(x, z)
+ax[0].set_xlabel("Length")
+ax[0].set_ylabel(r"$\log{(Age)}$")
+
+ax[1].scatter(x, z)
+ax[1].plot(x, theta_0 + theta_1 * x, "tab:red")
+ax[1].set_xlabel("Length")
+ax[1].set_ylabel(r"$\log{(Age)}$")
+
+plt.subplots_adjust(wspace=0.3)
+
+
+
+
+

+
+
+
+
+

Our SLR fit looks a lot better! We now have a new target variable: the SLR model is now trying to predict the log of "Age", rather than the untransformed "Age". In other words, we are applying the transformation \(z_i = \log{(y_i)}\). Notice that the resulting model is still linear in the parameters \(\theta = [\theta_0, \theta_1]\). The SLR model becomes:

+

\[\hat{\log{y}} = \theta_0 + \theta_1 x\] \[\hat{z} = \theta_0 + \theta_1 x\]

+

It turns out that this linearized relationship can help us understand the underlying relationship between \(x\) and \(y\). If we rearrange the relationship above, we find:

+

\[\log{(y)} = \theta_0 + \theta_1 x\] \[y = e^{\theta_0 + \theta_1 x}\] \[y = (e^{\theta_0})e^{\theta_1 x}\] \[y = C e^{k x}\]

+

for some constants \(C = e^{\theta_0}\) and \(k = \theta_1\).

+

\(y\) is an exponential function of \(x\). Applying an exponential fit to the untransformed variables corroborates this finding.

+
+
+Code +
plt.figure(dpi=120, figsize=(4, 3))
+
+plt.scatter(x, y)
+plt.plot(x, np.exp(theta_0) * np.exp(theta_1 * x), "tab:red")
+plt.xlabel("Length")
+plt.ylabel("Age")
+
+
+
Text(0, 0.5, 'Age')
+
+
+
+
+

+
+
+
+
+

You may wonder: why did we choose to apply a log transformation specifically? Why not some other function to linearize the data?

+

Practically, many other mathematical operations that modify the relative scales of "Age" and "Length" could have worked here.
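As a rough, hedged sketch (not from the lecture; the set of candidate transformations is an arbitrary choice, and it assumes the dugongs DataFrame loaded earlier), we could compare a few transformations of "Age" by using the correlation coefficient of the transformed data as a crude yardstick of linearity:

import numpy as np

x = dugongs["Length"]
y = dugongs["Age"]

candidates = {
    "y (no transform)": y,
    "sqrt(y)": np.sqrt(y),
    "log(y)": np.log(y),
}

for name, z in candidates.items():
    r = np.corrcoef(x, z)[0, 1]           # correlation of Length with transformed Age
    print(f"{name:>18}: r = {r:.3f}")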

+
+
+

11.7 Multiple Linear Regression

+

Multiple linear regression is an extension of simple linear regression that adds additional features to the model. The multiple linear regression model takes the form:

+

\[\hat{y} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:...\:+\:\theta_p x_{p}\]

+

Our predicted value of \(y\), \(\hat{y}\), is a linear combination of the single observations (features), \(x_i\), and the parameters, \(\theta_i\).

+

We’ll dive deeper into Multiple Linear Regression in the next lecture.
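As a small preview (an illustrative sketch only; the dataset and the choice of two features are assumptions, not taken from the lecture), fitting a multiple linear regression model with sklearn looks just like fitting SLR:

import seaborn as sns
import sklearn.linear_model as lm

# Example dataset and features chosen purely for illustration
mpg_data = sns.load_dataset("mpg").dropna()
X_multi = mpg_data[["horsepower", "weight"]]
y_target = mpg_data["mpg"]

mlr = lm.LinearRegression()
mlr.fit(X_multi, y_target)
print(mlr.intercept_, mlr.coef_)   # theta_0 and [theta_1, theta_2]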

+
+
+

11.8 Bonus: Calculating Constant Model MSE Using an Algebraic Trick

+

Earlier, we calculated the constant model MSE using calculus. It turns out that there is a much more elegant way of performing this same minimization algebraically, without using calculus at all.

+

In this calculation, we use the fact that the sum of deviations from the mean is \(0\) or that \(\sum_{i=1}^{n} (y_i - \bar{y}) = 0\).

+

Let’s quickly walk through the proof for this: \[ +\begin{align} +\sum_{i=1}^{n} (y_i - \bar{y}) &= \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} \bar{y} \\ +&= \sum_{i=1}^{n} y_i - n\bar{y} \\ +&= \sum_{i=1}^{n} y_i - n\frac{1}{n}\sum_{i=1}^{n}y_i \\ +&= \sum_{i=1}^{n} y_i - \sum_{i=1}^{n}y_i \\ +& = 0 +\end{align} +\]

+

In our calculations, we’ll also be using the definition of the variance as a sample. As a refresher:

+

\[\sigma_y^2 = \frac{1}{n}\sum_{i=1}^{n} (y_i - \bar{y})^2\]

+

Getting into our calculation for MSE minimization:

+

\[ +\begin{align} +R(\theta) &= {\frac{1}{n}}\sum^{n}_{i=1} (y_i - \theta)^2 +\\ &= \frac{1}{n}\sum^{n}_{i=1} [(y_i - \bar{y}) + (\bar{y} - \theta)]^2\quad \quad \text{using trick that a-b can be written as (a-c) + (c-b) } \\ +&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \space \space \text{where a, b, and c are any numbers} +\\ &= \frac{1}{n}\sum^{n}_{i=1} [(y_i - \bar{y})^2 + 2(y_i - \bar{y})(\bar{y} - \theta) + (\bar{y} - \theta)^2] +\\ &= \frac{1}{n}[\sum^{n}_{i=1}(y_i - \bar{y})^2 + 2(\bar{y} - \theta)\sum^{n}_{i=1}(y_i - \bar{y}) + n(\bar{y} - \theta)^2] \quad \quad \text{distribute sum to individual terms} +\\ &= \frac{1}{n}\sum^{n}_{i=1}(y_i - \bar{y})^2 + \frac{2}{n}(\bar{y} - \theta)\cdot0 + (\bar{y} - \theta)^2 \quad \quad \text{sum of deviations from mean is 0} +\\ &= \sigma_y^2 + (\bar{y} - \theta)^2 +\end{align} +\]

+

Since variance can’t be negative, we know that our first term, \(\sigma_y^2\), is greater than or equal to \(0\). Also note that the first term doesn’t involve \(\theta\) at all, meaning changing our model won’t change this value. For the purposes of determining \(\hat{\theta}\), we can then essentially ignore this term.

+

Looking at the second term, \((\bar{y} - \theta)^2\), since it is squared, we know it must be greater than or equal to \(0\). As this term does involve \(\theta\), picking the value of \(\theta\) that minimizes this term will allow us to minimize our average loss. For the second term to equal \(0\), \(\theta = \bar{y}\), or in other words, \(\hat{\theta} = \bar{y} = mean(y)\).
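We can also verify this decomposition numerically (a quick sketch with an arbitrary sample and an arbitrary \(\theta\); not part of the original notes):

import numpy as np

y_sample = np.array([20, 21, 22, 29, 33])
theta = 27.0

lhs = np.mean((y_sample - theta) ** 2)                      # R(theta)
rhs = np.var(y_sample) + (np.mean(y_sample) - theta) ** 2   # sigma_y^2 + (ybar - theta)^2
print(lhs, rhs)   # both 30.0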

+
+
11.8.0.0.1 Note
+

In the derivation above, we decompose the expected loss, \(R(\theta)\), into two key components: the variance of the data, \(\sigma_y^2\), and the square of the bias, \((\bar{y} - \theta)^2\). This decomposition is insightful for understanding the behavior of estimators in statistical models.

+
  • Variance, \(\sigma_y^2\): This term represents the spread of the data points around their mean, \(\bar{y}\), and is a measure of the data’s inherent variability. Importantly, it does not depend on the choice of \(\theta\), meaning it’s a fixed property of the data. Variance serves as an indicator of the data’s dispersion and is crucial in understanding the dataset’s structure, but it remains constant regardless of how we adjust our model parameter \(\theta\).
  • Bias Squared, \((\bar{y} - \theta)^2\): This term captures the bias of the estimator, defined as the square of the difference between the mean of the data points, \(\bar{y}\), and the parameter \(\theta\). The bias quantifies the systematic error introduced when estimating \(\theta\). Minimizing this term is essential for improving the accuracy of the estimator. When \(\theta = \bar{y}\), the bias is \(0\), indicating that the estimator is unbiased for the parameter it estimates. This highlights a critical principle in statistical estimation: choosing \(\theta\) to be the sample mean, \(\bar{y}\), minimizes the average loss, rendering the estimator both efficient and unbiased for the population mean.
+ + + + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-10-output-2.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-10-output-2.png new file mode 100644 index 000000000..1acc1baca Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-10-output-2.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-12-output-1.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-12-output-1.png new file mode 100644 index 000000000..67779b495 Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-12-output-1.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-13-output-1.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-13-output-1.png new file mode 100644 index 000000000..8e3636962 Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-13-output-1.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-18-output-2.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-18-output-2.png new file mode 100644 index 000000000..2a722a072 Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-18-output-2.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-19-output-1.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-19-output-1.png new file mode 100644 index 000000000..e7ad12475 Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-19-output-1.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-20-output-2.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-20-output-2.png new file mode 100644 index 000000000..0407c94e2 Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-20-output-2.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-5-output-1.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-5-output-1.png new file mode 100644 index 000000000..3b44a8d7c Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-5-output-1.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-7-output-1.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-7-output-1.png new file mode 100644 index 000000000..87f9e7be6 Binary files /dev/null and b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-7-output-1.png differ diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-9-output-1.png b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-9-output-1.png new file mode 100644 index 000000000..2b9d23263 Binary files /dev/null and 
b/docs/constant_model_loss_transformations/loss_transformations_files/figure-html/cell-9-output-1.png differ diff --git a/docs/cv_regularization/cv_reg.html b/docs/cv_regularization/cv_reg.html new file mode 100644 index 000000000..8cc3b5a2d --- /dev/null +++ b/docs/cv_regularization/cv_reg.html @@ -0,0 +1,1313 @@ + + + + + + + + + +16  Cross Validation and Regularization – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

16  Cross Validation and Regularization

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
  • Recognize the need for validation and test sets to preview model performance on unseen data
  • Apply cross-validation to select model hyperparameters
  • Understand the conceptual basis for L1 and L2 regularization
+
+
+
+

At the end of the Feature Engineering lecture (Lecture 14), we arrived at the issue of fine-tuning model complexity. We identified that a model that’s too complex can lead to overfitting while a model that’s too simple can lead to underfitting. This brings us to a natural question: how do we control model complexity to avoid under- and overfitting?

+

To answer this question, we will need to address two things: first, we need to understand when our model begins to overfit by assessing its performance on unseen data. We can achieve this through cross-validation. Second, we need to introduce a technique to adjust the complexity of our models ourselves – to do so, we will apply regularization.

+
+

16.1 Cross-validation

+
+

16.1.1 Training, Test, and Validation Sets

+
+train-test-split +
+


+

From the last lecture, we learned that increasing model complexity decreased our model’s training error but increased its variance. This makes intuitive sense: adding more features causes our model to fit more closely to data it encountered during training, but it generalizes worse to new data that hasn’t been seen before. For this reason, a low training error is not always representative of our model’s underlying performance – we need to also assess how well it performs on unseen data to ensure that it is not overfitting.

+

Truly, the only way to know when our model overfits is by evaluating it on unseen data. Unfortunately, that means we need to wait for more data. This may be very expensive and time-consuming.

+

How should we proceed? In this section, we will build up a viable solution to this problem.

+
+

16.1.1.1 Test Sets

+

The simplest approach to avoid overfitting is to keep some of our data “secret” from ourselves. We can set aside a random portion of our full dataset to use only for testing purposes. The datapoints in this test set will not be used to fit the model. Instead, we will:

+
  • Use the remaining portion of our dataset – now called the training set – to run ordinary least squares, gradient descent, or some other technique to train our model.
  • Take the fitted model and use it to make predictions on datapoints in the test set. The model’s performance on the test set (expressed as the MSE, RMSE, etc.) is now indicative of how well it can make predictions on unseen data.
+

Importantly, the optimal model parameters were found by only considering the data in the training set. After the model has been fitted to the training data, we do not change any parameters before making predictions on the test set. We only ever make predictions on the test set once, after all model design has been completely finalized, and we treat the test set performance as the final test of how well the model does. To reiterate, the test set is only ever touched once: to compute the performance of the model after all fine-tuning has been completed.

+

The process of sub-dividing our dataset into training and test sets is known as a train-test split. Typically, between 10% and 20% of the data is allocated to the test set.

+
+train-test-split +
+


+

In sklearn, the train_test_split function (documentation) of the model_selection module allows us to automatically generate train-test splits.

+

We will work with the vehicles dataset from previous lectures. As before, we will attempt to predict the mpg of a vehicle from transformations of its hp. In the cell below, we allocate 20% of the full dataset to testing, and the remaining 80% to training.

+
+
+Code +
import pandas as pd
+import numpy as np
+import seaborn as sns
+import warnings
+warnings.filterwarnings('ignore')
+
+# Load the dataset and construct the design matrix
+vehicles = sns.load_dataset("mpg").rename(columns={"horsepower":"hp"}).dropna()
+X = vehicles[["hp"]]
+X["hp^2"] = vehicles["hp"]**2
+X["hp^3"] = vehicles["hp"]**3
+X["hp^4"] = vehicles["hp"]**4
+
+Y = vehicles["mpg"]
+
+
+
+
from sklearn.model_selection import train_test_split
+
+# `test_size` specifies the proportion of the full dataset that should be allocated to testing
+# `random_state` makes our results reproducible for educational purposes
+X_train, X_test, Y_train, Y_test = train_test_split(
+        X, 
+        Y, 
+        test_size=0.2, 
+        random_state=220
+    )
+
+print(f"Size of full dataset: {X.shape[0]} points")
+print(f"Size of training set: {X_train.shape[0]} points")
+print(f"Size of test set: {X_test.shape[0]} points")
+
+
Size of full dataset: 392 points
+Size of training set: 313 points
+Size of test set: 79 points
+
+
+

After performing our train-test split, we fit a model to the training set and assess its performance on the test set.

+
+
import sklearn.linear_model as lm
+from sklearn.metrics import mean_squared_error
+
+model = lm.LinearRegression()
+
+# Fit to the training set
+model.fit(X_train, Y_train)
+
+# Calculate errors
+train_error = mean_squared_error(Y_train, model.predict(X_train))
+test_error = mean_squared_error(Y_test, model.predict(X_test))
+
+print(f"Training error: {train_error}")
+print(f"Test error: {test_error}")
+
+
Training error: 17.858516841012097
+Test error: 23.19240562932651
+
+
+
+
+

16.1.1.2 Validation Sets

+

Now, what if we were dissatisfied with our test set performance? With our current framework, we’d be stuck. As outlined previously, assessing model performance on the test set is the final stage of the model design process; we can’t go back and adjust our model based on the new discovery that it is overfitting. If we did, then we would be factoring in information from the test set to design our model. The test error would no longer be a true representation of the model’s performance on unseen data!

+

Our solution is to introduce a validation set. A validation set is a random portion of the training set that is set aside for assessing model performance while the model is still being developed. The process for using a validation set is:

+
  1. Perform a train-test split.
  2. Set the test set aside; we will not touch it until the very end of the model design process.
  3. Set aside a portion of the training set to be used for validation.
  4. Fit the model parameters to the datapoints contained in the remaining portion of the training set.
  5. Assess the model’s performance on the validation set. Adjust the model as needed, re-fit it to the remaining portion of the training set, then re-evaluate it on the validation set. Repeat as necessary until you are satisfied.
  6. After all model development is complete, assess the model’s performance on the test set. This is the final test of how well the model performs on unseen data. No further modifications should be made to the model.
+

The process of creating a validation set is called a validation split.

+
+validation-split +
+
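A minimal sketch of a validation split using a second call to train_test_split (the 25% proportion and the random_state value are arbitrary illustrative choices, and it reuses X_train, Y_train, lm, and mean_squared_error from the cells above):

from sklearn.model_selection import train_test_split

# Carve a validation set out of the existing training set
X_train_sub, X_val, Y_train_sub, Y_val = train_test_split(
    X_train, Y_train, test_size=0.25, random_state=100
)

model = lm.LinearRegression()
model.fit(X_train_sub, Y_train_sub)
val_error = mean_squared_error(Y_val, model.predict(X_val))
print(f"Validation error: {val_error}")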


+

Note that the validation error behaves quite differently from the training error explored previously. As the model becomes more complex, it makes better predictions on the training data; the variance of the model typically increases as model complexity increases. Validation error, on the other hand, decreases then increases as we increase model complexity. This reflects the transition from under- to overfitting: at low model complexity, the model underfits because it is not complex enough to capture the main trends in the data; at high model complexity, the model overfits because it “memorizes” the training data too closely.

+

We can update our understanding of the relationships between error, complexity, and model variance:

+
+training_validation_curve +
+


+

Our goal is to train a model with complexity near the orange dotted line – this is where our model minimizes the validation error. Note that this relationship is a simplification of the real world, but it’s a good enough approximation for the purposes of Data 100.

+
+
+
+

16.1.2 K-Fold Cross-Validation

+

Introducing a validation set gave us an “extra” chance to assess model performance on another set of unseen data. We are able to finetune the model design based on its performance on this one set of validation data.

+

But what if, by random chance, our validation set just happened to contain many outliers? It is possible that the validation datapoints we set aside do not actually represent other unseen data that the model might encounter. Ideally, we would like to validate our model’s performance on several different unseen datasets. This would give us greater confidence in our understanding of how the model behaves on new data.

+

Let’s think back to our validation framework. Earlier, we set aside \(x\)% of our training data (say, 20%) to use for validation.

+
+validation_set +
+

In the example above, we set aside the first 20% of training datapoints for the validation set. This was an arbitrary choice. We could have set aside any 20% portion of the training data for validation. In fact, there are 5 non-overlapping “chunks” of training points that we could have designated as the validation set.

+
+possible_validation_sets +
+

The common term for one of these chunks is a fold. In the example above, we had 5 folds, each containing 20% of the training data. This gives us a new perspective: we really have 5 validation sets “hidden” in our training set.

+

In cross-validation, we perform validation splits for each fold in the training set. For a dataset with \(K\) folds, we:

+
  1. Pick one fold to be the validation fold
  2. Fit the model to training data from every fold other than the validation fold
  3. Compute the model’s error on the validation fold and record it
  4. Repeat for all \(K\) folds
+The cross-validation error is then the average error across all \(K\) validation folds. In the example below, the cross-validation error is the mean of validation errors #1 to #5. +
+cross_validation +
+
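As a hedged sketch of how this could be computed with sklearn (the choice of 5 folds and of a plain linear model are illustrative; cross_val_score reports negated MSE by convention, so we flip the sign):

from sklearn.model_selection import cross_val_score

cv_errors = -cross_val_score(
    lm.LinearRegression(), X_train, Y_train,
    cv=5, scoring="neg_mean_squared_error"
)
print(cv_errors)          # one validation MSE per fold
print(cv_errors.mean())   # the cross-validation error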
+
+

16.1.3 Model Selection Workflow

+

At this stage, we have refined our model selection workflow. We begin by performing a train-test split to set aside a test set for the final evaluation of model performance. Then, we alternate between adjusting our design matrix and computing the cross-validation error to finetune the model’s design. In the example below, we illustrate the use of 4-fold cross-validation to help inform model design.

+
+model_selection +
+
+
+

16.1.4 Hyperparameters

+

An important use of cross-validation is for hyperparameter selection. A hyperparameter is some value in a model that is chosen before the model is fit to any data. This means that it is distinct from the model parameters, \(\theta_i\), because its value is selected before the training process begins. We cannot use our usual techniques – calculus, ordinary least squares, or gradient descent – to choose its value. Instead, we must decide it ourselves.

+

Some examples of hyperparameters in Data 100 are:

+
  • The degree of our polynomial model (recall that we selected the degree before creating our design matrix and calling .fit)
  • The learning rate, \(\alpha\), in gradient descent
  • The regularization penalty, \(\lambda\) (to be introduced later this lecture)
+

To select a hyperparameter value via cross-validation, we first list out several “guesses” for what the best hyperparameter may be. For each guess, we then run cross-validation to compute the cross-validation error incurred by the model when using that choice of hyperparameter value. We then select the value of the hyperparameter that resulted in the lowest cross-validation error.

+

For example, we may wish to use cross-validation to decide what value we should use for \(\alpha\), which controls the step size of each gradient descent update. To do so, we list out some possible guesses for the best \(\alpha\), like 0.1, 1, and 10. For each possible value, we perform cross-validation to see what error the model has when we use that value of \(\alpha\) to train it.

+
+hyperparameter_tuning +
+
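A sketch of hyperparameter selection via cross-validation (the candidate degrees, the 5-fold choice, and the pipeline construction are illustrative assumptions, not the lecture’s exact procedure): pick the polynomial degree with the lowest cross-validation error.

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import cross_val_score

for degree in [1, 2, 3, 4]:
    # Build polynomial features of "hp" up to the candidate degree, then fit OLS
    pipeline = make_pipeline(PolynomialFeatures(degree), lm.LinearRegression())
    cv_mse = -cross_val_score(
        pipeline, X_train[["hp"]], Y_train,
        cv=5, scoring="neg_mean_squared_error"
    ).mean()
    print(f"degree {degree}: cross-validation MSE = {cv_mse:.2f}")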
+
+
+

16.2 Regularization

+

We’ve now addressed the first of our two goals for today: creating a framework to assess model performance on unseen data. Now, we’ll discuss our second objective: developing a technique to adjust model complexity. This will allow us to directly tackle the issues of under- and overfitting.

+

Earlier, we adjusted the complexity of our polynomial model by tuning a hyperparameter – the degree of the polynomial. We tested out several different polynomial degrees, computed the validation error for each, and selected the value that minimized the validation error. Tweaking the “complexity” was simple; it was only a matter of adjusting the polynomial degree.

+

In most machine learning problems, complexity is defined differently from what we have seen so far. Today, we’ll explore two different definitions of complexity: the squared and absolute magnitude of \(\theta_i\) coefficients.

+
+

16.2.1 Constraining Model Parameters

+

Think back to our work using gradient descent to descend down a loss surface. You may find it helpful to refer back to the Gradient Descent note to refresh your memory. Our aim was to find the combination of model parameters that gives the smallest possible loss. We visualized this using a contour map by plotting possible parameter values on the horizontal and vertical axes, which allows us to take a bird’s eye view above the loss surface. Notice that the contour map uses only \(p=2\) parameters for ease of visualization. We want to find the model parameters corresponding to the lowest point on the loss surface.

+
+unconstrained +
+

Let’s review our current modeling framework.

+

\[\hat{\mathbb{Y}} = \theta_0 + \theta_1 \phi_1 + \theta_2 \phi_2 + \ldots + \theta_p \phi_p\]

+

Recall that we represent our features with \(\phi_i\) to reflect the fact that we have performed feature engineering.

+

Previously, we restricted model complexity by limiting the total number of features present in the model. We only included a limited number of polynomial features at a time; all other polynomials were excluded from the model.

+

What if, instead of fully removing particular features, we kept all features and used each one only a “little bit”? If we put a limit on how much each feature can contribute to the predictions, we can still control the model’s complexity without the need to manually determine how many features should be removed.

+

What do we mean by a “little bit”? Consider the case where some parameter \(\theta_i\) is close to or equal to 0. Then, feature \(\phi_i\) barely impacts the prediction – the feature is weighted by such a small value that its presence doesn’t significantly change the value of \(\hat{\mathbb{Y}}\). If we restrict how large each parameter \(\theta_i\) can be, we restrict how much feature \(\phi_i\) contributes to the model. This has the effect of reducing model complexity.

+

In regularization, we restrict model complexity by putting a limit on the magnitudes of the model parameters \(\theta_i\).

+

What do these limits look like? Suppose we specify that the sum of all absolute parameter values can be no greater than some number \(Q\). In other words:

+

\[\sum_{i=1}^p |\theta_i| \leq Q\]

+

where \(p\) is the total number of parameters in the model. You can think of this as us giving our model a “budget” for how it distributes the magnitudes of each parameter. If the model assigns a large value to some \(\theta_i\), it may have to assign a small value to some other \(\theta_j\). This has the effect of increasing feature \(\phi_i\)’s influence on the predictions while decreasing the influence of feature \(\phi_j\). The model will need to be strategic about how the parameter weights are distributed – ideally, more “important” features will receive greater weighting.

+

Notice that the intercept term, \(\theta_0\), is excluded from this constraint. We typically do not regularize the intercept term.

+

Now, let’s think back to gradient descent and visualize the loss surface as a contour map. As a refresher, each point on the loss surface represents the model’s loss for a particular combination of \(\theta_1\) and \(\theta_2\). Our goal is to find the combination of parameters that gives us the lowest loss.

+
+constrained_gd +
+


With no constraint, the optimal \(\hat{\theta}\) is in the center. We denote this as \(\hat{\theta}_\text{No Reg}\).

+

Applying this constraint limits what combinations of model parameters are valid. We can now only consider parameter combinations with a total absolute sum less than or equal to our number \(Q\). For our 2D example, the constraint \(\sum_{i=1}^p |\theta_i| \leq Q\) can be rewritten as \(|\theta_1| + |\theta_2| \leq Q\). This means that we can only assign our regularized parameter vector \(\hat{\theta}_{\text{Reg}}\) to positions in the green diamond below.

+
+diamondreg +
+


We can no longer select the parameter vector that truly minimizes the loss surface, \(\hat{\theta}_{\text{No Reg}}\), because this combination of parameters does not lie within our allowed region. Instead, we select whatever allowable combination brings us closest to the true minimum loss, which is depicted by the red point below.

+
+diamond +
+


Notice that, under regularization, our optimized \(\theta_1\) and \(\theta_2\) values are much smaller than they were without regularization (indeed, \(\theta_1\) has decreased to 0). The model has decreased in complexity because we have limited how much our features contribute to the model. In fact, by setting its parameter to 0, we have effectively removed the influence of feature \(\phi_1\) from the model altogether.

+

If we change the value of \(Q\), we change the region of allowed parameter combinations. The model will still choose the combination of parameters that produces the lowest loss – the closest point in the constrained region to the true minimizer, \(\hat{\theta}_{\text{No Reg}}\).

+

When \(Q\) is small, we severely restrict the size of our parameters. \(\theta_i\)s are small in value, and features \(\phi_i\) only contribute a little to the model. The allowed region of model parameters contracts, and the model becomes much simpler:

+
+diamondpoint +
+


+

When \(Q\) is large, we do not restrict our parameter sizes by much. \(\theta_i\)s are large in value, and features \(\phi_i\) contribute more to the model. The allowed region of model parameters expands, and the model becomes more complex:

+
+largerq +
+


+

Consider the extreme case of when \(Q\) is extremely large. In this situation, our restriction has essentially no effect, and the allowed region includes the OLS solution!

+
+verylarge +
+


+

Now what if \(Q\) was extremely small? Most parameters are then set to (essentially) 0.

+
    +
  • If the model has no intercept term: \(\hat{\mathbb{Y}} = (0)\phi_1 + (0)\phi_2 + \ldots = 0\).
  • +
  • If the model has an intercept term: \(\hat{\mathbb{Y}} = (0)\phi_1 + (0)\phi_2 + \ldots = \theta_0\). Remember that the intercept term is excluded from the constraint - this is so we avoid the situation where we always predict 0.
  • +
+

Let’s summarize what we have seen.

+
+summary +
+
+
+

16.2.2 L1 (LASSO) Regularization

+

How do we actually apply our constraint \(\sum_{i=1}^p |\theta_i| \leq Q\)? We will do so by modifying the objective function that we seek to minimize when fitting a model.

+

Recall our ordinary least squares objective function: our goal was to find parameters that minimize the model’s mean squared error:

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 = \frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\]

+

To apply our constraint, we need to rephrase our minimization goal as:

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\:\text{such that}\: \sum_{i=1}^p |\theta_i| \leq Q\]

+

Unfortunately, we can’t directly use this formulation as our objective function – it’s not easy to mathematically optimize over a constraint. Instead, we will apply the magic of the Lagrangian Duality. The details of this are out of scope (take EECS 127 if you’re interested in learning more), but the end result is very useful. It turns out that minimizing the following augmented objective function is equivalent to our minimization goal above.

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2 + \lambda \sum_{i=1}^p \vert \theta_i \vert\] \[ = \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda \sum_{i=1}^p |\theta_i|\] \[ = \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda || \theta ||_1\]

+

The last two expressions write the MSE using vector notation, and the last expression rewrites \(\sum_{i=1}^p |\theta_i|\) in its equivalent L1 norm form, \(|| \theta ||_1\).
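To make the pieces of this objective concrete, here is a small sketch (not the course’s implementation) that evaluates the L1-regularized objective for a given parameter vector, assuming the first column of the design matrix X is the all-ones intercept column:

import numpy as np

def lasso_objective(theta, X, Y, lam):
    # Mean squared error of the predictions X @ theta
    mse = np.mean((Y - X @ theta) ** 2)
    # L1 penalty on every parameter except the intercept theta_0
    l1_penalty = lam * np.sum(np.abs(theta[1:]))
    return mse + l1_penalty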

+

Notice that we’ve replaced the constraint with a second term in our objective function. We’re now minimizing a function with an additional regularization term that penalizes large coefficients. In order to minimize this new objective function, we’ll end up balancing two components:

+
    +
  1. Keeping the model’s error on the training data low, represented by the term \(\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\)
  2. +
  3. Keeping the magnitudes of model parameters low, represented by the term \(\lambda \sum_{i=1}^p |\theta_i|\)
  4. +
+

The \(\lambda\) factor controls the degree of regularization. Roughly speaking, \(\lambda\) is related to our \(Q\) constraint from before by the rule \(\lambda \approx \frac{1}{Q}\). To understand why, let’s consider two extreme examples. Recall that our goal is to minimize the cost function: \(\frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda || \theta ||_1\).

+
    +
  • Assume \(\lambda \rightarrow \infty\). Then, \(\lambda || \theta ||_1\) dominates the cost function. In order to neutralize the \(\infty\) and minimize this term, we set \(\theta_j = 0\) for all \(j \ge 1\). This is a very constrained model that is mathematically equivalent to the constant model.

  • +
  • Assume \(\lambda \rightarrow 0\). Then, \(\lambda || \theta ||_1 = 0\). Minimizing the cost function is equivalent to minimizing \(\frac{1}{n} ||\mathbb{Y} - \mathbb{X}\theta||_2^2\), our usual MSE loss function. The act of minimizing MSE loss is just our familiar OLS, and the optimal solution is the global minimum \(\hat{\theta} = \hat{\theta}_\text{No Reg}\).

  • +
+

We call \(\lambda\) the regularization penalty hyperparameter; it needs to be determined prior to training the model, so we must find the best value via cross-validation.
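For instance, one possible sketch of this tuning step with sklearn’s GridSearchCV (assuming the same X_train and Y_train used in the code below; note that sklearn calls the regularization penalty alpha rather than \(\lambda\)):

from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Candidate guesses for the regularization penalty
param_grid = {"alpha": [0.01, 0.1, 1, 10]}

search = GridSearchCV(Lasso(), param_grid, cv=5, scoring="neg_mean_squared_error")
search.fit(X_train, Y_train)

search.best_params_   # the guess with the lowest cross-validation error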

+

The process of finding the optimal \(\hat{\theta}\) to minimize our new objective function is called L1 regularization. It is also sometimes known by the acronym “LASSO”, which stands for “Least Absolute Shrinkage and Selection Operator.”

+

Unlike ordinary least squares, which can be solved via the closed-form solution \(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\), there is no closed-form solution for the optimal parameter vector under L1 regularization. Instead, we use the Lasso model class of sklearn.

+
+
import sklearn.linear_model as lm
+
+# The alpha parameter represents our lambda term
+lasso_model = lm.Lasso(alpha=2)
+lasso_model.fit(X_train, Y_train)
+
+lasso_model.coef_
+
+
array([-2.54932056e-01, -9.48597165e-04,  8.91976284e-06, -1.22872290e-08])
+
+
+

Notice that all model coefficients are very small in magnitude. In fact, some of them are so small that they are essentially 0. An important characteristic of L1 regularization is that many model parameters are set to 0. In other words, LASSO effectively selects only a subset of the features. The reason for this comes back to our loss surface and allowed “diamond” regions from earlier – we can often get closer to the lowest loss contour at a corner of the diamond than along an edge.

+

When a model parameter is set to 0 or close to 0, its corresponding feature is essentially removed from the model. We say that L1 regularization performs feature selection because, by setting the parameters of unimportant features to 0, LASSO “selects” which features are more useful for modeling. L1 regularization indicates that the features with non-zero parameters are more informative for modeling than those with parameters set to zero.
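As a quick, hypothetical check of which features LASSO has effectively kept, we might threshold the fitted coefficients; the tolerance of 1e-4 below is an arbitrary choice for illustration:

import numpy as np

# Treat coefficients within 1e-4 of zero as effectively removed from the model
near_zero = np.isclose(lasso_model.coef_, 0, atol=1e-4)
selected_features = X_train.columns[~near_zero]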

+
+
+

16.2.3 Scaling Features for Regularization

+

The regularization procedure we just performed had one subtle issue. To see what it is, let’s take a look at the design matrix for our lasso_model.

+
+
+Code +
X_train.head()
+
+
+
        hp     hp^2       hp^3         hp^4
259   85.0   7225.0   614125.0   52200625.0
129   67.0   4489.0   300763.0   20151121.0
207  102.0  10404.0  1061208.0  108243216.0
302   70.0   4900.0   343000.0   24010000.0
71    97.0   9409.0   912673.0   88529281.0
+
+
+

Our features – hp, hp^2, hp^3, and hp^4 – are on drastically different numeric scales! The values contained in hp^4 are orders of magnitude larger than those contained in hp. This can be a problem because the value of hp^4 will naturally contribute more to each predicted \(\hat{y}\) because it is so much greater than the values of the other features. For hp to have much of an impact at all on the prediction, it must be scaled by a large model parameter.

+

By inspecting the fitted parameters of our model, we see that this is the case – the parameter for hp is much larger in magnitude than the parameter for hp^4.

+
+
pd.DataFrame({"Feature":X_train.columns, "Parameter":lasso_model.coef_})
+
+
  Feature     Parameter
0      hp -2.549321e-01
1    hp^2 -9.485972e-04
2    hp^3  8.919763e-06
3    hp^4 -1.228723e-08
+
+
+

Recall that by applying regularization, we give our model a “budget” for how it can allocate the values of model parameters. For hp to have much of an impact on each prediction, LASSO is forced to “spend” more of this budget on the parameter for hp.

+

We can avoid this issue by scaling the data before regularizing. This is a process where we convert all features to the same numeric scale. A common way to scale data is to perform standardization such that all features have mean 0 and standard deviation 1; essentially, we replace everything with its Z-score.

+

\[z_i = \frac{x_i - \mu}{\sigma}\]
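A minimal sketch of this workflow with sklearn’s StandardScaler (fit the scaler on the training data only, then reuse the same fitted scaler to transform any other data):

from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)   # each column now has mean 0, std 1

lasso_scaled = Lasso(alpha=2)
lasso_scaled.fit(X_train_scaled, Y_train)

# New data must be transformed with the *same* fitted scaler, e.g.
# X_test_scaled = scaler.transform(X_test)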

+
+
+

16.2.4 L2 (Ridge) Regularization

+

In all of our work above, we considered the constraint \(\sum_{i=1}^p |\theta_i| \leq Q\) to limit the complexity of the model. What if we had applied a different constraint?

+

In L2 regularization, also known as ridge regression, we constrain the model such that the sum of the squared parameters must be less than some number \(Q\). This constraint takes the form:

+

\[\sum_{i=1}^p \theta_i^2 \leq Q\]

+

As before, we typically do not regularize the intercept term.

+

In our 2D example, the constraint becomes \(\theta_1^2 + \theta_2^2 \leq Q\). Can you see how this is similar to the equation for a circle, \(x^2 + y^2 = r^2\)? The allowed region of parameters for a given value of \(Q\) is now shaped like a disk in 2D (a ball in higher dimensions).

+
+green_constrained_gd_sol +
+

If we modify our objective function like before, we find that our new goal is to minimize the function: \[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\:\text{such that}\: \sum_{i=1}^p \theta_i^2 \leq Q\]

+

Notice that all we have done is change the constraint on the model parameters. The first term in the expression, the MSE, has not changed.

+

Using Lagrangian Duality (again, out of scope for Data 100), we can re-express our objective function as: \[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2 + \lambda \sum_{i=1}^p \theta_i^2\] \[= \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda \sum_{i=1}^p \theta_i^2\] \[= \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda || \theta ||_2^2\]

+

The last two expressions write the MSE using vector notation, and the last expression rewrites \(\sum_{i=1}^p \theta_i^2\) in its equivalent form, the squared L2 norm \(|| \theta ||_2^2\).

+

When applying L2 regularization, our goal is to minimize this updated objective function.

+

Unlike L1 regularization, L2 regularization does have a closed-form solution for the best parameter vector when regularization is applied:

+

\[\hat\theta_{\text{ridge}} = (\mathbb{X}^{\top}\mathbb{X} + n\lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\]

+

This solution exists even if \(\mathbb{X}\) is not full column rank. This is a major reason why L2 regularization is often used – it can produce a solution even when there is collinearity in the features. We will discuss the concept of collinearity in a future lecture, but we will not derive this result in Data 100, as it involves a fair bit of matrix calculus.
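As a sanity-check sketch (not something required in Data 100), the closed-form ridge estimate can be computed directly with numpy; note that this simple version penalizes every column of the design matrix, including an intercept column if one is present:

import numpy as np

def ridge_solution(X, Y, lam):
    n, p = X.shape
    # Solve (X^T X + n * lambda * I) theta = X^T Y
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)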

+

In sklearn, we perform L2 regularization using the Ridge class, which minimizes the regularized objective function above. As discussed in the previous section, the data should be scaled before regularizing.

+
+
ridge_model = lm.Ridge(alpha=1) # alpha represents the hyperparameter lambda
+ridge_model.fit(X_train, Y_train)
+
+ridge_model.coef_
+
+
array([ 5.89130559e-02, -6.42445915e-03,  4.44468157e-05, -8.83981945e-08])
+
+
+
+
+
+

16.3 Regression Summary

+

Our regression models are summarized below. Note that the objective function is what we minimize when fitting each model.

| Type | Model | Loss | Regularization | Objective Function | Solution |
|------|-------|------|----------------|--------------------|----------|
| OLS | \(\hat{\mathbb{Y}} = \mathbb{X}\theta\) | MSE | None | \(\frac{1}{n} \|\mathbb{Y}-\mathbb{X} \theta\|^2_2\) | \(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\) if \(\mathbb{X}\) is full column rank |
| Ridge | \(\hat{\mathbb{Y}} = \mathbb{X}\theta\) | MSE | L2 | \(\frac{1}{n} \|\mathbb{Y}-\mathbb{X}\theta\|^2_2 + \lambda \sum_{i=1}^p \theta_i^2\) | \(\hat{\theta}_{ridge} = (\mathbb{X}^{\top}\mathbb{X} + n \lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\) |
| LASSO | \(\hat{\mathbb{Y}} = \mathbb{X}\theta\) | MSE | L1 | \(\frac{1}{n} \|\mathbb{Y}-\mathbb{X}\theta\|^2_2 + \lambda \sum_{i=1}^p \vert \theta_i \vert\) | No closed form solution |
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/cv_regularization/images/constrained_gd.png b/docs/cv_regularization/images/constrained_gd.png new file mode 100644 index 000000000..4eda732b7 Binary files /dev/null and b/docs/cv_regularization/images/constrained_gd.png differ diff --git a/docs/cv_regularization/images/cross_validation.png b/docs/cv_regularization/images/cross_validation.png new file mode 100644 index 000000000..9faee18b6 Binary files /dev/null and b/docs/cv_regularization/images/cross_validation.png differ diff --git a/docs/cv_regularization/images/diamond.png b/docs/cv_regularization/images/diamond.png new file mode 100644 index 000000000..cdb03a3b2 Binary files /dev/null and b/docs/cv_regularization/images/diamond.png differ diff --git a/docs/cv_regularization/images/diamondpoint.png b/docs/cv_regularization/images/diamondpoint.png new file mode 100644 index 000000000..2d56ec3f4 Binary files /dev/null and b/docs/cv_regularization/images/diamondpoint.png differ diff --git a/docs/cv_regularization/images/diamondreg.png b/docs/cv_regularization/images/diamondreg.png new file mode 100644 index 000000000..6bd703484 Binary files /dev/null and b/docs/cv_regularization/images/diamondreg.png differ diff --git a/docs/cv_regularization/images/green_constrained_gd_sol.png b/docs/cv_regularization/images/green_constrained_gd_sol.png new file mode 100644 index 000000000..aa481a6f4 Binary files /dev/null and b/docs/cv_regularization/images/green_constrained_gd_sol.png differ diff --git a/docs/cv_regularization/images/hyperparameter_tuning.png b/docs/cv_regularization/images/hyperparameter_tuning.png new file mode 100644 index 000000000..fce75441a Binary files /dev/null and b/docs/cv_regularization/images/hyperparameter_tuning.png differ diff --git a/docs/cv_regularization/images/largerq.png b/docs/cv_regularization/images/largerq.png new file mode 100644 index 000000000..b0d2b7979 Binary files /dev/null and b/docs/cv_regularization/images/largerq.png differ diff --git a/docs/cv_regularization/images/model_selection.png b/docs/cv_regularization/images/model_selection.png new file mode 100644 index 000000000..219273867 Binary files /dev/null and b/docs/cv_regularization/images/model_selection.png differ diff --git a/docs/cv_regularization/images/possible_validation_sets.png b/docs/cv_regularization/images/possible_validation_sets.png new file mode 100644 index 000000000..f41f7d364 Binary files /dev/null and b/docs/cv_regularization/images/possible_validation_sets.png differ diff --git a/docs/cv_regularization/images/simple_under_overfit.png b/docs/cv_regularization/images/simple_under_overfit.png new file mode 100644 index 000000000..51bdffdfc Binary files /dev/null and b/docs/cv_regularization/images/simple_under_overfit.png differ diff --git a/docs/cv_regularization/images/summary.png b/docs/cv_regularization/images/summary.png new file mode 100644 index 000000000..59a4ccaf7 Binary files /dev/null and b/docs/cv_regularization/images/summary.png differ diff --git a/docs/cv_regularization/images/train-test-split.png b/docs/cv_regularization/images/train-test-split.png new file mode 100644 index 000000000..6c9bfd0bc Binary files /dev/null and b/docs/cv_regularization/images/train-test-split.png differ diff --git a/docs/cv_regularization/images/training_validation_curve.png b/docs/cv_regularization/images/training_validation_curve.png new file mode 100644 index 000000000..0f6fd9aa6 Binary files /dev/null and b/docs/cv_regularization/images/training_validation_curve.png differ diff 
--git a/docs/cv_regularization/images/unconstrained.png b/docs/cv_regularization/images/unconstrained.png new file mode 100644 index 000000000..20ad9e443 Binary files /dev/null and b/docs/cv_regularization/images/unconstrained.png differ diff --git a/docs/cv_regularization/images/validation-split.png b/docs/cv_regularization/images/validation-split.png new file mode 100644 index 000000000..5c8aaa3bf Binary files /dev/null and b/docs/cv_regularization/images/validation-split.png differ diff --git a/docs/cv_regularization/images/validation_set.png b/docs/cv_regularization/images/validation_set.png new file mode 100644 index 000000000..7d816e7d6 Binary files /dev/null and b/docs/cv_regularization/images/validation_set.png differ diff --git a/docs/cv_regularization/images/verylarge.png b/docs/cv_regularization/images/verylarge.png new file mode 100644 index 000000000..b08a41efe Binary files /dev/null and b/docs/cv_regularization/images/verylarge.png differ diff --git a/docs/eda/eda.html b/docs/eda/eda.html new file mode 100644 index 000000000..a427693bd --- /dev/null +++ b/docs/eda/eda.html @@ -0,0 +1,6468 @@ + + + + + + + + + +5  Data Cleaning and EDA – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

5  Data Cleaning and EDA

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+Code +
import numpy as np
+import pandas as pd
+
+import matplotlib.pyplot as plt
+import seaborn as sns
+#%matplotlib inline
+plt.rcParams['figure.figsize'] = (12, 9)
+
+sns.set()
+sns.set_context('talk')
+np.set_printoptions(threshold=20, precision=2, suppress=True)
+pd.set_option('display.max_rows', 30)
+pd.set_option('display.max_columns', None)
+pd.set_option('display.precision', 2)
+# This option stops scientific notation for pandas
+pd.set_option('display.float_format', '{:.2f}'.format)
+
+# Silence some spurious seaborn warnings
+import warnings
+warnings.filterwarnings("ignore", category=FutureWarning)
+
+
+
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Recognize common file formats
  • +
  • Categorize data by its variable type
  • +
  • Build awareness of issues with data faithfulness and develop targeted solutions
  • +
+
+
+
+

In the past few lectures, we’ve learned that pandas is a toolkit to restructure, modify, and explore a dataset. What we haven’t yet touched on is how to make these data transformation decisions. When we receive a new set of data from the “real world,” how do we know what processing we should do to convert this data into a usable form?

+

Data cleaning, also called data wrangling, is the process of transforming raw data to facilitate subsequent analysis. It is often used to address issues like:

+ +

Exploratory Data Analysis (EDA) is the process of understanding a new dataset. It is an open-ended, informal analysis that involves familiarizing ourselves with the variables present in the data, discovering potential hypotheses, and identifying possible issues with the data. This last point can often motivate further data cleaning to address any problems with the dataset’s format; because of this, EDA and data cleaning are often thought of as an “infinite loop,” with each process driving the other.

+

In this lecture, we will consider the key properties of data to consider when performing data cleaning and EDA. In doing so, we’ll develop a “checklist” of sorts for you to consider when approaching a new dataset. Throughout this process, we’ll build a deeper understanding of this early (but very important!) stage of the data science lifecycle.

+
+

5.1 Structure

+

We often prefer rectangular data for data analysis. Rectangular structures are easy to manipulate and analyze. A key element of data cleaning is transforming data into a more rectangular form.

+

There are two kinds of rectangular data: tables and matrices. Tables have named columns with different data types and are manipulated using data transformation languages. Matrices contain numeric data of the same type and are manipulated using linear algebra.

+
+

5.1.1 File Formats

+

There are many file types for storing structured data: TSV, JSON, XML, ASCII, SAS, etc. We’ll only cover CSV, TSV, and JSON in lecture, but you’ll likely encounter other formats as you work with different datasets. Reading documentation is your best bet for understanding how to process the multitude of different file types.

+
+

5.1.1.1 CSV

+

CSVs, which stand for Comma-Separated Values, are a common tabular data format. In the past two pandas lectures, we briefly touched on the idea of file format: the way data is encoded in a file for storage. Specifically, our elections and babynames datasets were stored and loaded as CSVs:

+
+
pd.read_csv("data/elections.csv").head(5)
+
+
   Year          Candidate                  Party  Popular vote Result      %
0  1824     Andrew Jackson  Democratic-Republican        151271   loss  57.21
1  1824  John Quincy Adams  Democratic-Republican        113142    win  42.79
2  1828     Andrew Jackson             Democratic        642806    win  56.20
3  1828  John Quincy Adams    National Republican        500897   loss  43.80
4  1832     Andrew Jackson             Democratic        702735    win  54.57
+
+
+

To better understand the properties of a CSV, let’s take a look at the first few rows of the raw data file to see what it looks like before being loaded into a DataFrame. We’ll use the repr() function to return the raw string with its special characters:

+
+
with open("data/elections.csv", "r") as table:
+    i = 0
+    for row in table:
+        print(repr(row))
+        i += 1
+        if i > 3:
+            break
+
+
'Year,Candidate,Party,Popular vote,Result,%\n'
+'1824,Andrew Jackson,Democratic-Republican,151271,loss,57.21012204\n'
+'1824,John Quincy Adams,Democratic-Republican,113142,win,42.78987796\n'
+'1828,Andrew Jackson,Democratic,642806,win,56.20392707\n'
+
+
+

Each row, or record, in the data is delimited by a newline \n. Each column, or field, in the data is delimited by a comma , (hence, comma-separated!).

+
+
+

5.1.1.2 TSV

+

Another common file type is TSV (Tab-Separated Values). In a TSV, records are still delimited by a newline \n, while fields are delimited by the tab character \t.

+

Let’s check out the first few rows of the raw TSV file. Again, we’ll use the repr() function so that print shows the special characters.

+
+
with open("data/elections.txt", "r") as table:
+    i = 0
+    for row in table:
+        print(repr(row))
+        i += 1
+        if i > 3:
+            break
+
+
'\ufeffYear\tCandidate\tParty\tPopular vote\tResult\t%\n'
+'1824\tAndrew Jackson\tDemocratic-Republican\t151271\tloss\t57.21012204\n'
+'1824\tJohn Quincy Adams\tDemocratic-Republican\t113142\twin\t42.78987796\n'
+'1828\tAndrew Jackson\tDemocratic\t642806\twin\t56.20392707\n'
+
+
+

TSVs can be loaded into pandas using pd.read_csv. We’ll need to specify the delimiter with the parameter sep='\t' (documentation).

+
+
pd.read_csv("data/elections.txt", sep='\t').head(3)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.21
11824John Quincy AdamsDemocratic-Republican113142win42.79
21828Andrew JacksonDemocratic642806win56.20
+ +
+
+
+

An issue with CSVs and TSVs comes up whenever there are commas or tabs within the records. How does pandas differentiate between a comma delimiter vs. a comma within the field itself, for example 8,900? To remedy this, check out the quotechar parameter.
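For example, a hypothetical sketch of how quoting lets a comma live inside a field (and how the thousands parameter then parses "8,900" as a number rather than a string):

import io
import pandas as pd

# The quoted fields containing commas are each treated as a single value
raw = 'City,Cases\n"Berkeley, CA","8,900"\n'
pd.read_csv(io.StringIO(raw), quotechar='"', thousands=",")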

+
+
+

5.1.1.3 JSON

+

JSON (JavaScript Object Notation) files behave similarly to Python dictionaries. A raw JSON is shown below.

+
+
with open("data/elections.json", "r") as table:
+    i = 0
+    for row in table:
+        print(row)
+        i += 1
+        if i > 8:
+            break
+
+
[
+
+ {
+
+   "Year": 1824,
+
+   "Candidate": "Andrew Jackson",
+
+   "Party": "Democratic-Republican",
+
+   "Popular vote": 151271,
+
+   "Result": "loss",
+
+   "%": 57.21012204
+
+ },
+
+
+
+

JSON files can be loaded into pandas using pd.read_json.

+
+
pd.read_json('data/elections.json').head(3)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.21
11824John Quincy AdamsDemocratic-Republican113142win42.79
21828Andrew JacksonDemocratic642806win56.20
+ +
+
+
+
+
5.1.1.3.1 EDA with JSON: Berkeley COVID-19 Data
+

The City of Berkeley Open Data website has a dataset with COVID-19 Confirmed Cases among Berkeley residents by date. Let’s download the file and save it as a JSON (note the source URL file type is also a JSON). In the interest of reproducible data science, we will download the data programmatically. We have defined some helper functions in the ds100_utils.py file so that we can reuse them in many different notebooks.

+
+
from ds100_utils import fetch_and_cache
+
+covid_file = fetch_and_cache(
+    "https://data.cityofberkeley.info/api/views/xn6j-b766/rows.json?accessType=DOWNLOAD",
+    "confirmed-cases.json",
+    force=False)
+covid_file          # a file path wrapper object
+
+
Using cached version that was downloaded (UTC): Tue Aug 27 03:33:01 2024
+
+
+
PosixPath('data/confirmed-cases.json')
+
+
+
+
5.1.1.3.1.1 File Size
+

Let’s start our analysis by getting a rough estimate of the size of the dataset to inform the tools we use to view the data. For relatively small datasets, we can use a text editor or spreadsheet. For larger datasets, more programmatic exploration or distributed computing tools may be more fitting. Here we will use Python tools to probe the file.

+

Since this appears to be a text file, let’s investigate the number of lines, which often corresponds to the number of records.

+
+
import os
+
+print(covid_file, "is", os.path.getsize(covid_file) / 1e6, "MB")
+
+with open(covid_file, "r") as f:
+    print(covid_file, "is", sum(1 for l in f), "lines.")
+
+
data/confirmed-cases.json is 0.116367 MB
+data/confirmed-cases.json is 1110 lines.
+
+
+
+
+
5.1.1.3.1.2 Unix Commands
+

As part of the EDA workflow, Unix commands can come in very handy. In fact, there’s an entire book called “Data Science at the Command Line” that explores this idea in depth! In Jupyter/IPython, you can prefix lines with ! to execute arbitrary Unix commands, and within those lines, you can refer to Python variables and expressions with the syntax {expr}.

+

Here, we use the ls command to list files, using the -lh flags, which request “long format with information in human-readable form.” We also use the wc command for “word count,” but with the -l flag, which asks for line counts instead of words.

+

These two give us the same information as the code above, albeit in a slightly different form:

+
+
!ls -lh {covid_file}
+!wc -l {covid_file}
+
+
-rw-r--r--  1 jianingding21  staff   114K Aug 27 03:33 data/confirmed-cases.json
+    1109 data/confirmed-cases.json
+
+
+
+
+
5.1.1.3.1.3 File Contents
+

Let’s explore the data format using Python.

+
+
with open(covid_file, "r") as f:
+    for i, row in enumerate(f):
+        print(repr(row)) # print raw strings
+        if i >= 4: break
+
+
'{\n'
+'  "meta" : {\n'
+'    "view" : {\n'
+'      "id" : "xn6j-b766",\n'
+'      "name" : "COVID-19 Confirmed Cases",\n'
+
+
+

We can use the head Unix command (which is where the pandas head method comes from!) to see the first few lines of the file:

+
+
!head -5 {covid_file}
+
+
{
+  "meta" : {
+    "view" : {
+      "id" : "xn6j-b766",
+      "name" : "COVID-19 Confirmed Cases",
+
+
+

In order to load the JSON file into pandas, let’s first do some EDA with Python’s json package to understand the particular structure of this JSON file so that we can decide what (if anything) to load into pandas. Python has relatively good support for JSON data since it closely matches the internal Python object model. In the following cell, we import the entire JSON data file into a Python dictionary using the json package.

+
+
import json
+
+with open(covid_file, "rb") as f:
+    covid_json = json.load(f)
+
+

The covid_json variable is now a dictionary encoding the data in the file:

+
+
type(covid_json)
+
+
dict
+
+
+

We can examine what keys are in the top level JSON object by listing out the keys.

+
+
covid_json.keys()
+
+
dict_keys(['meta', 'data'])
+
+
+

Observation: The JSON dictionary contains a meta key which likely refers to metadata (data about the data). Metadata is often maintained with the data and can be a good source of additional information.

+

We can investigate the metadata further by examining the keys associated with the metadata.

+
+
covid_json['meta'].keys()
+
+
dict_keys(['view'])
+
+
+

The meta key contains another dictionary called view. This likely refers to metadata about a particular “view” of some underlying database. We will learn more about views when we study SQL later in the class.

+
+
covid_json['meta']['view'].keys()
+
+
dict_keys(['id', 'name', 'assetType', 'attribution', 'averageRating', 'category', 'createdAt', 'description', 'displayType', 'downloadCount', 'hideFromCatalog', 'hideFromDataJson', 'newBackend', 'numberOfComments', 'oid', 'provenance', 'publicationAppendEnabled', 'publicationDate', 'publicationGroup', 'publicationStage', 'rowsUpdatedAt', 'rowsUpdatedBy', 'tableId', 'totalTimesRated', 'viewCount', 'viewLastModified', 'viewType', 'approvals', 'columns', 'grants', 'metadata', 'owner', 'query', 'rights', 'tableAuthor', 'tags', 'flags'])
+
+
+

Notice that this is a nested/recursive data structure. As we dig deeper, we reveal more and more keys and the corresponding data:

+
meta
+|-> data
+    | ... (haven't explored yet)
+|-> view
+    | -> id
+    | -> name
+    | -> attribution 
+    ...
+    | -> description
+    ...
+    | -> columns
+    ...
+

There is a key called description in the view sub dictionary. This likely contains a description of the data:

+
+
print(covid_json['meta']['view']['description'])
+
+
Counts of confirmed COVID-19 cases among Berkeley residents by date.
+
+
+
+
+
5.1.1.3.1.4 Examining the Data Field for Records
+

We can look at a few entries in the data field. This is what we’ll load into pandas.

+
+
for i in range(3):
+    print(f"{i:03} | {covid_json['data'][i]}")
+
+
000 | ['row-kzbg.v7my-c3y2', '00000000-0000-0000-0405-CB14DE51DAA7', 0, 1643733903, None, 1643733903, None, '{ }', '2020-02-28T00:00:00', '1', '1']
+001 | ['row-jkyx_9u4r-h2yw', '00000000-0000-0000-F806-86D0DBE0E17F', 0, 1643733903, None, 1643733903, None, '{ }', '2020-02-29T00:00:00', '0', '1']
+002 | ['row-qifg_4aug-y3ym', '00000000-0000-0000-2DCE-4D1872F9B216', 0, 1643733903, None, 1643733903, None, '{ }', '2020-03-01T00:00:00', '0', '1']
+
+
+

Observations:

  • These look like equal-length records, so maybe data is a table!
  • But what does each value in the record mean? Where can we find the column headers?

+

For that, we’ll need the columns key in the metadata dictionary. This returns a list:

+
+
type(covid_json['meta']['view']['columns'])
+
+
list
+
+
+
+
+
5.1.1.3.1.5 Summary of exploring the JSON file
+
    +
  1. The above metadata tells us a lot about the columns in the data including column names, potential data anomalies, and a basic statistic.
  2. +
  3. Because of its non-tabular structure, JSON makes it easier (than CSV) to create self-documenting data, meaning that information about the data is stored in the same file as the data.
  4. +
  5. Self-documenting data can be helpful since it maintains its own description and these descriptions are more likely to be updated as data changes.
  6. +
+
+
+
5.1.1.3.1.6 Loading COVID Data into pandas
+

Finally, let’s load the data (not the metadata) into a pandas DataFrame. In the following block of code we:

+
    +
  1. Translate the JSON records into a DataFrame:

    +
      +
    • fields: covid_json['meta']['view']['columns']
    • +
    • records: covid_json['data']
    • +
  2. +
  3. Remove columns that have no metadata description. This would be a bad idea in general, but here we remove these columns since the above analysis suggests they are unlikely to contain useful information.

  4. +
  5. Examine the tail of the table.

  6. +
+
+
# Load the data from JSON and assign column titles
+covid = pd.DataFrame(
+    covid_json['data'],
+    columns=[c['name'] for c in covid_json['meta']['view']['columns']])
+
+covid.tail()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
sididpositioncreated_atcreated_metaupdated_atupdated_metametaDateNew CasesCumulative Cases
699row-49b6_x8zv.gyum00000000-0000-0000-A18C-9174A6D0577401643733903None1643733903None{ }2022-01-27T00:00:0010610694
700row-gs55-p5em.y4v900000000-0000-0000-F41D-5724AEABB4D601643733903None1643733903None{ }2022-01-28T00:00:0022310917
701row-3pyj.tf95-qu6700000000-0000-0000-BEE3-B0188D2518BD01643733903None1643733903None{ }2022-01-29T00:00:0013911056
702row-cgnd.8syv.jvjn00000000-0000-0000-C318-63CF75F7F74001643733903None1643733903None{ }2022-01-30T00:00:003311089
703row-qywv_24x6-237y00000000-0000-0000-FE92-9789FED3AA2001643733903None1643733903None{ }2022-01-31T00:00:004211131
+ +
+
+
+
+
+
+
+
+

5.1.2 Primary and Foreign Keys

+

Last time, we introduced .merge as the pandas method for joining multiple DataFrames together. In our discussion of joins, we touched on the idea of using a “key” to determine what rows should be merged from each table. Let’s take a moment to examine this idea more closely.

+

The primary key is the column or set of columns in a table that uniquely determine the values of the remaining columns. It can be thought of as the unique identifier for each individual row in the table. For example, a table of Data 100 students might use each student’s Cal ID as the primary key.

+
+
+
       Cal ID   Name             Major
0  3034619471   Oski      Data Science
1  3035619472  Ollie  Computer Science
2  3025619473  Orrie      Data Science
3  3046789372  Ollie         Economics
+
+
+

The foreign key is the column or set of columns in a table that reference primary keys in other tables. Knowing a dataset’s foreign keys can be useful when assigning the left_on and right_on parameters of .merge. In the table of office hour tickets below, "Cal ID" is a foreign key referencing the previous table.

+
+
+
   OH Request      Cal ID  Question
0           1  3034619471   HW 2 Q1
1           2  3035619472   HW 2 Q3
2           3  3025619473  Lab 3 Q4
3           4  3035619472   HW 2 Q7
+
+
+
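As a small sketch (recreating the two example tables above by hand), the foreign key tells us which columns to pass to .merge:

import pandas as pd

students = pd.DataFrame({
    "Cal ID": [3034619471, 3035619472, 3025619473, 3046789372],
    "Name": ["Oski", "Ollie", "Orrie", "Ollie"],
    "Major": ["Data Science", "Computer Science", "Data Science", "Economics"],
})

tickets = pd.DataFrame({
    "OH Request": [1, 2, 3, 4],
    "Cal ID": [3034619471, 3035619472, 3025619473, 3035619472],
    "Question": ["HW 2 Q1", "HW 2 Q3", "Lab 3 Q4", "HW 2 Q7"],
})

# "Cal ID" is the primary key of students and a foreign key in tickets
tickets.merge(students, left_on="Cal ID", right_on="Cal ID")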
+
+

5.1.3 Variable Types

+

Variables are columns. A variable is a measurement of a particular concept. Variables have two common properties: data type/storage type and variable type/feature type. The data type of a variable indicates how each variable value is stored in memory (integer, floating point, boolean, etc.) and affects which pandas functions are used. The variable type is a conceptualized measurement of information (and therefore indicates what values a variable can take on). Variable type is identified through expert knowledge, exploring the data itself, or consulting the data codebook. The variable type affects how one visualizes and interprets the data. In this class, “variable types” are conceptual.

+

After loading the data, it’s a good idea to take the time to understand what pieces of information are encoded in the dataset. In particular, we want to identify what variable types are present in our data. Broadly speaking, we can categorize variables into one of two overarching types.

+

Quantitative variables describe some numeric quantity or amount. We can divide quantitative data further into:

+
    +
  • Continuous quantitative variables: numeric data that can be measured on a continuous scale to arbitrary precision. Continuous variables do not have a strict set of possible values – they can be recorded to any number of decimal places. For example, weights, GPA, or CO2 concentrations.
  • +
  • Discrete quantitative variables: numeric data that can only take on a finite set of possible values. For example, someone’s age or the number of siblings they have.
  • +
+

Qualitative variables, also known as categorical variables, describe data that isn’t measuring some quantity or amount. The sub-categories of categorical data are:

+
    +
  • Ordinal qualitative variables: categories with ordered levels. Specifically, ordinal variables are those where the difference between levels has no consistent, quantifiable meaning. Some examples include levels of education (high school, undergrad, grad, etc.), income bracket (low, medium, high), or Yelp rating.
  • +
  • Nominal qualitative variables: categories with no specific order. For example, someone’s political affiliation or Cal ID number.
  • +
+
+
+

+
Classification of variable types
+
+
+

Note that many variables don’t sit neatly in just one of these categories. Qualitative variables could have numeric levels, and conversely, quantitative variables could be stored as strings.

+
+
+
+

5.2 Granularity, Scope, and Temporality

+

After understanding the structure of the dataset, the next task is to determine what exactly the data represents. We’ll do so by considering the data’s granularity, scope, and temporality.

+
+

5.2.1 Granularity

+

The granularity of a dataset is what a single row represents. You can also think of it as the level of detail included in the data. To determine the data’s granularity, ask: what does each row in the dataset represent? Fine-grained data contains a high level of detail, with a single row representing a small individual unit. For example, each record may represent one person. Coarse-grained data is encoded such that a single row represents a large individual unit – for example, each record may represent a group of people.

+
+
+

5.2.2 Scope

+

The scope of a dataset is the subset of the population covered by the data. If we were investigating student performance in Data Science courses, a dataset with a narrow scope might encompass all students enrolled in Data 100 whereas a dataset with an expansive scope might encompass all students in California.

+
+
+

5.2.3 Temporality

+

The temporality of a dataset describes the periodicity over which the data was collected as well as when the data was most recently collected or updated.

+

Time and date fields of a dataset could represent a few things:

+
    +
  1. when the “event” happened
  2. +
  3. when the data was collected, or when it was entered into the system
  4. +
  5. when the data was copied into the database
  6. +
+

To fully understand the temporality of the data, it also may be necessary to standardize time zones or inspect recurring time-based trends in the data (do patterns recur in 24-hour periods? Over the course of a month? Seasonally?). The convention for standardizing time is Coordinated Universal Time (UTC), an international time standard measured at 0 degrees longitude that stays consistent throughout the year (no daylight savings). Berkeley’s time zone, Pacific Time, can be represented as UTC-8 during Pacific Standard Time (PST) and UTC-7 during daylight saving time (PDT).

+
+

5.2.3.1 Temporality with pandas’ dt accessors

+

Let’s briefly look at how we can use pandas’ dt accessors to work with dates/times, using the dataset you’ll see in Lab 3: the Berkeley PD Calls for Service dataset.

+
+
+Code +
calls = pd.read_csv("data/Berkeley_PD_-_Calls_for_Service.csv")
+calls.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CASENOOFFENSEEVENTDTEVENTTMCVLEGENDCVDOWInDbDateBlock_LocationBLKADDRCityState
021014296THEFT MISD. (UNDER $950)04/01/2021 12:00:00 AM10:58LARCENY406/15/2021 12:00:00 AMBerkeley, CA\n(37.869058, -122.270455)NaNBerkeleyCA
121014391THEFT MISD. (UNDER $950)04/01/2021 12:00:00 AM10:38LARCENY406/15/2021 12:00:00 AMBerkeley, CA\n(37.869058, -122.270455)NaNBerkeleyCA
221090494THEFT MISD. (UNDER $950)04/19/2021 12:00:00 AM12:15LARCENY106/15/2021 12:00:00 AM2100 BLOCK HASTE ST\nBerkeley, CA\n(37.864908,...2100 BLOCK HASTE STBerkeleyCA
321090204THEFT FELONY (OVER $950)02/13/2021 12:00:00 AM17:00LARCENY606/15/2021 12:00:00 AM2600 BLOCK WARRING ST\nBerkeley, CA\n(37.86393...2600 BLOCK WARRING STBerkeleyCA
421090179BURGLARY AUTO02/08/2021 12:00:00 AM6:20BURGLARY - VEHICLE106/15/2021 12:00:00 AM2700 BLOCK GARBER ST\nBerkeley, CA\n(37.86066,...2700 BLOCK GARBER STBerkeleyCA
+ +
+
+
+

Looks like there are three columns with dates/times: EVENTDT, EVENTTM, and InDbDate.

+

Most likely, EVENTDT stands for the date when the event took place, EVENTTM stands for the time of day the event took place (in 24-hour format), and InDbDate is the date this call was recorded in the database.

+

If we check the data type of these columns, we will see they are stored as strings. We can convert them to datetime objects using pandas to_datetime function.

+
+
calls["EVENTDT"] = pd.to_datetime(calls["EVENTDT"])
+calls.head()
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57895/874729699.py:1: UserWarning:
+
+Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CASENOOFFENSEEVENTDTEVENTTMCVLEGENDCVDOWInDbDateBlock_LocationBLKADDRCityState
021014296THEFT MISD. (UNDER $950)2021-04-0110:58LARCENY406/15/2021 12:00:00 AMBerkeley, CA\n(37.869058, -122.270455)NaNBerkeleyCA
121014391THEFT MISD. (UNDER $950)2021-04-0110:38LARCENY406/15/2021 12:00:00 AMBerkeley, CA\n(37.869058, -122.270455)NaNBerkeleyCA
221090494THEFT MISD. (UNDER $950)2021-04-1912:15LARCENY106/15/2021 12:00:00 AM2100 BLOCK HASTE ST\nBerkeley, CA\n(37.864908,...2100 BLOCK HASTE STBerkeleyCA
321090204THEFT FELONY (OVER $950)2021-02-1317:00LARCENY606/15/2021 12:00:00 AM2600 BLOCK WARRING ST\nBerkeley, CA\n(37.86393...2600 BLOCK WARRING STBerkeleyCA
421090179BURGLARY AUTO2021-02-086:20BURGLARY - VEHICLE106/15/2021 12:00:00 AM2700 BLOCK GARBER ST\nBerkeley, CA\n(37.86066,...2700 BLOCK GARBER STBerkeleyCA
+ +
+
+
+
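The UserWarning above appears because pandas had to guess the date format. A hedged alternative sketch is to spell the format out explicitly when parsing the raw strings (the format string below matches dates like "04/01/2021 12:00:00 AM"):

# An alternative to the conversion above, reading the raw strings again
calls_raw = pd.read_csv("data/Berkeley_PD_-_Calls_for_Service.csv")
calls_raw["EVENTDT"] = pd.to_datetime(
    calls_raw["EVENTDT"], format="%m/%d/%Y %I:%M:%S %p"
)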

Now, we can use the dt accessor on this column.

+

We can get the month:

+
+
calls["EVENTDT"].dt.month.head()
+
+
0    4
+1    4
+2    4
+3    2
+4    2
+Name: EVENTDT, dtype: int32
+
+
+

Which day of the week the date is on:

+
+
calls["EVENTDT"].dt.dayofweek.head()
+
+
0    3
+1    3
+2    0
+3    5
+4    0
+Name: EVENTDT, dtype: int32
+
+
+

Check the minimum values to see if there are any suspicious-looking dates from the 1970s (misparsed or placeholder dates often default to the Unix epoch, January 1, 1970):

+
+
calls.sort_values("EVENTDT").head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CASENOOFFENSEEVENTDTEVENTTMCVLEGENDCVDOWInDbDateBlock_LocationBLKADDRCityState
251320057398BURGLARY COMMERCIAL2020-12-1716:05BURGLARY - COMMERCIAL406/15/2021 12:00:00 AM600 BLOCK GILMAN ST\nBerkeley, CA\n(37.878405,...600 BLOCK GILMAN STBerkeleyCA
62420057207ASSAULT/BATTERY MISD.2020-12-1716:50ASSAULT406/15/2021 12:00:00 AM2100 BLOCK SHATTUCK AVE\nBerkeley, CA\n(37.871...2100 BLOCK SHATTUCK AVEBerkeleyCA
15420092214THEFT FROM AUTO2020-12-1718:30LARCENY - FROM VEHICLE406/15/2021 12:00:00 AM800 BLOCK SHATTUCK AVE\nBerkeley, CA\n(37.8918...800 BLOCK SHATTUCK AVEBerkeleyCA
65920057324THEFT MISD. (UNDER $950)2020-12-1715:44LARCENY406/15/2021 12:00:00 AM1800 BLOCK 4TH ST\nBerkeley, CA\n(37.869888, -...1800 BLOCK 4TH STBerkeleyCA
99320057573BURGLARY RESIDENTIAL2020-12-1722:15BURGLARY - RESIDENTIAL406/15/2021 12:00:00 AM1700 BLOCK STUART ST\nBerkeley, CA\n(37.857495...1700 BLOCK STUART STBerkeleyCA
+ +
+
+
+

Doesn’t look like it! We are good!

+

We can also do many things with the dt accessor like switching time zones and converting time back to UNIX/POSIX time. Check out the documentation on .dt accessor and time series/date functionality.
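For instance, a rough sketch (assuming the EVENTDT timestamps are in Pacific time; daylight-saving edge cases are ignored here):

# Attach a time zone, convert to UTC, then compute Unix time in seconds
eventdt_utc = calls["EVENTDT"].dt.tz_localize("US/Pacific").dt.tz_convert("UTC")
unix_seconds = (eventdt_utc - pd.Timestamp("1970-01-01", tz="UTC")) // pd.Timedelta("1s")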

+
+
+
+
+

5.3 Faithfulness

+

At this stage in our data cleaning and EDA workflow, we’ve achieved quite a lot: we’ve identified how our data is structured, come to terms with what information it encodes, and gained insight as to how it was generated. Throughout this process, we should always recall the original intent of our work in Data Science – to use data to better understand and model the real world. To achieve this goal, we need to ensure that the data we use is faithful to reality; that is, that our data accurately captures the “real world.”

+

Data used in research or industry is often “messy” – there may be errors or inaccuracies that impact the faithfulness of the dataset. Signs that data may not be faithful include:

+
    +
  • Unrealistic or “incorrect” values, such as negative counts, locations that don’t exist, or dates set in the future
  • +
  • Violations of obvious dependencies, like an age that does not match a birthday
  • +
  • Clear signs that data was entered by hand, which can lead to spelling errors or fields that are incorrectly shifted
  • +
  • Signs of data falsification, such as fake email addresses or repeated use of the same names
  • +
  • Duplicated records or fields containing the same information
  • +
  • Truncated data, e.g., older versions of Microsoft Excel limited spreadsheets to 65,536 rows and 256 columns
  • +
+

We often solve some of these more common issues in the following ways:

+
    +
  • Spelling errors: apply corrections or drop records that aren’t in a dictionary
  • +
  • Time zone inconsistencies: convert to a common time zone (e.g. UTC)
  • +
  • Duplicated records or fields: identify and eliminate duplicates (using primary keys)
  • +
  • Unspecified or inconsistent units: infer the units and check that values are in reasonable ranges in the data
  • +
+
+

5.3.1 Missing Values

+

Another common issue encountered with real-world datasets is that of missing data. One strategy to resolve this is to simply drop any records with missing values from the dataset. This does, however, introduce the risk of inducing biases – it is possible that the missing or corrupt records may be systemically related to some feature of interest in the data. Another solution is to keep the data as NaN values.

+

A third method to address missing data is to perform imputation: infer the missing values using other data available in the dataset. There is a wide variety of imputation techniques that can be implemented; some of the most common are listed below.

+
    +
  • Average imputation: replace missing values with the average value for that field
  • +
  • Hot deck imputation: replace missing values with a randomly chosen value from similar observed records
  • +
  • Regression imputation: develop a model to predict missing values and replace with the predicted value from the model.
  • +
  • Multiple imputation: fill in the missing values with several different random draws, creating multiple completed datasets whose results are then combined
  • +
+

Regardless of the strategy used to deal with missing data, we should think carefully about why particular records or fields may be missing – this can help inform whether or not the absence of these values is significant or meaningful.
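A tiny sketch of average imputation on a hypothetical column (the other strategies above swap the .mean() fill value for a model prediction or a random draw):

import numpy as np
import pandas as pd

df = pd.DataFrame({"weight": [120.0, np.nan, 150.0, 135.0]})

# Average imputation: fill missing entries with the column mean
df["weight"] = df["weight"].fillna(df["weight"].mean())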

+
+
+
+

5.4 EDA Demo 1: Tuberculosis in the United States

+

Now, let’s walk through the data-cleaning and EDA workflow to see what can we learn about the presence of Tuberculosis in the United States!

+

We will examine the data included in the original CDC article published in 2021.

+
+

5.4.1 CSVs and Field Names

+

Suppose Table 1 was saved as a CSV file located in data/cdc_tuberculosis.csv.

+

We can then explore the CSV (which is a text file, and does not contain binary-encoded data) in many ways:

  1. Using a text editor like emacs, vim, VSCode, etc.
  2. Opening the CSV directly in DataHub (read-only), Excel, Google Sheets, etc.
  3. The Python file object
  4. pandas, using pd.read_csv()

+

To try out options 1 and 2, you can view or download the Tuberculosis dataset from the lecture demo notebook under the data folder in the left-hand menu. Notice how the CSV file is a type of rectangular data (i.e., tabular data) stored as comma-separated values.

+

Next, let’s try out option 3 using the Python file object. We’ll look at the first four lines:

+
+
+Code +
with open("data/cdc_tuberculosis.csv", "r") as f:
+    i = 0
+    for row in f:
+        print(row)
+        i += 1
+        if i > 3:
+            break
+
+
+
,No. of TB cases,,,TB incidence,,
+
+U.S. jurisdiction,2019,2020,2021,2019,2020,2021
+
+Total,"8,900","7,173","7,860",2.71,2.16,2.37
+
+Alabama,87,72,92,1.77,1.43,1.83
+
+
+
+

Whoa, why are there blank lines interspaced between the lines of the CSV?

+

You may recall that all line breaks in text files are encoded as the special newline character \n. Python’s print() prints each string (including the newline), and an additional newline on top of that.

+

If you’re curious, we can use the repr() function to return the raw string with all special characters:

+
+
+Code +
with open("data/cdc_tuberculosis.csv", "r") as f:
+    i = 0
+    for row in f:
+        print(repr(row)) # print raw strings
+        i += 1
+        if i > 3:
+            break
+
+
+
',No. of TB cases,,,TB incidence,,\n'
+'U.S. jurisdiction,2019,2020,2021,2019,2020,2021\n'
+'Total,"8,900","7,173","7,860",2.71,2.16,2.37\n'
+'Alabama,87,72,92,1.77,1.43,1.83\n'
+
+
+

Finally, let’s try option 4 and use the tried-and-true Data 100 approach: pandas.

+
+
tb_df = pd.read_csv("data/cdc_tuberculosis.csv")
+tb_df.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Unnamed: 0No. of TB casesUnnamed: 2Unnamed: 3TB incidenceUnnamed: 5Unnamed: 6
0U.S. jurisdiction2019202020212019.002020.002021.00
1Total8,9007,1737,8602.712.162.37
2Alabama8772921.771.431.83
3Alaska5858587.917.927.92
4Arizona1831361292.511.891.77
+ +
+
+
+

You may notice some strange things about this table: what’s up with the “Unnamed” column names and the first row?

+

Congratulations — you’re ready to wrangle your data! Because of how things are stored, we’ll need to clean the data a bit to name our columns better.

+

A reasonable first step is to identify the row with the right header. The pd.read_csv() function (documentation) has the convenient header parameter that we can set to use the elements in row 1 as the appropriate columns:

+
+
tb_df = pd.read_csv("data/cdc_tuberculosis.csv", header=1) # row index
+tb_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdiction2019202020212019.12020.12021.1
0Total8,9007,1737,8602.712.162.37
1Alabama8772921.771.431.83
2Alaska5858587.917.927.92
3Arizona1831361292.511.891.77
4Arkansas6459692.121.962.28
+ +
+
+
+

Wait…but now we can’t differentiate between the “Number of TB cases” and “TB incidence” year columns. pandas has tried to make our lives easier by automatically adding “.1” to the latter columns, but this doesn’t help us, as humans, understand the data.

+

We can do this manually with df.rename() (documentation):

+
+
rename_dict = {'2019': 'TB cases 2019',
+               '2020': 'TB cases 2020',
+               '2021': 'TB cases 2021',
+               '2019.1': 'TB incidence 2019',
+               '2020.1': 'TB incidence 2020',
+               '2021.1': 'TB incidence 2021'}
+tb_df = tb_df.rename(columns=rename_dict)
+tb_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021
0Total8,9007,1737,8602.712.162.37
1Alabama8772921.771.431.83
2Alaska5858587.917.927.92
3Arizona1831361292.511.891.77
4Arkansas6459692.121.962.28
+ +
+
+
+
+
+

5.4.2 Record Granularity

+

You might already be wondering: what’s up with that first record?

+

Row 0 is what we call a rollup record, or summary record. It’s often useful when displaying tables to humans. The granularity of record 0 (Totals) vs the rest of the records (States) is different.

+

Okay, EDA step two. How was the rollup record aggregated?

+

Let’s check if Total TB cases is the sum of all state TB cases. If we sum over all rows, we should get 2x the total cases in each of our TB cases by year (why do you think this is?).

+
+
+Code +
tb_df.sum(axis=0)
+
+
+
U.S. jurisdiction    TotalAlabamaAlaskaArizonaArkansasCaliforniaCol...
+TB cases 2019        8,9008758183642,111666718245583029973261085237...
+TB cases 2020        7,1737258136591,706525417194122219282169239376...
+TB cases 2021        7,8609258129691,750585443194992281064255127494...
+TB incidence 2019                                               109.94
+TB incidence 2020                                                93.09
+TB incidence 2021                                               102.94
+dtype: object
+
+
+

Whoa, what’s going on with the TB cases in 2019, 2020, and 2021? Check out the column types:

+
+
+Code +
tb_df.dtypes
+
+
+
U.S. jurisdiction     object
+TB cases 2019         object
+TB cases 2020         object
+TB cases 2021         object
+TB incidence 2019    float64
+TB incidence 2020    float64
+TB incidence 2021    float64
+dtype: object
+
+
+

Since there are commas in the values for TB cases, the numbers are read as the object datatype, or storage type (close to the Python string datatype). As a result, pandas is concatenating strings instead of adding integers (recall that Python can “sum”, or concatenate, strings: "data" + "100" evaluates to "data100").
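To see this concatenation behavior for yourself, here is a minimal sketch (the small Series below is made up for illustration; it assumes pandas is imported as pd, as elsewhere in this lecture):

comma_series = pd.Series(["8,900", "87", "58"])
comma_series.sum()                                    # '8,9008758': string concatenation, not addition
comma_series.str.replace(",", "").astype(int).sum()   # 9045: one manual way to fix the dtype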

+

Fortunately read_csv also has a thousands parameter (documentation):

+
+
# improve readability: chaining method calls with outer parentheses/line breaks
+tb_df = (
+    pd.read_csv("data/cdc_tuberculosis.csv", header=1, thousands=',')
+    .rename(columns=rename_dict)
+)
+tb_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021
0Total8900717378602.712.162.37
1Alabama8772921.771.431.83
2Alaska5858587.917.927.92
3Arizona1831361292.511.891.77
4Arkansas6459692.121.962.28
+ +
+
+
+
+
tb_df.sum()
+
+
U.S. jurisdiction    TotalAlabamaAlaskaArizonaArkansasCaliforniaCol...
+TB cases 2019                                                    17800
+TB cases 2020                                                    14346
+TB cases 2021                                                    15720
+TB incidence 2019                                               109.94
+TB incidence 2020                                                93.09
+TB incidence 2021                                               102.94
+dtype: object
+
+
+

The total TB cases look right. Phew!

+

Let’s just look at the records with state-level granularity:

+
+
+Code +
state_tb_df = tb_df[1:]
+state_tb_df.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021
1Alabama8772921.771.431.83
2Alaska5858587.917.927.92
3Arizona1831361292.511.891.77
4Arkansas6459692.121.962.28
5California2111170617505.354.324.46
+ +
+
+
+
+
+

5.4.3 Gather Census Data

+

U.S. Census population estimates source (2019), source (2020-2021).

+

Running the below cells cleans the data. There are a few new methods here:
  • df.convert_dtypes() (documentation) conveniently converts all float dtypes into ints and is out of scope for the class.
  • df.dropna() (documentation) will be explained in more detail next time.

+
+
+Code +
# 2010s census data
+census_2010s_df = pd.read_csv("data/nst-est2019-01.csv", header=3, thousands=",")
+census_2010s_df = (
+    census_2010s_df
+    .reset_index()
+    .drop(columns=["index", "Census", "Estimates Base"])
+    .rename(columns={"Unnamed: 0": "Geographic Area"})
+    .convert_dtypes()                 # "smart" converting of columns, use at your own risk
+    .dropna()                         # we'll introduce this next time
+)
+census_2010s_df['Geographic Area'] = census_2010s_df['Geographic Area'].str.strip('.')
+
+# with pd.option_context('display.min_rows', 30): # shows more rows
+#     display(census_2010s_df)
+    
+census_2010s_df.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Geographic Area2010201120122013201420152016201720182019
0United States309321666311556874313830990315993715318301008320635163322941311324985539326687501328239523
1Northeast55380134556042235577521655901806560060115603468456042330560592405604662055982803
2Midwest66974416671578006733674367560379677451676786058367987540681267816823662868329004
3South114866680116006522117241208118364400119624037120997341122351760123542189124569433125580448
4West72100436727883297347782374167130749257937574255576559681772573297783482078347268
+ +
+
+
+

Occasionally, you will want to modify code that you have imported. To reimport those modifications you can either use python’s importlib library:

+
from importlib import reload
+reload(utils)
+

or use iPython magic which will intelligently import code when files change:

+
%load_ext autoreload
+%autoreload 2
+
+
+Code +
# census 2020s data
+census_2020s_df = pd.read_csv("data/NST-EST2022-POP.csv", header=3, thousands=",")
+census_2020s_df = (
+    census_2020s_df
+    .reset_index()
+    .drop(columns=["index", "Unnamed: 1"])
+    .rename(columns={"Unnamed: 0": "Geographic Area"})
+    .convert_dtypes()                 # "smart" converting of columns, use at your own risk
+    .dropna()                         # we'll introduce this next time
+)
+census_2020s_df['Geographic Area'] = census_2020s_df['Geographic Area'].str.strip('.')
+
+census_2020s_df.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Geographic Area202020212022
0United States331511512332031554333287557
1Northeast574488985725925757040406
2Midwest689610436883650568787595
3South126450613127346029128716192
4West786509587858976378743364
+ +
+
+
+
+
+

5.4.4 Joining Data (Merging DataFrames)

+

Time to merge! Here we use the DataFrame method df1.merge(right=df2, ...) on DataFrame df1 (documentation). Contrast this with the function pd.merge(left=df1, right=df2, ...) (documentation). Feel free to use either.

+
+
# merge TB DataFrame with two US census DataFrames
+tb_census_df = (
+    tb_df
+    .merge(right=census_2010s_df,
+           left_on="U.S. jurisdiction", right_on="Geographic Area")
+    .merge(right=census_2020s_df,
+           left_on="U.S. jurisdiction", right_on="Geographic Area")
+)
+tb_census_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021Geographic Area_x2010201120122013201420152016201720182019Geographic Area_y202020212022
0Alabama8772921.771.431.83Alabama4785437479906948155884830081484179948523474863525487448648876814903185Alabama503136250498465074296
1Alaska5858587.917.927.92Alaska713910722128730443737068736283737498741456739700735139731545Alaska732923734182733583
2Arizona1831361292.511.891.77Arizona6407172647264365549786632764673041368296766941072704400871580247278717Arizona717994372648777359197
3Arkansas6459692.121.962.28Arkansas2921964294066729521642959400296739229780482989918300134530097333017804Arkansas301419530281223045637
4California2111170617505.354.324.46California37319502376383693794880038260787385969723891804539167117393584973946158839512223California395016533914299139029342
+ +
+
+
+

Having all of these columns is a little unwieldy. We could either drop the unneeded columns now, or just merge on smaller census DataFrames. Let’s do the latter.

+
+
# try merging again, but cleaner this time
+tb_census_df = (
+    tb_df
+    .merge(right=census_2010s_df[["Geographic Area", "2019"]],
+           left_on="U.S. jurisdiction", right_on="Geographic Area")
+    .drop(columns="Geographic Area")
+    .merge(right=census_2020s_df[["Geographic Area", "2020", "2021"]],
+           left_on="U.S. jurisdiction", right_on="Geographic Area")
+    .drop(columns="Geographic Area")
+)
+tb_census_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021201920202021
0Alabama8772921.771.431.83490318550313625049846
1Alaska5858587.917.927.92731545732923734182
2Arizona1831361292.511.891.77727871771799437264877
3Arkansas6459692.121.962.28301780430141953028122
4California2111170617505.354.324.46395122233950165339142991
+ +
+
+
+
+
+

5.4.5 Reproducing Data: Compute Incidence

+

Let’s recompute incidence to make sure we know where the original CDC numbers came from.

+

From the CDC report: TB incidence is computed as “Cases per 100,000 persons using mid-year population estimates from the U.S. Census Bureau.”

+

If we define a group as 100,000 people, then we can compute the TB incidence for a given state population as

+

\[\text{TB incidence} = \frac{\text{TB cases in population}}{\text{groups in population}} = \frac{\text{TB cases in population}}{\text{population}/100000} \]

+

\[= \frac{\text{TB cases in population}}{\text{population}} \times 100000\]

+

Let’s try this for 2019:

+
+
tb_census_df["recompute incidence 2019"] = tb_census_df["TB cases 2019"]/tb_census_df["2019"]*100000
+tb_census_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021201920202021recompute incidence 2019
0Alabama8772921.771.431.834903185503136250498461.77
1Alaska5858587.917.927.927315457329237341827.93
2Arizona1831361292.511.891.777278717717994372648772.51
3Arkansas6459692.121.962.283017804301419530281222.12
4California2111170617505.354.324.463951222339501653391429915.34
+ +
+
+
+

Awesome!!!

+

Let’s use a for-loop and Python format strings to compute TB incidence for all years. Python f-strings are just used for the purposes of this demo, but they’re handy to know when you explore data beyond this course (documentation).

+
+
# recompute incidence for all years
+for year in [2019, 2020, 2021]:
+    tb_census_df[f"recompute incidence {year}"] = tb_census_df[f"TB cases {year}"]/tb_census_df[f"{year}"]*100000
+tb_census_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
U.S. jurisdictionTB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021201920202021recompute incidence 2019recompute incidence 2020recompute incidence 2021
0Alabama8772921.771.431.834903185503136250498461.771.431.82
1Alaska5858587.917.927.927315457329237341827.937.917.90
2Arizona1831361292.511.891.777278717717994372648772.511.891.78
3Arkansas6459692.121.962.283017804301419530281222.121.962.28
4California2111170617505.354.324.463951222339501653391429915.344.324.47
+ +
+
+
+

These numbers look pretty close!!! There are a few errors in the hundredths place, particularly in 2021. It may be useful to further explore reasons behind this discrepancy.
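One quick way to examine the size of the discrepancy is to compare the recomputed and reported incidence columns directly. This is a small sketch (not part of the original demo), using the column names created above:

for year in [2019, 2020, 2021]:
    gap = (tb_census_df[f"recompute incidence {year}"] - tb_census_df[f"TB incidence {year}"]).abs()
    print(year, round(gap.max(), 3))   # largest absolute gap between recomputed and reported incidence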

+
+
tb_census_df.describe()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021201920202021recompute incidence 2019recompute incidence 2020recompute incidence 2021
count51.0051.0051.0051.0051.0051.0051.0051.0051.0051.0051.0051.00
mean174.51140.65154.122.101.781.976436069.086500225.736510422.632.101.781.97
std341.74271.06286.781.501.341.487360660.477408168.467394300.081.501.341.47
min1.000.002.000.170.000.21578759.00577605.00579483.000.170.000.21
25%25.5029.0023.001.291.211.231789606.001820311.001844920.001.301.211.23
50%70.0067.0069.001.801.521.704467673.004507445.004506589.001.811.521.69
75%180.50139.00150.002.581.992.227446805.007451987.007502811.002.581.992.22
max2111.001706.001750.007.917.927.9239512223.0039501653.0039142991.007.937.917.90
+ +
+
+
+
+
+

5.4.6 Bonus EDA: Reproducing the Reported Statistic

+

How do we reproduce that reported statistic in the original CDC report?

+
+

Reported TB incidence (cases per 100,000 persons) increased 9.4%, from 2.2 during 2020 to 2.4 during 2021 but was lower than incidence during 2019 (2.7). Increases occurred among both U.S.-born and non–U.S.-born persons.

+
+

This is TB incidence computed across the entire U.S. population! How do we reproduce this?
  • We need to reproduce the “Total” TB incidences in our rolled record.
  • But our current tb_census_df only has 51 entries (50 states plus Washington, D.C.). There is no rolled record.
  • What happened…?

+

Let’s get exploring!

+

Before we keep exploring, we’ll set all indexes to more meaningful values, instead of just numbers that pertain to some row at some point. This will make our cleaning slightly easier.

+
+
+Code +
tb_df = tb_df.set_index("U.S. jurisdiction")
+tb_df.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021
U.S. jurisdiction
Total8900717378602.712.162.37
Alabama8772921.771.431.83
Alaska5858587.917.927.92
Arizona1831361292.511.891.77
Arkansas6459692.121.962.28
+ +
+
+
+
+
census_2010s_df = census_2010s_df.set_index("Geographic Area")
+census_2010s_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
2010201120122013201420152016201720182019
Geographic Area
United States309321666311556874313830990315993715318301008320635163322941311324985539326687501328239523
Northeast55380134556042235577521655901806560060115603468456042330560592405604662055982803
Midwest66974416671578006733674367560379677451676786058367987540681267816823662868329004
South114866680116006522117241208118364400119624037120997341122351760123542189124569433125580448
West72100436727883297347782374167130749257937574255576559681772573297783482078347268
+ +
+
+
+
+
census_2020s_df = census_2020s_df.set_index("Geographic Area")
+census_2020s_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
202020212022
Geographic Area
United States331511512332031554333287557
Northeast574488985725925757040406
Midwest689610436883650568787595
South126450613127346029128716192
West786509587858976378743364
+ +
+
+
+

It turns out that our merge above only kept state records, even though our original tb_df had the “Total” rolled record:

+
+
tb_df.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021
U.S. jurisdiction
Total8900717378602.712.162.37
Alabama8772921.771.431.83
Alaska5858587.917.927.92
Arizona1831361292.511.891.77
Arkansas6459692.121.962.28
+ +
+
+
+

Recall that merge does an inner merge by default, meaning that it only preserves keys that are present in both DataFrames.
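As a minimal illustration of this behavior (toy DataFrames, not our census data):

left = pd.DataFrame({"key": ["Total", "Alabama"], "x": [1, 2]})
right = pd.DataFrame({"key": ["United States", "Alabama"], "y": [3, 4]})
left.merge(right, on="key")                 # inner join: only "Alabama" survives
left.merge(right, on="key", how="outer")    # an outer join would keep all keys, filling NaN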

+

The rolled records in our census DataFrame have different Geographic Area fields, which was the key we merged on:

+
+
census_2010s_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
2010201120122013201420152016201720182019
Geographic Area
United States309321666311556874313830990315993715318301008320635163322941311324985539326687501328239523
Northeast55380134556042235577521655901806560060115603468456042330560592405604662055982803
Midwest66974416671578006733674367560379677451676786058367987540681267816823662868329004
South114866680116006522117241208118364400119624037120997341122351760123542189124569433125580448
West72100436727883297347782374167130749257937574255576559681772573297783482078347268
+ +
+
+
+

The Census DataFrame has several rolled records. The aggregate record we are looking for actually has the Geographic Area named “United States”.

+

One straightforward way to get the right merge is to rename the value itself. Because we now have the Geographic Area index, we’ll use df.rename() (documentation):

+
+
# rename rolled record for 2010s
+census_2010s_df.rename(index={'United States':'Total'}, inplace=True)
+census_2010s_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
2010201120122013201420152016201720182019
Geographic Area
Total309321666311556874313830990315993715318301008320635163322941311324985539326687501328239523
Northeast55380134556042235577521655901806560060115603468456042330560592405604662055982803
Midwest66974416671578006733674367560379677451676786058367987540681267816823662868329004
South114866680116006522117241208118364400119624037120997341122351760123542189124569433125580448
West72100436727883297347782374167130749257937574255576559681772573297783482078347268
+ +
+
+
+
+
# same, but for 2020s rename rolled record
+census_2020s_df.rename(index={'United States':'Total'}, inplace=True)
+census_2020s_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
202020212022
Geographic Area
Total331511512332031554333287557
Northeast574488985725925757040406
Midwest689610436883650568787595
South126450613127346029128716192
West786509587858976378743364
+ +
+
+
+


+

Next let’s rerun our merge. Note the different chaining, because we are now merging on indexes (df.merge() documentation).

+
+
tb_census_df = (
+    tb_df
+    .merge(right=census_2010s_df[["2019"]],
+           left_index=True, right_index=True)
+    .merge(right=census_2020s_df[["2020", "2021"]],
+           left_index=True, right_index=True)
+)
+tb_census_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021201920202021
Total8900717378602.712.162.37328239523331511512332031554
Alabama8772921.771.431.83490318550313625049846
Alaska5858587.917.927.92731545732923734182
Arizona1831361292.511.891.77727871771799437264877
Arkansas6459692.121.962.28301780430141953028122
+ +
+
+
+


+

Finally, let’s recompute our incidences:

+
+
# recompute incidence for all years
+for year in [2019, 2020, 2021]:
+    tb_census_df[f"recompute incidence {year}"] = tb_census_df[f"TB cases {year}"]/tb_census_df[f"{year}"]*100000
+tb_census_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TB cases 2019TB cases 2020TB cases 2021TB incidence 2019TB incidence 2020TB incidence 2021201920202021recompute incidence 2019recompute incidence 2020recompute incidence 2021
Total8900717378602.712.162.373282395233315115123320315542.712.162.37
Alabama8772921.771.431.834903185503136250498461.771.431.82
Alaska5858587.917.927.927315457329237341827.937.917.90
Arizona1831361292.511.891.777278717717994372648772.511.891.78
Arkansas6459692.121.962.283017804301419530281222.121.962.28
+ +
+
+
+

We reproduced the total U.S. incidences correctly!

+

We’re almost there. Let’s revisit the quote:

+
+

Reported TB incidence (cases per 100,000 persons) increased 9.4%, from 2.2 during 2020 to 2.4 during 2021 but was lower than incidence during 2019 (2.7). Increases occurred among both U.S.-born and non–U.S.-born persons.

+
+

Recall that percent change from \(A\) to \(B\) is computed as \(\text{percent change} = \frac{B - A}{A} \times 100\).

+
+
incidence_2020 = tb_census_df.loc['Total', 'recompute incidence 2020']
+incidence_2020
+
+
np.float64(2.1637257652759883)
+
+
+
+
incidence_2021 = tb_census_df.loc['Total', 'recompute incidence 2021']
+incidence_2021
+
+
np.float64(2.3672448914298068)
+
+
+
+
difference = (incidence_2021 - incidence_2020)/incidence_2020 * 100
+difference
+
+
np.float64(9.405957511804143)
+
+
+
+
+
+

5.5 EDA Demo 2: Mauna Loa CO2 Data – A Lesson in Data Faithfulness

+

Mauna Loa Observatory has been monitoring CO2 concentrations since 1958.

+
+
co2_file = "data/co2_mm_mlo.txt"
+
+

Let’s do some EDA!!

+
+

5.5.1 Reading this file into Pandas?

+

Let’s instead check out this .txt file directly. Some questions to keep in mind: Do we trust this file extension? What structure does the file have?

+

Lines 71-78 (inclusive) are shown below:

+
line number |                            file contents
+
+71          |   #            decimal     average   interpolated    trend    #days
+72          |   #             date                             (season corr)
+73          |   1958   3    1958.208      315.71      315.71      314.62     -1
+74          |   1958   4    1958.292      317.45      317.45      315.29     -1
+75          |   1958   5    1958.375      317.50      317.50      314.71     -1
+76          |   1958   6    1958.458      -99.99      317.10      314.85     -1
+77          |   1958   7    1958.542      315.86      315.86      314.98     -1
+78          |   1958   8    1958.625      314.93      314.93      315.94     -1
+

Notice how:

+
    +
  • The values are separated by white space, possibly tabs.
  • +
  • The data values line up down the rows. For example, the month appears in the 7th to 8th position of each line.
  • +
  • The 71st and 72nd lines in the file contain column headings split over two lines.
  • +
+

We can use read_csv to read the data into a pandas DataFrame, providing several arguments to specify that the separators are white space, that there is no header (we will set our own column names), and that the first 72 rows of the file should be skipped.

+
+
co2 = pd.read_csv(
+    co2_file, header = None, skiprows = 72,
+    sep = r'\s+'       # delimiter for continuous whitespace (stay tuned for regex next lecture)
+)
+co2.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
0123456
0195831958.21315.71315.71314.62-1
1195841958.29317.45317.45315.29-1
2195851958.38317.50317.50314.71-1
3195861958.46-99.99317.10314.85-1
4195871958.54315.86315.86314.98-1
+ +
+
+
+

Congratulations! You’ve wrangled the data!

+


+

…But our columns aren’t named. We need to do more EDA.

+
+
+

5.5.2 Exploring Variable Feature Types

+

The NOAA webpage might have some useful tidbits (in this case it doesn’t).

+

Using this information, we’ll rerun pd.read_csv, but this time with some custom column names.

+
+
co2 = pd.read_csv(
+    co2_file, header = None, skiprows = 72,
+    sep = r'\s+', # regex for continuous whitespace (next lecture)
+    names = ['Yr', 'Mo', 'DecDate', 'Avg', 'Int', 'Trend', 'Days']
+)
+co2.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
0195831958.21315.71315.71314.62-1
1195841958.29317.45317.45315.29-1
2195851958.38317.50317.50314.71-1
3195861958.46-99.99317.10314.85-1
4195871958.54315.86315.86314.98-1
+ +
+
+
+
+
+

5.5.3 Visualizing CO2

+

Scientific studies tend to have very clean data, right…? Let’s jump right in and make a time series plot of CO2 monthly averages.

+
+
+Code +
sns.lineplot(x='DecDate', y='Avg', data=co2);
+
+
+
+
+

+
+
+
+
+

The code above uses the seaborn plotting library (abbreviated sns). We will cover this in the Visualization lecture, but for now you don’t need to worry about how it works!

+

Yikes! Plotting the data uncovered a problem. The sharp vertical lines suggest that we have some missing values. What happened here?

+
+
co2.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
0195831958.21315.71315.71314.62-1
1195841958.29317.45317.45315.29-1
2195851958.38317.50317.50314.71-1
3195861958.46-99.99317.10314.85-1
4195871958.54315.86315.86314.98-1
+ +
+
+
+
+
co2.tail()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
733201942019.29413.32413.32410.4926
734201952019.38414.66414.66411.2028
735201962019.46413.92413.92411.5827
736201972019.54411.77411.77411.4323
737201982019.62409.95409.95411.8429
+ +
+
+
+

Some data have unusual values like -1 and -99.99.

+

Let’s check the description at the top of the file again.

+
    +
  • -1 signifies a missing value for the number of days Days the equipment was in operation that month.
  • +
  • -99.99 denotes a missing monthly average Avg
  • +
+

How can we fix this? First, let’s explore other aspects of our data. Understanding our data will help us decide what to do with the missing values.

+


+
+
+

5.5.4 Sanity Checks: Reasoning about the data

+

First, we consider the shape of the data. How many rows should we have?

+
    +
  • If the data is in chronological order, we should have one record per month.
  • +
  • Data from March 1958 to August 2019.
  • +
  • We should have \(12 \times (2019-1957) - 2 - 4 = 738\) records: 12 months per year for 62 years, minus the missing January and February of 1958 and the missing September through December of 2019.
  • +
+
+
co2.shape
+
+
(738, 7)
+
+
+

Nice!! The number of rows (i.e. records) match our expectations.

+

Let’s now check the quality of each feature.

+
+
+

5.5.5 Understanding Missing Value 1: Days

+

Days is a time field, so let’s analyze other time fields to see if there is an explanation for missing values of days of operation.

+

Let’s start with months, Mo.

+

Are we missing any records? Each month should appear 61 or 62 times (March 1958 through August 2019).

+
+
co2["Mo"].value_counts().sort_index()
+
+
Mo
+1     61
+2     61
+3     62
+4     62
+5     62
+6     62
+7     62
+8     62
+9     61
+10    61
+11    61
+12    61
+Name: count, dtype: int64
+
+
+

As expected, Jan, Feb, Sep, Oct, Nov, and Dec have 61 occurrences, and the rest have 62.

+


+

Next let’s explore days Days itself, which is the number of days that the measurement equipment worked.

+
+
+Code +
sns.displot(co2['Days']);
+plt.title("Distribution of days feature"); # suppresses unneeded plotting output
+
+
+
+
+

+
+
+
+
+

In terms of data quality, a handful of months have averages based on measurements taken on fewer than half the days. In addition, there are nearly 200 missing values, which is about 27% of the data!

+


+

Finally, let’s check the last time feature, year Yr.

+

Let’s check to see if there is any connection between missing-ness and the year of the recording.

+
+
+Code +
sns.scatterplot(x="Yr", y="Days", data=co2);
+plt.title("Day field by Year"); # the ; suppresses output
+
+
+
+
+

+
+
+
+
+

Observations:

+
    +
  • All of the missing data are in the early years of operation.
  • +
  • It appears there may have been problems with equipment in the mid to late 80s.
  • +
+

Potential Next Steps:

+
    +
  • Confirm these explanations through documentation about the historical readings.
  • +
  • Maybe drop the earliest recordings? However, we would want to delay such action until after we have examined the time trends and assess whether there are any potential problems.
  • +
+


+
+
+

5.5.6 Understanding Missing Value 2: Avg

+

Next, let’s return to the -99.99 values in Avg to analyze the overall quality of the CO2 measurements. We’ll plot a histogram of the average CO2 measurements.

+
+
+Code +
# Histograms of average CO2 measurements
+sns.displot(co2['Avg']);
+
+
+
+
+

+
+
+
+
+

The non-missing values are in the 300-400 range (a regular range of CO2 levels).

+

We also see that there are only a few missing Avg values (<1% of values). Let’s examine all of them:

+
+
co2[co2["Avg"] < 0]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
3195861958.46-99.99317.10314.85-1
71958101958.79-99.99312.66315.61-1
71196421964.12-99.99320.07319.61-1
72196431964.21-99.99320.73319.55-1
73196441964.29-99.99321.77319.48-1
2131975121975.96-99.99330.59331.600
313198441984.29-99.99346.84344.272
+ +
+
+
+

There doesn’t seem to be a pattern to these values, other than that most of these records were also missing Days data.

+
+
+

5.5.7 Drop, NaN, or Impute Missing Avg Data?

+

How should we address the invalid Avg data?

+
    +
  1. Drop records
  2. +
  3. Set to NaN
  4. +
  5. Impute using some strategy
  6. +
+

Remember we want to fix the following plot:

+
+
+Code +
sns.lineplot(x='DecDate', y='Avg', data=co2)
+plt.title("CO2 Average By Month");
+
+
+
+
+

+
+
+
+
+

Since we are plotting Avg vs DecDate, we should just focus on dealing with missing values for Avg.

+

Let’s consider a few options:
  1. Drop those records
  2. Replace -99.99 with NaN
  3. Substitute it with a likely value for the average CO2?

+

What do you think are the pros and cons of each possible action?

+

Let’s examine each of these three options.

+
+
# 1. Drop missing values
+co2_drop = co2[co2['Avg'] > 0]
+co2_drop.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
0195831958.21315.71315.71314.62-1
1195841958.29317.45317.45315.29-1
2195851958.38317.50317.50314.71-1
4195871958.54315.86315.86314.98-1
5195881958.62314.93314.93315.94-1
+ +
+
+
+
+
# 2. Replace -99.99 with NaN
+co2_NA = co2.replace(-99.99, np.nan)
+co2_NA.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
0195831958.21315.71315.71314.62-1
1195841958.29317.45317.45315.29-1
2195851958.38317.50317.50314.71-1
3195861958.46NaN317.10314.85-1
4195871958.54315.86315.86314.98-1
+ +
+
+
+

We’ll also use a third version of the data.

+

First, we note that the dataset already comes with a substitute value for the -99.99.

+

From the file description:

+
+

The interpolated column includes average values from the preceding column (average) and interpolated values where data are missing. Interpolated values are computed in two steps…

+
+

The Int feature has values that exactly match those in Avg, except when Avg is -99.99, and then a reasonable estimate is used instead.

+

So, the third version of our data will use the Int feature instead of Avg.

+
+
# 3. Use interpolated column which estimates missing Avg values
+co2_impute = co2.copy()
+co2_impute['Avg'] = co2['Int']
+co2_impute.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YrMoDecDateAvgIntTrendDays
0195831958.21315.71315.71314.62-1
1195841958.29317.45317.45315.29-1
2195851958.38317.50317.50314.71-1
3195861958.46317.10317.10314.85-1
4195871958.54315.86315.86314.98-1
+ +
+
+
+

What’s a reasonable estimate?

+

To answer this question, let’s zoom in on a short time period, say the measurements in 1958 (where we know we have two missing values).

+
+
+Code +
# results of plotting data in 1958
+
+def line_and_points(data, ax, title):
+    # assumes single year, hence Mo
+    ax.plot('Mo', 'Avg', data=data)
+    ax.scatter('Mo', 'Avg', data=data)
+    ax.set_xlim(2, 13)
+    ax.set_title(title)
+    ax.set_xticks(np.arange(3, 13))
+
+def data_year(data, year):
+    return data[data["Yr"] == year]
+    
+# uses matplotlib subplots
+# you may see more next week; focus on output for now
+fig, axes = plt.subplots(ncols = 3, figsize=(12, 4), sharey=True)
+
+year = 1958
+line_and_points(data_year(co2_drop, year), axes[0], title="1. Drop Missing")
+line_and_points(data_year(co2_NA, year), axes[1], title="2. Missing Set to NaN")
+line_and_points(data_year(co2_impute, year), axes[2], title="3. Missing Interpolated")
+
+fig.suptitle(f"Monthly Averages for {year}")
+plt.tight_layout()
+
+
+
+
+

+
+
+
+
+

In the big picture, since only 7 Avg values are missing (<1% of 738 months), any of these approaches would work.

+

However, there is some appeal to option 3, imputing:

+
    +
  • Shows seasonal trends for CO2
  • +
  • We are plotting all months in our data as a line plot
  • +
+

Let’s replot our original figure with option 3:

+
+
+Code +
sns.lineplot(x='DecDate', y='Avg', data=co2_impute)
+plt.title("CO2 Average By Month, Imputed");
+
+
+
+
+

+
+
+
+
+

Looks pretty close to what we see on the NOAA website!

+
+
+

5.5.8 Presenting the Data: A Discussion on Data Granularity

+

From the description:

+
    +
  • Monthly measurements are averages of average day measurements.
  • +
  • The NOAA GML website has datasets for daily/hourly measurements too.
  • +
+

The data you present depends on your research question.

+

How do CO2 levels vary by season?

+
    +
  • You might want to keep average monthly data.
  • +
+

Are CO2 levels rising over the past 50+ years, consistent with global warming predictions?

+
    +
  • You might be happier with a coarser granularity of average year data!
  • +
+
+
+Code +
co2_year = co2_impute.groupby('Yr').mean()
+sns.lineplot(x='Yr', y='Avg', data=co2_year)
+plt.title("CO2 Average By Year");
+
+
+
+
+

+
+
+
+
+

Indeed, we see a rise by nearly 100 ppm of CO2 since Mauna Loa began recording in 1958.

+
+
+
+

5.6 Summary

+

We went over a lot of content this lecture; let’s summarize the most important points:

+
+

5.6.1 Dealing with Missing Values

+

There are a few options we can take to deal with missing data, each sketched in code after this list:

+
    +
  • Drop missing records
  • +
  • Keep NaN missing values
  • +
  • Impute using an interpolated column
  • +
+
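The pandas idioms for these three options, in minimal sketch form (reusing the co2 DataFrame and the -99.99 sentinel from this lecture, and assuming the usual imports):

dropped = co2[co2["Avg"] > 0]             # 1. drop records with an invalid Avg
as_nan  = co2.replace(-99.99, np.nan)     # 2. keep the records, but mark the value as NaN
imputed = co2.assign(Avg=co2["Int"])      # 3. impute using the interpolated column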
+
+

5.6.2 EDA and Data Wrangling

+

There are several ways to approach EDA and Data Wrangling:

+
    +
  • Examine the data and metadata: what is the date, size, organization, and structure of the data?
  • +
  • Examine each field/attribute/dimension individually.
  • +
  • Examine pairs of related dimensions (e.g. breaking down grades by major).
  • +
  • Along the way, we can: +
      +
    • Visualize or summarize the data.
    • +
    • Validate assumptions about data and its collection process. Pay particular attention to when the data was collected.
    • +
    • Identify and address anomalies.
    • +
    • Apply data transformations and corrections (we’ll cover this in the upcoming lecture).
    • +
    • Record everything you do! Developing in Jupyter Notebook promotes reproducibility of your own work!
    • +
  • +
+ + + + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/eda/eda_files/figure-html/cell-62-output-1.png b/docs/eda/eda_files/figure-html/cell-62-output-1.png new file mode 100644 index 000000000..2e13ba75f Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-62-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-67-output-1.png b/docs/eda/eda_files/figure-html/cell-67-output-1.png new file mode 100644 index 000000000..25ce5066b Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-67-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-68-output-1.png b/docs/eda/eda_files/figure-html/cell-68-output-1.png new file mode 100644 index 000000000..87476da2f Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-68-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-69-output-1.png b/docs/eda/eda_files/figure-html/cell-69-output-1.png new file mode 100644 index 000000000..e5de329e3 Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-69-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-71-output-1.png b/docs/eda/eda_files/figure-html/cell-71-output-1.png new file mode 100644 index 000000000..b61af206f Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-71-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-75-output-1.png b/docs/eda/eda_files/figure-html/cell-75-output-1.png new file mode 100644 index 000000000..d7165c169 Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-75-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-76-output-1.png b/docs/eda/eda_files/figure-html/cell-76-output-1.png new file mode 100644 index 000000000..93f427235 Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-76-output-1.png differ diff --git a/docs/eda/eda_files/figure-html/cell-77-output-1.png b/docs/eda/eda_files/figure-html/cell-77-output-1.png new file mode 100644 index 000000000..da6803619 Binary files /dev/null and b/docs/eda/eda_files/figure-html/cell-77-output-1.png differ diff --git a/docs/eda/images/variable.png b/docs/eda/images/variable.png new file mode 100644 index 000000000..3cd730a94 Binary files /dev/null and b/docs/eda/images/variable.png differ diff --git a/docs/feature_engineering/feature_engineering.html b/docs/feature_engineering/feature_engineering.html new file mode 100644 index 000000000..9075eb71a --- /dev/null +++ b/docs/feature_engineering/feature_engineering.html @@ -0,0 +1,1789 @@ + + + + + + + + + +14  Feature Engineering – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

14  Feature Engineering

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Recognize the value of feature engineering as a tool to improve model performance
  • +
  • Implement polynomial feature generation and one hot encoding
  • +
  • Understand the interactions between model complexity, model variance, and training error
  • +
+
+
+
+

At this point, we’ve grown quite familiar with the modeling process. We’ve introduced the concept of loss, used it to fit several types of models, and, most recently, extended our analysis to multiple regression. Along the way, we’ve forged our way through the mathematics of deriving the optimal model parameters in all its gory detail. It’s time to make our lives a little easier – let’s implement the modeling process in code!

+

In this lecture, we’ll explore two techniques for model fitting:

+
    +
  1. Translating our derived formulas for regression to Python
  2. +
  3. Using Python’s sklearn package
  4. +
+

With our new programming frameworks in hand, we will also add sophistication to our models by introducing more complex features to enhance model performance.

+
+

14.1 Gradient Descent Cont.

+

Before we dive into feature engineering, let’s quickly review gradient descent, which we covered in the last lecture. Recall that gradient descent is a powerful technique for choosing the model parameters that minimize the loss function.

+
+

14.1.1 Gradient Descent Review

+

As we learned earlier, we set the derivative of the loss function to zero and solve to determine the optimal parameters \(\theta\) that minimize loss. For a loss surface in 2D (or higher), the best way to minimize loss is to “walk” down the loss surface until we reach our optimal parameters \(\vec{\theta}\). The gradient vector tells us which direction to “walk” in.

+

For example, the vector of parameter values \(\vec{\theta} = \begin{bmatrix} \theta_{0} \\ \theta_{1} \end{bmatrix}\) gives us a two parameter model (d = 2). To calculate our gradient vector, we can take the partial derivative of loss with respect to each parameter: \(\frac{\partial L}{\partial \theta_0}\) and \(\frac{\partial L}{\partial \theta_1}\).

+

Its gradient vector would then be the 2D vector: \[\nabla_{\vec{\theta}} L = \begin{bmatrix} \frac{\partial L}{\partial \theta_0} \\ \frac{\partial L}{\partial \theta_1} \end{bmatrix}\]

+

Note that \(-\nabla_{\vec{\theta}} L\) always points in the downhill direction of the surface.

+

Recall that we also discussed the gradient descent update rule, where we nudge \(\theta\) in a negative gradient direction until \(\theta\) converges.

+

As a refresher, the rule is as follows: \[\vec{\theta}^{(t+1)} = \vec{\theta}^{(t)} - \alpha \nabla_{\vec{\theta}} L(\vec{\theta}^{(t)}) \]

+
    +
  • \(\theta\) is a vector with our model weights
  • +
  • \(L\) is the loss function
  • +
  • \(\alpha\) is the learning rate
  • +
  • \(\vec{\theta}^{(t)}\) is the current value of \(\theta\)
  • +
  • \(\vec{\theta}^{(t+1)}\) is the next value of \(\theta\)
  • +
  • \(\nabla_{\vec{\theta}} L(\vec{\theta}^{(t)})\) is the gradient of the loss function evaluated at the current \(\theta\): \[\frac{1}{n}\sum_{i=1}^{n}\nabla_{\vec{\theta}} l(y_i, f_{\vec{\theta}^{(t)}}(X_i))\]
  • +
+

Let’s now walk through an example of calculating and updating the gradient vector. Say our model and loss are: \[\begin{align} f_{\vec{\theta}}(\vec{x}) &= \vec{x}^T\vec{\theta} = \theta_0x_0 + \theta_1x_1 \\ l(y, \hat{y}) &= (y - \hat{y})^2 \end{align}\]

+

Plugging in \(f_{\vec{\theta}}(\vec{x})\) for \(\hat{y}\), our loss function becomes \(l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2\).

+

To calculate our gradient vector, we can start by computing the partial derivative of the loss function with respect to \(\theta_0\): \[\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)\]

+

Let’s now do the same but with respect to \(\theta_1\): \[\frac{\partial}{\partial \theta_{1}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_1)\]

+

Putting this together, our gradient vector is: \[\nabla_{\theta} l(\vec{\theta}, \vec{x}, y_i) = \begin{bmatrix} -2(y_i - \theta_0x_0 - \theta_1x_1)(x_0) \\ -2(y_i - \theta_0x_0 - \theta_1x_1)(x_1) \end{bmatrix}\]

+

Remember that we need to keep updating \(\theta\) until the algorithm converges to a solution and stops updating significantly (or at all). In practice, we often run a fixed number of updates; near convergence, subsequent updates become quite small (we won’t change \(\theta\) by much).
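Here is a minimal NumPy sketch of the update loop for this two-parameter model (the data arrays, learning rate, and iteration count below are made up for illustration; this is not code from the lecture):

import numpy as np

# toy data: columns x0, x1 and a response y (made-up values)
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, 5.0]])
y = np.array([4.0, 5.5, 8.0])

theta = np.zeros(2)     # initial guess for [theta_0, theta_1]
alpha = 0.05            # learning rate

for t in range(5000):
    residuals = y - X @ theta                  # y_i - theta_0*x_0 - theta_1*x_1 for each row
    gradient = -2 / len(y) * X.T @ residuals   # average gradient of the squared loss
    theta = theta - alpha * gradient           # the gradient descent update rule

theta   # approximately the least squares solution for this toy data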

+
+
+

14.1.2 Stochastic (Mini-batch) Gradient Descent

+

Let’s now dive deeper into gradient and stochastic gradient descent. In the previous lecture, we discussed how finding the gradient across all the data is extremely computationally taxing and takes a lot of resources to calculate.

+

We know that the solution to the normal equation is \(\hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\). Let’s break this down and determine the computational complexity for this solution.

+
+complexity_normal_solution +
+

Let \(n\) be the number of samples (rows) and \(d\) be the number of features (columns).

+
    +
  • Computing \((\mathbb{X}^{\top}\mathbb{X})\) takes \(O(nd^2)\) time, and its inverse takes another \(O(d^3)\) time to calculate; overall, \((\mathbb{X}^{\top}\mathbb{X})^{-1}\) takes \(O(nd^2) + O(d^3)\) time.
  • +
  • \(\mathbb{X}^{\top}\mathbb{Y}\) takes \(O(nd)\) time.
  • +
  • Multiplying \((\mathbb{X}^{\top}\mathbb{X})^{-1}\) and \(\mathbb{X}^{\top}\mathbb{Y}\) takes \(O(d^2)\) time.
  • +
+

In total, calculating the solution to the normal equation takes \(O(nd^2) + O(d^3) + O(nd) + O(d^2)\) time. We can see that \(O(nd^2) + O(d^3)\) dominates the complexity — this can be problematic for high-dimensional models and very large datasets.
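For reference, the closed-form solution can be computed directly with NumPy. This is a sketch reusing the toy X and y from the gradient descent example above (np.linalg.solve is used rather than an explicit inverse, which is the standard numerically stable choice):

theta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # solves (X^T X) theta = X^T y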

+

On the other hand, the time complexity of a single gradient descent step takes only \(O(nd)\) time.

+
+complexity_grad_descent +
+

Suppose we run \(T\) iterations. The final complexity would then be \(O(Tnd)\). Typically, \(n\) is much larger than \(T\) and \(d\). How can we reduce the cost of this algorithm using a technique from Data 100? Do we really need to use \(n\) data points? We don’t! Instead, we can use stochastic gradient descent.

+

We know that our true gradient of \(\nabla_{\vec{\theta}} L (\vec{\theta^{(t)}}) = \frac{1}{n}\sum_{i=1}^{n}\nabla_{\vec{\theta}} l(y_i, f_{\vec{\theta}^{(t)}}(X_i))\) has a time complexity of \(O(nd)\). Instead of using all \(n\) samples to calculate the true gradient of the loss surface, let’s use a sample of our data to approximate. Say we sample \(b\) records (\(s_1, \cdots, s_b\)) from our \(n\) datapoints. Our new (stochastic) gradient descent function will be \(\nabla_{\vec{\theta}} L (\vec{\theta^{(t)}}) = \frac{1}{b}\sum_{i=1}^{b}\nabla_{\vec{\theta}} l(y_{s_i}, f_{\vec{\theta}^{(t)}}(X_{s_i}))\) and will now have a time complexity of \(O(bd)\), which is much faster!

+

Stochastic gradient descent helps us approximate the gradient while also reducing the time complexity and computational cost. The time complexity scales with the number of datapoints selected in the sample. To sample data, there are two approaches we can use:

+
    +
  1. Shuffle the data and select samples one at a time.
  2. +
  3. Take a simple random sample for each gradient computation.
  4. +
+

But how do we decide our mini-batch size (\(b\)), or the number of datapoints in our sample? The original stochastic gradient descent algorithm uses \(b=1\), so only one sample is used to approximate the gradient at a time. Although we rarely use such a small mini-batch size, \(b\) is typically small. When choosing \(b\), there are several factors to consider: a larger batch size gives a better gradient estimate and can take advantage of parallelism and other systems factors, while a smaller batch size is faster to compute and yields more frequent updates. It is up to data scientists to balance the tradeoff between batch size and time complexity.
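A minimal sketch of mini-batch stochastic gradient descent, reusing the toy X, y, and squared loss from the batch example above (the batch size, learning rate, and iteration count are made up; with a constant learning rate the estimate will hover near, rather than settle exactly on, the optimum):

rng = np.random.default_rng(42)
theta = np.zeros(2)
alpha, b = 0.01, 2     # learning rate and mini-batch size (illustrative)

for t in range(5000):
    idx = rng.choice(len(y), size=b, replace=False)   # simple random sample of b rows
    Xb, yb = X[idx], y[idx]
    gradient = -2 / b * Xb.T @ (yb - Xb @ theta)      # gradient estimate from the mini-batch
    theta = theta - alpha * gradient                  # same update rule, cheaper gradient

theta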

+

Summarizing our two gradient descent techniques:

+
    +
  • (Batch) Gradient Descent: Gradient descent computes the true descent and always descends towards the true minimum of the loss. While accurate, it can often be computationally expensive.
  • +
+
+batch_grad_descent +
+
    +
  • (Minibatch) Stochastic gradient descent: Stochastic gradient descent approximates the true gradient descent. It may not descend towards the true minimum with each update, but it’s often less computationally expensive than batch gradient descent.
  • +
+
+stochastic_grad_descent +
+
+
+
+

14.2 Feature Engineering

+

At this point in the course, we’ve equipped ourselves with some powerful techniques to build and optimize models. We’ve explored how to develop models of multiple variables, as well as how to transform variables to help linearize a dataset and fit these models to maximize their performance.

+

All of this was done with one major caveat: the regression models we’ve worked with so far are all linear in the input variables. We’ve assumed that our predictions should be some combination of linear variables. While this works well in some cases, the real world isn’t always so straightforward. We’ll learn an important method to address this issue – feature engineering – and consider some new problems that can arise when we do so.

+

Feature engineering is the process of transforming raw features into more informative features that can be used in modeling or EDA tasks and improve model performance.

+

Feature engineering allows you to:

+
    +
  • Capture domain knowledge
  • +
  • Express non-linear relationships using linear models
  • +
  • Use non-numeric (qualitative) features in models
  • +
+
+
+

14.3 Feature Functions

+

A feature function describes the transformations we apply to raw features in a dataset to create a design matrix of transformed features. We typically denote the feature function as \(\Phi\) (the Greek letter “phi”). When we apply the feature function to our original dataset \(\mathbb{X}\), the result, \(\Phi(\mathbb{X})\), is a transformed design matrix ready to be used in modeling.

+

For example, we might design a feature function that computes the square of an existing feature and adds it to the design matrix. In this case, our existing matrix \([x]\) is transformed to \([x, x^2]\). Its dimension increases from 1 to 2. Often, the dimension of the featurized dataset increases as seen here.

+
+phi +
+

The new features introduced by the feature function can then be used in modeling. Often, we use the symbol \(\phi_i\) to represent transformed features after feature engineering.

+

\[\begin{align} \hat{y} &= \theta_0 + \theta_1 x + \theta_2 x^2 \\ \hat{y} &= \theta_0 + \theta_1 \phi_1 + \theta_2 \phi_2 \end{align}\]

+

In matrix notation, the symbol \(\Phi\) is sometimes used to denote the design matrix after feature engineering has been performed. Note that in the usage below, \(\Phi\) is now a feature-engineered matrix, rather than a function.

+

\[\hat{\mathbb{Y}} = \Phi \theta\]

+

More formally, we describe a feature function as transforming the original \(\mathbb{R}^{n \times p}\) dataset \(\mathbb{X}\) to a featurized \(\mathbb{R}^{n \times p'}\) dataset \(\mathbb{\Phi}\), where \(p'\) is typically greater than \(p\).

+

\[\mathbb{X} \in \mathbb{R}^{n \times p} \longrightarrow \Phi \in \mathbb{R}^{n \times p'}\]
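A minimal pandas sketch of such a feature function, using a small hypothetical single-column design matrix (not data from this lecture):

import pandas as pd

def phi(X):
    # map a design matrix with column "x" to one with columns "x" and "x^2"
    return X.assign(**{"x^2": X["x"] ** 2})

X = pd.DataFrame({"x": [1.0, 2.0, 3.0]})   # hypothetical raw feature
Phi = phi(X)                               # transformed design matrix [x, x^2]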

+
+
+

14.4 One Hot Encoding

+

Feature engineering opens up a whole new set of possibilities for designing better-performing models. As you will see in lab and homework, feature engineering is one of the most important parts of the entire modeling process.

+

A particularly powerful use of feature engineering is to allow us to perform regression on non-numeric features. One hot encoding is a feature engineering technique that generates numeric features from categorical data, allowing us to use our usual methods to fit a regression model on the data.

+

To illustrate how this works, we’ll refer back to the tips dataset from previous lectures. Consider the "day" column of the dataset:

+
+
+Code +
import numpy as np
+import seaborn as sns
+import pandas as pd
+import sklearn.linear_model as lm
+tips = sns.load_dataset("tips")
+tips.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
total_billtipsexsmokerdaytimesize
016.991.01FemaleNoSunDinner2
110.341.66MaleNoSunDinner3
221.013.50MaleNoSunDinner3
323.683.31MaleNoSunDinner2
424.593.61FemaleNoSunDinner4
+ +
+
+
+

At first glance, it doesn’t seem possible to fit a regression model to this data – we can’t directly perform any mathematical operations on the entry “Sun”.

+

To resolve this, we instead create a new table with a feature for each unique value in the original "day" column. We then iterate through the "day" column. For each entry in "day" we fill the corresponding feature in the new table with 1. All other features are set to 0.

+
+ohe +
+


+In short, each category of a categorical variable gets its own feature +
    +
  • +Value = 1 if a row belongs to the category +
  • +
  • +Value = 0 otherwise +
  • +
+

The OneHotEncoder class of sklearn (documentation) offers a quick way to perform this one-hot encoding. You will explore its use in detail in the lab. For now, recognize that we follow a very similar workflow to when we were working with the LinearRegression class: we initialize a OneHotEncoder object, fit it to our data, and finally use .transform to apply the fitted encoder.

+
+
from sklearn.preprocessing import OneHotEncoder
+
+# Initialize a OneHotEncoder object
+ohe = OneHotEncoder()
+
+# Fit the encoder
+ohe.fit(tips[["day"]])
+
+# Use the encoder to transform the raw "day" feature
+encoded_day = ohe.transform(tips[["day"]]).toarray()
+encoded_day_df = pd.DataFrame(encoded_day, columns=ohe.get_feature_names_out())
+
+encoded_day_df.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
day_Friday_Satday_Sunday_Thur
00.00.01.00.0
10.00.01.00.0
20.00.01.00.0
30.00.01.00.0
40.00.01.00.0
+ +
+
+
+

The one-hot encoded features can then be used in the design matrix to train a model:

+
+ohemodel +
+

\[\hat{y} = \theta_1 (\text{total}\_\text{bill}) + \theta_2 (\text{size}) + \theta_3 (\text{day}\_\text{Fri}) + \theta_4 (\text{day}\_\text{Sat}) + \theta_5 (\text{day}\_\text{Sun}) + \theta_6 (\text{day}\_\text{Thur})\]

+

Or in shorthand:

+

\[\hat{y} = \theta_{1}\phi_{1} + \theta_{2}\phi_{2} + \theta_{3}\phi_{3} + \theta_{4}\phi_{4} + \theta_{5}\phi_{5} + \theta_{6}\phi_{6}\]

+

Now, the day feature (or rather, the four new boolean features that represent day) can be used to fit a model.

+

Using sklearn to fit the new model, we can determine the model coefficients, allowing us to understand how each feature impacts the predicted tip.

+
+
from sklearn.linear_model import LinearRegression
+data_w_ohe = tips[["total_bill", "size", "day"]].join(encoded_day_df).drop(columns = "day")
+ohe_model = lm.LinearRegression(fit_intercept=False) #Tell sklearn to not add an additional bias column. Why?
+ohe_model.fit(data_w_ohe, tips["tip"])
+
+pd.DataFrame({"Feature":data_w_ohe.columns, "Model Coefficient":ohe_model.coef_})
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FeatureModel Coefficient
0total_bill0.092994
1size0.187132
2day_Fri0.745787
3day_Sat0.621129
4day_Sun0.732289
5day_Thur0.668294
+ +
+
+
+

For example, when looking at the coefficient for day_Fri, we can now understand the impact of it being Friday on the predicted tip — if it is a Friday, the predicted tip increases by approximately $0.75.

+

When one-hot encoding, keep in mind that any set of one-hot encoded columns will always sum to a column of all ones, representing the bias column. More formally, the bias column is a linear combination of the OHE columns.

+
+bias +
+

We must be careful not to include this bias column in our design matrix. Otherwise, there will be linear dependence in the model, meaning \(\mathbb{X}^{\top}\mathbb{X}\) would no longer be invertible, and our OLS estimate \(\hat{\theta} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\) fails.

+

To resolve this issue, we simply omit one of the one-hot encoded columns or do not include an intercept term. The adjusted design matrices are shown below.

+
+remove +
+

Either approach works; we still retain the same information, since the omitted column is a linear combination of the remaining columns.
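One way to implement the omission in code is with the drop argument of OneHotEncoder (or the equivalent drop_first option of pd.get_dummies). This is a sketch reusing the tips DataFrame and imports from above, not how the model in this section was built:

ohe_drop = OneHotEncoder(drop="first")                      # drop one category per feature
encoded = ohe_drop.fit_transform(tips[["day"]]).toarray()   # 3 columns instead of 4

# the pandas equivalent
pd.get_dummies(tips["day"], drop_first=True).head()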

+
+
+

14.5 Polynomial Features

+

We have encountered a few cases now where models with linear features have performed poorly on datasets that show clear non-linear curvature.

+

As an example, consider the vehicles dataset, which contains information about cars. Suppose we want to use the hp (horsepower) of a car to predict its "mpg" (gas mileage in miles per gallon). If we visualize the relationship between these two variables, we see a non-linear curvature. Fitting a linear model to these variables results in a high (poor) value of RMSE.

+

\[\hat{y} = \theta_0 + \theta_1 (\text{hp})\]

+
+
+Code +
pd.options.mode.chained_assignment = None 
+vehicles = sns.load_dataset("mpg").dropna().rename(columns = {"horsepower": "hp"}).sort_values("hp")
+
+X = vehicles[["hp"]]
+Y = vehicles["mpg"]
+
+hp_model = lm.LinearRegression()
+hp_model.fit(X, Y)
+hp_model_predictions = hp_model.predict(X)
+
+import matplotlib.pyplot as plt
+
+sns.scatterplot(data=vehicles, x="hp", y="mpg")
+plt.plot(vehicles["hp"], hp_model_predictions, c="tab:red");
+
+print(f"MSE of model with (hp) feature: {np.mean((Y-hp_model_predictions)**2)}")
+
+
+
MSE of model with (hp) feature: 23.943662938603108
+
+
+
+
+

+
+
+
+
+

As we can see from the plot, the data follows a curved line rather than a straight one. To capture this non-linearity, we can incorporate non-linear features. Let’s introduce a polynomial term, \(\text{hp}^2\), into our regression model. The model now takes the form:

+

\[\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2)\] \[\hat{y} = \theta_0 + \theta_1 \phi_1 + \theta_2 \phi_2\]

+

How can we fit a model with non-linear features? We can use the exact same techniques as before: ordinary least squares, gradient descent, or sklearn. This is because our new model is still a linear model. Although it contains non-linear features, it is linear with respect to the model parameters. All of our previous work on fitting models was done under the assumption that we were working with linear models. Because our new model is still linear, we can apply our existing methods to determine the optimal parameters.

+
+
# Add a hp^2 feature to the design matrix
+X = vehicles[["hp"]]
+X["hp^2"] = vehicles["hp"]**2
+
+# Use sklearn to fit the model
+hp2_model = lm.LinearRegression()
+hp2_model.fit(X, Y)
+hp2_model_predictions = hp2_model.predict(X)
+
+sns.scatterplot(data=vehicles, x="hp", y="mpg")
+plt.plot(vehicles["hp"], hp2_model_predictions, c="tab:red");
+
+print(f"MSE of model with (hp^2) feature: {np.mean((Y-hp2_model_predictions)**2)}")
+
+
MSE of model with (hp^2) feature: 18.98476890761722
+
+
+
+
+

+
+
+
+
+

Looking a lot better! By incorporating a squared feature, we are able to capture the curvature of the dataset. Our model is now a parabola centered on our data. Notice that our new model’s error has decreased relative to the original model with linear features.
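
If we want higher-degree features without adding each column by hand, sklearn's PolynomialFeatures transformer can generate them for us. A minimal sketch (the choice of degree 3 here is arbitrary, not from the lecture):

# Generate hp, hp^2, hp^3 automatically instead of adding columns manually.
import numpy as np
import seaborn as sns
import sklearn.linear_model as lm
from sklearn.preprocessing import PolynomialFeatures

vehicles = sns.load_dataset("mpg").dropna().rename(columns={"horsepower": "hp"})
X = vehicles[["hp"]]
Y = vehicles["mpg"]

poly = PolynomialFeatures(degree=3, include_bias=False)   # bias handled by LinearRegression
X_poly = poly.fit_transform(X)                            # columns: hp, hp^2, hp^3

hp3_model = lm.LinearRegression()
hp3_model.fit(X_poly, Y)
print(f"MSE with degree-3 features: {np.mean((Y - hp3_model.predict(X_poly))**2)}")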

+
+
+

14.6 Complexity and Overfitting

+

We’ve seen now that feature engineering allows us to build all sorts of features to improve the performance of the model. In particular, we saw that designing a more complex feature (squaring hp in the vehicles data previously) substantially improved the model’s ability to capture non-linear relationships. To take full advantage of this, we might be inclined to design increasingly complex features. Consider the following three models, each of a different order (the highest power of \(\text{hp}\) that appears in the model):

+
    +
  • Model with order 2: \(\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2)\)
  • +
  • Model with order 3: \(\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2) + \theta_3 (\text{hp}^3)\)
  • +
  • Model with order 4: \(\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2) + \theta_3 (\text{hp}^3) + \theta_4 (\text{hp}^4)\)
  • +
+
+degree_comparison +
+

As we can see in the plots above, MSE continues to decrease with each additional polynomial term. To visualize it further, let’s plot models as the complexity increases from 0 to 7:

+
+degree_comparison +
+

When we use our model to make predictions on the same data that was used to fit the model, we find that the MSE decreases with each additional polynomial term (as our model gets more complex). The training error is the model’s error when generating predictions from the same data that was used for training purposes. We can conclude that the training error goes down as the complexity of the model increases.

+
+train_error +
+

This seems like good news – when working on the training data, we can improve model performance by designing increasingly complex models.

+
+
+
+ +
+
+Math Fact: Polynomial Degrees +
+
+
+

Given \(N\) data points with distinct \(x\) values, we can always find a polynomial of degree \(N-1\) that goes through all those points.

+For example, there always exists a degree-4 polynomial curve that can perfectly model a dataset of 5 datapoints: +
+train_error +
+
+
+
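
To see this concretely, here is a minimal sketch (with 5 made-up points, matching the example above) using np.polyfit to find the degree-4 polynomial that passes through all of them:

# A degree-4 polynomial fit to 5 points with distinct x values has zero residuals.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # 5 made-up datapoints
y = np.array([2.0, -1.0, 0.5, 3.0, -2.0])

coeffs = np.polyfit(x, y, deg=4)                  # degree N - 1 = 4
print(np.polyval(coeffs, x) - y)                  # ~0 everywhere: a perfect fit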

However, high model complexity comes with its own set of issues. When building the vehicles models above, we trained the models on the entire dataset and then evaluated their performance on this same dataset. In reality, we are likely to instead train the model on a sample from the population, then use it to make predictions on data it didn’t encounter during training.

+

Let’s walk through a more realistic example. Say we are given a training dataset of just 6 datapoints and want to train a model to then make predictions on a different set of points. We may be tempted to make a highly complex model (e.g., degree 5), especially given that it makes perfect predictions on the training data, as is clear on the left. However, as shown in the graph on the right, this model would perform horribly on the rest of the population!

+
+complex +
+

This phenomenon is called overfitting. The model effectively just memorized the training data it encountered when it was fitted, leaving it unable to generalize well to data it didn’t encounter during training. This is a problem: we want models that are generalizable to “unseen” data.

+

Additionally, since complex models are sensitive to the specific dataset used to train them, they have high variance. A model with high variance tends to vary more dramatically when trained on different datasets. Going back to our example above, we can see our degree-5 model varies erratically when we fit it to different samples of 6 points from vehicles.

+
+resamples +
+

We now face a dilemma: we know that we can decrease training error by increasing model complexity, but models that are too complex start to overfit and can’t be reapplied to new datasets due to high variance.

+
+bvt +
+

We can see that there is a clear trade-off that comes from the complexity of our model. As model complexity increases, the model’s error on the training data decreases. At the same time, the model’s variance tends to increase.
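
A minimal sketch (not from the lecture) of this trade-off on the vehicles data: training MSE keeps falling as the polynomial degree grows, while the error on a held-out split stops improving and often gets worse. The 25% split, the random seed, and the particular degrees are arbitrary choices for illustration.

# Compare training vs. held-out MSE as polynomial degree (model complexity) grows.
import numpy as np
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

vehicles = sns.load_dataset("mpg").dropna().rename(columns={"horsepower": "hp"})
X, Y = vehicles[["hp"]], vehicles["mpg"]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=42)

for degree in [1, 2, 4, 8, 12]:
    # Standardize first so high powers of hp stay numerically well-behaved
    model = make_pipeline(StandardScaler(),
                          PolynomialFeatures(degree, include_bias=False),
                          LinearRegression())
    model.fit(X_train, Y_train)
    train_mse = np.mean((Y_train - model.predict(X_train)) ** 2)
    test_mse = np.mean((Y_test - model.predict(X_test)) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:5.2f}, test MSE = {test_mse:5.2f}")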

+

The takeaway here: we need to strike a balance in the complexity of our models; we want models that are generalizable to “unseen” data. A model that is too simple won’t be able to capture the key relationships between our variables of interest; a model that is too complex runs the risk of overfitting.

+

This begs the question: how do we control the complexity of a model? Stay tuned for Lecture 17 on Cross-Validation and Regularization!

+
+
+

14.7 [Bonus] Stochastic Gradient Descent in PyTorch

+

While this material is out of scope for Data 100, it is useful if you plan to enter a career in data science!

+

In practice, you will use software packages such as PyTorch when computing gradients and implementing gradient descent. You’ll often follow three main steps:

+
    +
  1. Sample a batch of the data.
  2. +
  3. Compute the loss and the gradient.
  4. +
  5. Update your model parameters using the gradient estimate, and repeat until the loss converges.
  6. +
+
+pytorch_sgd +
+

If you want to learn more, this Intro to PyTorch tutorial is a great resource to get started!
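
As a rough illustration (a sketch with made-up synthetic data, not the lecture's code), the three steps above might look like this with torch.optim.SGD:

# Mini-batch SGD in PyTorch: sample a batch, compute loss and gradient, update.
import torch

# Synthetic data for a no-intercept linear model: y ≈ 3x
X = torch.randn(500, 1)
Y = 3 * X + 0.1 * torch.randn(500, 1)

theta = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.SGD([theta], lr=0.1)
batch_size = 32

for epoch in range(20):
    perm = torch.randperm(len(X))                        # shuffle once per epoch
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]             # 1. sample a batch
        loss = ((Y[idx] - X[idx] * theta) ** 2).mean()   # 2. compute the loss...
        optimizer.zero_grad()
        loss.backward()                                  #    ...and the gradient
        optimizer.step()                                 # 3. update the parameter

print(theta.item())   # should land near 3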

+ + + + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/feature_engineering/feature_engineering_files/figure-html/cell-5-output-2.png b/docs/feature_engineering/feature_engineering_files/figure-html/cell-5-output-2.png new file mode 100644 index 000000000..76bc711df Binary files /dev/null and b/docs/feature_engineering/feature_engineering_files/figure-html/cell-5-output-2.png differ diff --git a/docs/feature_engineering/feature_engineering_files/figure-html/cell-6-output-2.png b/docs/feature_engineering/feature_engineering_files/figure-html/cell-6-output-2.png new file mode 100644 index 000000000..c6034932f Binary files /dev/null and b/docs/feature_engineering/feature_engineering_files/figure-html/cell-6-output-2.png differ diff --git a/docs/feature_engineering/images/bias.png b/docs/feature_engineering/images/bias.png new file mode 100644 index 000000000..e6455ca22 Binary files /dev/null and b/docs/feature_engineering/images/bias.png differ diff --git a/docs/feature_engineering/images/bvt.png b/docs/feature_engineering/images/bvt.png new file mode 100644 index 000000000..7baffea82 Binary files /dev/null and b/docs/feature_engineering/images/bvt.png differ diff --git a/docs/feature_engineering/images/complex.png b/docs/feature_engineering/images/complex.png new file mode 100644 index 000000000..61769f1a3 Binary files /dev/null and b/docs/feature_engineering/images/complex.png differ diff --git a/docs/feature_engineering/images/complexity_grad_descent.png b/docs/feature_engineering/images/complexity_grad_descent.png new file mode 100644 index 000000000..8a48dbbe4 Binary files /dev/null and b/docs/feature_engineering/images/complexity_grad_descent.png differ diff --git a/docs/feature_engineering/images/complexity_normal_solution.png b/docs/feature_engineering/images/complexity_normal_solution.png new file mode 100644 index 000000000..c41ad6a7a Binary files /dev/null and b/docs/feature_engineering/images/complexity_normal_solution.png differ diff --git a/docs/feature_engineering/images/degree_comparison.png b/docs/feature_engineering/images/degree_comparison.png new file mode 100644 index 000000000..9bb1992e7 Binary files /dev/null and b/docs/feature_engineering/images/degree_comparison.png differ diff --git a/docs/feature_engineering/images/degree_comparison2.png b/docs/feature_engineering/images/degree_comparison2.png new file mode 100644 index 000000000..95ee200a0 Binary files /dev/null and b/docs/feature_engineering/images/degree_comparison2.png differ diff --git a/docs/feature_engineering/images/gd.png b/docs/feature_engineering/images/gd.png new file mode 100644 index 000000000..6ba0c3376 Binary files /dev/null and b/docs/feature_engineering/images/gd.png differ diff --git a/docs/feature_engineering/images/ohe.png b/docs/feature_engineering/images/ohe.png new file mode 100644 index 000000000..c5f26296c Binary files /dev/null and b/docs/feature_engineering/images/ohe.png differ diff --git a/docs/feature_engineering/images/ohemodel.png b/docs/feature_engineering/images/ohemodel.png new file mode 100644 index 000000000..06dddaea7 Binary files /dev/null and b/docs/feature_engineering/images/ohemodel.png differ diff --git a/docs/feature_engineering/images/perfect_poly_fits.png b/docs/feature_engineering/images/perfect_poly_fits.png new file mode 100644 index 000000000..86943ecfc Binary files /dev/null and b/docs/feature_engineering/images/perfect_poly_fits.png differ diff --git a/docs/feature_engineering/images/phi.png b/docs/feature_engineering/images/phi.png new file mode 100644 index 
000000000..4c0b04e91 Binary files /dev/null and b/docs/feature_engineering/images/phi.png differ diff --git a/docs/feature_engineering/images/pytorchsgd.png b/docs/feature_engineering/images/pytorchsgd.png new file mode 100644 index 000000000..85b07dbcd Binary files /dev/null and b/docs/feature_engineering/images/pytorchsgd.png differ diff --git a/docs/feature_engineering/images/remove.png b/docs/feature_engineering/images/remove.png new file mode 100644 index 000000000..bd09ddcf1 Binary files /dev/null and b/docs/feature_engineering/images/remove.png differ diff --git a/docs/feature_engineering/images/resamples.png b/docs/feature_engineering/images/resamples.png new file mode 100644 index 000000000..28f904ab1 Binary files /dev/null and b/docs/feature_engineering/images/resamples.png differ diff --git a/docs/feature_engineering/images/sgd.png b/docs/feature_engineering/images/sgd.png new file mode 100644 index 000000000..ee579a100 Binary files /dev/null and b/docs/feature_engineering/images/sgd.png differ diff --git a/docs/feature_engineering/images/train_error.png b/docs/feature_engineering/images/train_error.png new file mode 100644 index 000000000..a2993b42b Binary files /dev/null and b/docs/feature_engineering/images/train_error.png differ diff --git a/docs/gradient_descent/gradient_descent.html b/docs/gradient_descent/gradient_descent.html new file mode 100644 index 000000000..637833192 --- /dev/null +++ b/docs/gradient_descent/gradient_descent.html @@ -0,0 +1,3108 @@ + + + + + + + + + +13  sklearn and Gradient Descent – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

13  sklearn and Gradient Descent

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Apply the sklearn library for model creation and training
  • +
  • Optimize complex models
  • +
  • Identify cases where calculus or geometric arguments alone won’t minimize the loss function
  • +
  • Apply gradient descent for numerical optimization
  • +
+
+
+
+
+
+Code +
import pandas as pd
+import seaborn as sns
+import plotly.express as px
+import matplotlib.pyplot as plt
+import numpy as np
+from sklearn.linear_model import LinearRegression
+pd.options.mode.chained_assignment = None  # default='warn'
+
+
+
+

13.1 sklearn

+
+

13.1.1 Implementing Derived Formulas in Code

+

Throughout this lecture, we’ll refer to the penguins dataset.

+
+
+Code +
import pandas as pd
+import seaborn as sns
+import numpy as np
+
+penguins = sns.load_dataset("penguins")
+penguins = penguins[penguins["species"] == "Adelie"].dropna()
+penguins.head()
+
+
+
  species     island  bill_length_mm  bill_depth_mm  flipper_length_mm  body_mass_g     sex
0  Adelie  Torgersen            39.1           18.7              181.0       3750.0    Male
1  Adelie  Torgersen            39.5           17.4              186.0       3800.0  Female
2  Adelie  Torgersen            40.3           18.0              195.0       3250.0  Female
4  Adelie  Torgersen            36.7           19.3              193.0       3450.0  Female
5  Adelie  Torgersen            39.3           20.6              190.0       3650.0    Male
+
+
+

Our goal will be to predict the value of the "bill_depth_mm" for a particular penguin given its "flipper_length_mm" and "body_mass_g". We’ll also add a bias column of all ones to represent the intercept term of our models.

+
+
# Add a bias column of all ones to `penguins`
+penguins["bias"] = np.ones(len(penguins), dtype=int) 
+
+# Define the design matrix, X...
+# Note that we use .to_numpy() to convert our DataFrame into a NumPy array so it is in Matrix form
+X = penguins[["bias", "flipper_length_mm", "body_mass_g"]].to_numpy()
+
+# ...as well as the target variable, Y
+# Again, we use .to_numpy() to convert our DataFrame into a NumPy array so it is in Matrix form
+Y = penguins[["bill_depth_mm"]].to_numpy()
+
+

In the lecture on ordinary least squares, we expressed multiple linear regression using matrix notation.

+

\[\hat{\mathbb{Y}} = \mathbb{X}\theta\]

+

We used a geometric approach to derive the following expression for the optimal model parameters:

+

\[\hat{\theta} = (\mathbb{X}^T \mathbb{X})^{-1}\mathbb{X}^T \mathbb{Y}\]

+

That’s a whole lot of matrix manipulation. How do we implement it in python?

+

There are three operations we need to perform here: multiplying matrices, taking transposes, and finding inverses.

+
    +
  • To perform matrix multiplication, use the @ operator
  • +
  • To take a transpose, call the .T attribute of an NumPy array or DataFrame
  • +
  • To compute an inverse, use NumPy’s in-built method np.linalg.inv
  • +
+

Putting this all together, we can compute the OLS estimate for the optimal model parameters, stored in the array theta_hat.

+
+
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y
+theta_hat
+
+
array([[1.10029953e+01],
+       [9.82848689e-03],
+       [1.47749591e-03]])
+
+
+

To make predictions using our optimized parameter values, we matrix-multiply the design matrix with the parameter vector:

+

\[\hat{\mathbb{Y}} = \mathbb{X}\theta\]

+
+
Y_hat = X @ theta_hat
+pd.DataFrame(Y_hat).head()
+
+
           0
0  18.322561
1  18.445578
2  17.721412
3  17.997254
4  18.263268
+
+
+
+
+

13.1.2 The sklearn Workflow

+

We’ve already saved a lot of time (and avoided tedious calculations) by translating our derived formulas into code. However, we still had to go through the process of writing out the linear algebra ourselves.

+

To make life even easier, we can turn to the sklearn python library. sklearn is a robust library of machine learning tools used extensively in research and industry. It is the standard for simple machine learning tasks and gives us a wide variety of in-built modeling frameworks and methods, so we’ll keep returning to sklearn techniques as we progress through Data 100.

+

Regardless of the specific type of model being implemented, sklearn follows a standard set of steps for creating a model:

+
    +
  1. Import the LinearRegression model from sklearn

    +
    from sklearn.linear_model import LinearRegression
  2. +
  3. Create a model object. This generates a new instance of the model class. You can think of it as making a new “copy” of a standard “template” for a model. In code, this looks like:

    +
    my_model = LinearRegression()
  4. +
  5. Fit the model to the X design matrix and Y target vector. This calculates the optimal model parameters “behind the scenes” without us explicitly working through the calculations ourselves. The fitted parameters are then stored within the model for use in future predictions:

    +
    my_model.fit(X, Y)
  6. +
  7. Use the fitted model to make predictions on the X input data using .predict.

    +
    my_model.predict(X)
  8. +
+

To extract the fitted parameters, we can use:

+
my_model.coef_
+
+my_model.intercept_
+

Let’s put this into action with our multiple regression task!

+

1. Initialize an instance of the model class

+

sklearn stores “templates” of useful models for machine learning. We begin the modeling process by making a “copy” of one of these templates for our own use. Model initialization looks like ModelClass(), where ModelClass is the type of model we wish to create.

+

For now, let’s create a linear regression model using LinearRegression.

+

my_model is now an instance of the LinearRegression class. You can think of it as the “idea” of a linear regression model. We haven’t trained it yet, so it doesn’t know any model parameters and cannot be used to make predictions. In fact, we haven’t even told it what data to use for modeling! It simply waits for further instructions.

+
+
my_model = LinearRegression()
+
+

2. Train the model using .fit

+

Before the model can make predictions, we will need to fit it to our training data. When we fit the model, sklearn solves for the optimal model parameters behind the scenes (for LinearRegression, it does this with a direct least-squares solver rather than gradient descent). It then saves these model parameters to our model instance for future use.

+

All sklearn model classes include a .fit method, which is used to fit the model. It takes in two inputs: the design matrix, X, and the target variable, Y.

+

Let’s start by fitting a model with just one feature: the flipper length. We create a design matrix X by pulling out the "flipper_length_mm" column from the DataFrame.

+
+
# .fit expects a 2D data design matrix, so we use double brackets to extract a DataFrame
+X = penguins[["flipper_length_mm"]]
+Y = penguins["bill_depth_mm"]
+
+my_model.fit(X, Y)
+
+
LinearRegression()
+
+
+

Notice that we use double brackets to extract this column. Why double brackets instead of just single brackets? The .fit method, by default, expects to receive 2-dimensional data – some kind of data that includes both rows and columns. Writing penguins["flipper_length_mm"] would return a 1D Series, causing sklearn to error. We avoid this by writing penguins[["flipper_length_mm"]] to produce a 2D DataFrame.

+

And in just three lines of code, our model has determined the optimal model parameters! Our single-feature model takes the form:

+

\[\text{bill depth} = \theta_0 + \theta_1 \text{flipper length}\]

+

Note that LinearRegression will automatically include an intercept term.

+

The fitted model parameters are stored as attributes of the model instance. my_model.intercept_ will return the value of \(\hat{\theta}_0\) as a scalar. my_model.coef_ will return all values \(\hat{\theta}_1, \hat{\theta}_2, \ldots\) in an array. Because our model only contains one feature, we see just the value of \(\hat{\theta}_1\) in the cell below.

+
+
# The intercept term, theta_0
+my_model.intercept_
+
+
np.float64(7.297305899612313)
+
+
+
+
# All parameters theta_1, ..., theta_p
+my_model.coef_
+
+
array([0.05812622])
+
+
+

3. Use the fitted model to make predictions

+

Now that the model has been trained, we can use it to make predictions! To do so, we use the .predict method. .predict takes in one argument: the design matrix that should be used to generate predictions. To understand how the model performs on the training set, we would pass in the training data. Alternatively, to make predictions on unseen data, we would pass in a new dataset that wasn’t used to train the model.

+

Below, we call .predict to generate model predictions on the original training data. As before, we use double brackets to ensure that we extract 2-dimensional data.

+
+
Y_hat_one_feature = my_model.predict(penguins[["flipper_length_mm"]])
+
+print(f"The RMSE of the model is {np.sqrt(np.mean((Y-Y_hat_one_feature)**2))}")
+
+
The RMSE of the model is 1.154936309923901
+
+
+

What if we wanted a model with two features?

+

\[\text{bill depth} = \theta_0 + \theta_1 \text{flipper length} + \theta_2 \text{body mass}\]

+

We repeat this three-step process by initializing a new model object, then calling .fit and .predict as before.

+
+
# Step 1: initialize LinearRegression model
+two_feature_model = LinearRegression()
+
+# Step 2: fit the model
+X_two_features = penguins[["flipper_length_mm", "body_mass_g"]]
+Y = penguins["bill_depth_mm"]
+
+two_feature_model.fit(X_two_features, Y)
+
+# Step 3: make predictions
+Y_hat_two_features = two_feature_model.predict(X_two_features)
+
+print(f"The RMSE of the model is {np.sqrt(np.mean((Y-Y_hat_two_features)**2))}")
+
+
The RMSE of the model is 0.9881331104079043
+
+
+

We can also see that we obtain the same predictions using sklearn as we did when applying the ordinary least squares formula before!

+
+
+Code +
pd.DataFrame({"Y_hat from OLS":np.squeeze(Y_hat), "Y_hat from sklearn":Y_hat_two_features}).head()
+
+
+
   Y_hat from OLS  Y_hat from sklearn
0       18.322561           18.322561
1       18.445578           18.445578
2       17.721412           17.721412
3       17.997254           17.997254
4       18.263268           18.263268
+
+
+
+
+
+

13.2 Gradient Descent

+

At this point, we’ve grown quite familiar with the process of choosing a model and a corresponding loss function and optimizing parameters by choosing the values of \(\theta\) that minimize the loss function. So far, we’ve optimized \(\theta\) by

+
    +
  1. Using calculus to take the derivative of the loss function with respect to \(\theta\), setting it equal to 0, and solving for \(\theta\).
  2. +
  3. Using the geometric argument of orthogonality to derive the OLS solution \(\hat{\theta} = (\mathbb{X}^T \mathbb{X})^{-1}\mathbb{X}^T \mathbb{Y}\).
  4. +
+

One thing to note, however, is that the techniques we used above can only be applied if we make some big assumptions. For the calculus approach, we assumed that the loss function was differentiable at all points and that we could algebraically solve for the zero points of the derivative; for the geometric approach, OLS only applies when using a linear model with MSE loss. What happens when we have more complex models with different, more complex loss functions? The techniques we’ve learned so far will not work, so we need a new optimization technique: gradient descent.

+
+

BIG IDEA: use an iterative algorithm to numerically compute the minimum of the loss.

+
+
+

13.2.1 Minimizing an Arbitrary 1D Function

+

Let’s consider an arbitrary function. Our goal is to find the value of \(x\) that minimizes this function.

+
+
def arbitrary(x):
+    return (x**4 - 15*x**3 + 80*x**2 - 180*x + 144)/10
+
+

arbitrary

+
+

13.2.1.1 The Naive Approach: Guess and Check

+

Above, we saw that the minimum is somewhere around 5.3. Let’s see if we can figure out how to find the exact minimum algorithmically from scratch. One very slow (and terrible) way would be manual guess-and-check.

+
+
arbitrary(6)
+
+
0.0
+
+
+

A somewhat better (but still slow) approach is to use brute force to try out a bunch of x values and return the one that yields the lowest loss.

+
+
def simple_minimize(f, xs):
+    # Takes in a function f and a set of values xs. 
+    # Calculates the value of the function f at all values x in xs
+    # Takes the minimum value of f(x) and returns the corresponding value x 
+    y = [f(x) for x in xs]  
+    return xs[np.argmin(y)]
+
+guesses = [5.3, 5.31, 5.32, 5.33, 5.34, 5.35]
+simple_minimize(arbitrary, guesses)
+
+
5.33
+
+
+

This process is essentially the same as the graphical approach from before; the difference is that we are only evaluating the function at a handful of selected points.

+
+
+Code +
xs = np.linspace(1, 7, 200)
+sparse_xs = np.linspace(1, 7, 5)
+
+ys = arbitrary(xs)
+sparse_ys = arbitrary(sparse_xs)
+
+fig = px.line(x = xs, y = arbitrary(xs))
+fig.add_scatter(x = sparse_xs, y = arbitrary(sparse_xs), mode = "markers")
+fig.update_layout(showlegend= False)
+fig.update_layout(autosize=False, width=800, height=600)
+fig.show()
+
+
+
+
+
+

This basic approach suffers from three major flaws:

+
    +
  1. If the minimum is outside our range of guesses, the answer will be completely wrong.
  2. +
  3. Even if our range of guesses is correct, if the guesses are too coarse, our answer will be inaccurate.
  4. +
  5. It is computationally inefficient: we might evaluate a vast number of guesses that turn out to be useless.
  6. +
+
+
+

13.2.1.2 Scipy.optimize.minimize

+

One way to minimize this mathematical function is to use the scipy.optimize.minimize function. It takes a function and a starting guess and tries to find the minimum.

+
+
from scipy.optimize import minimize
+
+# takes a function f and a starting point x0 and returns a readout 
+# with the optimal input value of x which minimizes f
+minimize(arbitrary, x0 = 3.5)
+
+
  message: Optimization terminated successfully.
+  success: True
+   status: 0
+      fun: -0.13827491292966557
+        x: [ 2.393e+00]
+      nit: 3
+      jac: [ 6.486e-06]
+ hess_inv: [[ 7.385e-01]]
+     nfev: 20
+     njev: 10
+
+
+

scipy.optimize.minimize is great. It may also seem a bit magical. How could you write a function that can find the minimum of any mathematical function? There are a number of ways to do this, which we’ll explore in today’s lecture, eventually arriving at the important idea of gradient descent, which is the principle that scipy.optimize.minimize uses.

+

It turns out that under the hood, the fit methods of many models use gradient descent or a close relative (sklearn’s LinearRegression itself solves the least-squares problem directly). Gradient descent is also how much of machine learning works, including even advanced neural network models.

+

In Data 100, the gradient descent process will usually be invisible to us, hidden beneath an abstraction layer. However, to be good data scientists, it’s important that we know the underlying principles that optimization functions harness to find optimal parameters.

+
+
+

13.2.1.3 Digging into Gradient Descent

+

Looking at the function across this domain, it is clear that the function’s minimum value occurs around \(\theta = 5.3\). Let’s pretend for a moment that we couldn’t see the full view of the cost function. How would we guess the value of \(\theta\) that minimizes the function?

+

It turns out that the first derivative of the function can give us a clue. In the plots below, the line indicates the value of the derivative at each value of \(\theta\). The derivative is negative where it is red and positive where it is green.

+

Say we make a guess for the minimizing value of \(\theta\). Remember that we read plots from left to right, and assume that our starting \(\theta\) value is to the left of the optimal \(\hat{\theta}\). If the guess “undershoots” the true minimizing value – our guess for \(\theta\) is lower than the value of the \(\hat{\theta}\) that minimizes the function – the derivative will be negative. This means that if we increase \(\theta\) (move further to the right), then we can decrease our loss function further. If this guess “overshoots” the true minimizing value, the derivative will be positive, implying the converse.

+
+ + + + +
+step +
+
+

We can use this pattern to help formulate our next guess for the optimal \(\hat{\theta}\). Consider the case where we’ve undershot \(\theta\) by guessing too low of a value. We’ll want our next guess to be greater in value than our previous guess – that is, we want to shift our guess to the right. You can think of this as following the slope “downhill” to the function’s minimum value.

+
+ + + + +
+neg_step +
+
+

If we’ve overshot \(\hat{\theta}\) by guessing too high of a value, we’ll want our next guess to be lower in value – we want to shift our guess for \(\hat{\theta}\) to the left.

+
+ + + + +
+pos_step +
+
+

In other words, the derivative of the function at each point tells us the direction of our next guess.

+
    +
  • A negative slope means we want to step to the right, or move in the positive direction.
  • +
  • A positive slope means we want to step to the left, or move in the negative direction.
  • +
+
+
+

13.2.1.4 Algorithm Attempt 1

+

Armed with this knowledge, let’s try to see if we can use the derivative to optimize the function.

+

We start by making some guess for the minimizing value of \(x\). Then, we look at the derivative of the function at this value of \(x\), and step downhill in the opposite direction. We can express our new rule as a recurrence relation:

+

\[x^{(t+1)} = x^{(t)} - \frac{d}{dx} f(x^{(t)})\]

+

Translating this statement into English: we obtain our next guess for the minimizing value of \(x\) at timestep \(t+1\) (\(x^{(t+1)}\)) by taking our last guess (\(x^{(t)}\)) and subtracting the derivative of the function at that point (\(\frac{d}{dx} f(x^{(t)})\)).

+

A few steps are shown below, where the old step is shown as a transparent point, and the next step taken is the green-filled dot.

+
+ + + + +
+grad_descent_2 +
+
+

Looking pretty good! We do have a problem though – once we arrive close to the minimum value of the function, our guesses “bounce” back and forth past the minimum without ever reaching it.

+
+ + + + +
+grad_descent_2 +
+
+

In other words, each step we take when updating our guess moves us too far. We can address this by decreasing the size of each step.

+
+
+

13.2.1.5 Algorithm Attempt 2

+

Let’s update our algorithm to use a learning rate (also sometimes called the step size), which controls how far we move with each update. We represent the learning rate with \(\alpha\).

+

\[x^{(t+1)} = x^{(t)} - \alpha \frac{d}{dx} f(x^{(t)})\]

+

A small \(\alpha\) means that we will take small steps; a large \(\alpha\) means we will take large steps. When do we stop updating? We stop updating either after a fixed number of updates or after a subsequent update doesn’t change much.

+

Updating our function to use \(\alpha=0.3\), our algorithm successfully converges (settles on a solution and stops updating significantly, or at all) on the minimum value.

+
+ + + + +
+grad_descent_3 +
+
+
+
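
As a minimal sketch of this update rule in code: the derivative below is just the hand-computed derivative of the arbitrary() quartic defined earlier, the learning rate matches the figure, and the starting guess of 4 is an arbitrary choice in the right-hand basin.

# Gradient descent with a learning rate on the arbitrary() function from above.
def derivative_arbitrary(x):
    # d/dx of (x^4 - 15x^3 + 80x^2 - 180x + 144)/10
    return (4 * x**3 - 45 * x**2 + 160 * x - 180) / 10

x = 4.0                                   # arbitrary starting guess
alpha = 0.3                               # learning rate
for _ in range(100):
    x = x - alpha * derivative_arbitrary(x)

print(x)   # converges to roughly 5.33, the minimizer seen in the plots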
+
+

13.2.2 Convexity

+

In our analysis above, we focused our attention on the global minimum of the loss function. You may be wondering: what about the local minimum that’s just to the left?

+

If we had chosen a different starting guess for \(\theta\), or a different value for the learning rate \(\alpha\), our algorithm may have gotten “stuck” and converged on the local minimum, rather than on the true optimum value of loss.

+
+ + + + +
+local +
+
+

If the loss function is convex, gradient descent is guaranteed to converge and find the global minimum of the objective function. Formally, a function \(f\) is convex if: \[tf(a) + (1-t)f(b) \geq f(ta + (1-t)b)\] for all \(a, b\) in the domain of \(f\) and \(t \in [0, 1]\).

+

To put this into words: if you drew a line between any two points on the curve, all values on the curve must be on or below the line. Importantly, any local minimum of a convex function is also its global minimum so we avoid the situation where the algorithm converges on some critical point that is not the minimum of the function.

+
+ + + + +
+convex +
+
+

In summary, non-convex loss functions can cause problems with optimization. This means that our choice of loss function is a key factor in our modeling process. It turns out that MSE is convex, which is a major reason why it is such a popular choice of loss function. Gradient descent is only guaranteed to converge (given enough iterations and an appropriate step size) for convex functions.

+
+
+

13.2.3 Gradient Descent in 1 Dimension

+
+

Terminology clarification: In past lectures, we have used “loss” to refer to the error incurred on a single datapoint. In applications, we usually care more about the average error across all datapoints. Going forward, we will take the “model’s loss” to mean the model’s average error across the dataset. This is sometimes also known as the empirical risk, cost function, or objective function. \[L(\theta) = R(\theta) = \frac{1}{n} \sum_{i=1}^{n} L(y_i, \hat{y}_i)\]

+
+

In our discussion above, we worked with some arbitrary function \(f\). As data scientists, we will almost always work with gradient descent in the context of optimizing models – specifically, we want to apply gradient descent to find the minimum of a loss function. In a modeling context, our goal is to minimize a loss function by choosing the minimizing model parameters.

+

Recall our modeling workflow from the past few lectures:

+
    +
  1. Define a model with some parameters \(\theta_i\)
  2. +
  3. Choose a loss function
  4. +
  5. Select the values of \(\theta_i\) that minimize the loss function on the data
  6. +
+

Gradient descent is a powerful technique for completing this last task. By applying the gradient descent algorithm, we can select values for our parameters \(\theta_i\) that will lead to the model having minimal loss on the training data.

+

When using gradient descent in a modeling context, we:

+
    +
  1. Make guesses for the minimizing \(\theta_i\)
  2. +
  3. Compute the derivative of the loss function \(L\)
  4. +
+

We can “translate” our gradient descent rule from before by replacing \(x\) with \(\theta\) and \(f\) with \(L\):

+

\[\theta^{(t+1)} = \theta^{(t)} - \alpha \frac{d}{d\theta} L(\theta^{(t)})\]

+
+

13.2.3.1 Gradient Descent on the tips Dataset

+

To see this in action, let’s consider a case where we have a linear model with no offset. We want to predict the tip (y) given the price of a meal (x). To do this, we

+
    +
  • Choose a model: \(\hat{y} = \theta_1 x\),
  • +
  • Choose a loss function: \(L(\theta) = MSE(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \theta_1x_i)^2\).
  • +
+

Let’s apply our gradient_descent function (defined in the code below) to optimize our model on the tips dataset. We will try to select the best parameter \(\theta_1\) to predict the tip \(y\) from the total_bill \(x\).

+
+
df = sns.load_dataset("tips")
+df.head()
+
+
   total_bill   tip     sex smoker  day    time  size
0       16.99  1.01  Female     No  Sun  Dinner     2
1       10.34  1.66    Male     No  Sun  Dinner     3
2       21.01  3.50    Male     No  Sun  Dinner     3
3       23.68  3.31    Male     No  Sun  Dinner     2
4       24.59  3.61  Female     No  Sun  Dinner     4
+
+
+

We can visualize the value of the MSE on our dataset for different possible choices of \(\theta_1\). To optimize our model, we want to select the value of \(\theta_1\) that leads to the lowest MSE.

+
+
+Code +
import plotly.graph_objects as go
+
+def derivative_arbitrary(x):
+    return (4*x**3 - 45*x**2 + 160*x - 180)/10
+
+fig = go.Figure()
+roots = np.array([2.3927, 3.5309, 5.3263])
+
+fig.add_trace(go.Scatter(x = xs, y = arbitrary(xs), 
+                         mode = "lines", name = "f"))
+fig.add_trace(go.Scatter(x = xs, y = derivative_arbitrary(xs), 
+                         mode = "lines", name = "df", line = {"dash": "dash"}))
+fig.add_trace(go.Scatter(x = np.array(roots), y = 0*roots, 
+                         mode = "markers", name = "df = zero", marker_size = 12))
+fig.update_layout(font_size = 20, yaxis_range=[-1, 3])
+fig.update_layout(autosize=False, width=800, height=600)
+fig.show()
+
+
+
+
+
+

To apply gradient descent, we need to compute the derivative of the loss function with respect to our parameter \(\theta_1\).

+
    +
  • Given our loss function, \[L(\theta) = MSE(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \theta_1x_i)^2\]
  • +
  • We take the derivative with respect to \(\theta_1\) \[\frac{\partial}{\partial \theta_{1}} L(\theta_1^{(t)}) = \frac{-2}{n} \sum_{i=1}^n (y_i - \theta_1^{(t)} x_i) x_i\]
  • +
  • Which results in the gradient descent update rule \[\theta_1^{(t+1)} = \theta_1^{(t)} - \alpha \frac{d}{d\theta}L(\theta_1^{(t)})\]
  • +
+

for some learning rate \(\alpha\).

+

Implementing this in code, we can visualize the MSE loss on the tips data. MSE is convex, so there is one global minimum.

+
+
+Code +
def gradient_descent(df, initial_guess, alpha, n):
+    """Performs n steps of gradient descent on df using learning rate alpha starting
+       from initial_guess. Returns a numpy array of all guesses over time."""
+    guesses = [initial_guess]
+    current_guess = initial_guess
+    while len(guesses) < n:
+        current_guess = current_guess - alpha * df(current_guess)
+        guesses.append(current_guess)
+        
+    return np.array(guesses)
+
+def mse_single_arg(theta_1):
+    """Returns the MSE on our data for the given theta1"""
+    x = df["total_bill"]
+    y_obs = df["tip"]
+    y_hat = theta_1 * x
+    return np.mean((y_hat - y_obs) ** 2)
+
+def mse_loss_derivative_single_arg(theta_1):
+    """Returns the derivative of the MSE on our data for the given theta1"""
+    x = df["total_bill"]
+    y_obs = df["tip"]
+    y_hat = theta_1 * x
+    
+    return np.mean(2 * (y_hat - y_obs) * x)
+
+loss_df = pd.DataFrame({"theta_1":np.linspace(-1.5, 1), "MSE":[mse_single_arg(theta_1) for theta_1 in np.linspace(-1.5, 1)]})
+
+trajectory = gradient_descent(mse_loss_derivative_single_arg, -0.5, 0.0001, 100)
+
+plt.plot(loss_df["theta_1"], loss_df["MSE"])
+plt.scatter(trajectory, [mse_single_arg(guess) for guess in trajectory], c="white", edgecolor="firebrick")
+plt.scatter(trajectory[-1], mse_single_arg(trajectory[-1]), c="firebrick")
+plt.xlabel(r"$\theta_1$")
+plt.ylabel(r"$L(\theta_1)$");
+
+print(f"Final guess for theta_1: {trajectory[-1]}")
+
+
+
Final guess for theta_1: 0.14369554654231262
+
+
+
+
+

+
+
+
+
+
+
+
+

13.2.4 Gradient Descent on Multi-Dimensional Models

+

The function we worked with above was one-dimensional – we were only minimizing the function with respect to a single parameter, \(\theta\). However, models usually have a cost function with multiple parameters that need to be optimized. For example, simple linear regression has 2 parameters: \[\hat{y} = \theta_0 + \theta_1x\] and multiple linear regression has \(p+1\) parameters: \[\hat{\mathbb{Y}} = \theta_0 + \theta_1 \Bbb{X}_{:,1} + \theta_2 \Bbb{X}_{:,2} + \cdots + \theta_p \Bbb{X}_{:,p}\]

+

We’ll need to expand gradient descent so we can update our guesses for all model parameters all in one go.

+

With multiple parameters to optimize, we consider a loss surface, or the model’s loss for a particular combination of possible parameter values.

+
+
+Code +
import plotly.graph_objects as go
+
+
+def mse_loss(theta, X, y_obs):
+    y_hat = X @ theta
+    return np.mean((y_hat - y_obs) ** 2)    
+
+tips_with_bias = df.copy()
+tips_with_bias["bias"] = 1
+tips_with_bias = tips_with_bias[["bias", "total_bill"]]
+
+uvalues = np.linspace(0, 2, 10)
+vvalues = np.linspace(-0.1, 0.35, 10)
+(u,v) = np.meshgrid(uvalues, vvalues)
+thetas = np.vstack((u.flatten(),v.flatten()))
+
+def mse_loss_single_arg(theta):
+    return mse_loss(theta, tips_with_bias, df["tip"])
+
+MSE = np.array([mse_loss_single_arg(t) for t in thetas.T])
+
+loss_surface = go.Surface(x=u, y=v, z=np.reshape(MSE, u.shape))
+
+ind = np.argmin(MSE)
+optimal_point = go.Scatter3d(name = "Optimal Point",
+    x = [thetas.T[ind,0]], y = [thetas.T[ind,1]], 
+    z = [MSE[ind]],
+    marker=dict(size=10, color="red"))
+
+fig = go.Figure(data=[loss_surface, optimal_point])
+fig.update_layout(scene = dict(
+    xaxis_title = "theta0",
+    yaxis_title = "theta1",
+    zaxis_title = "MSE"), autosize=False, width=800, height=600)
+
+fig.show()
+
+
+
+
+
+

We can also visualize a bird’s-eye view of the loss surface from above using a contour plot:

+
+
+Code +
contour = go.Contour(x=u[0], y=v[:, 0], z=np.reshape(MSE, u.shape))
+fig = go.Figure(contour)
+fig.update_layout(
+    xaxis_title = "theta0",
+    yaxis_title = "theta1", autosize=False, width=800, height=600)
+
+fig.show()
+
+
+
+
+
+
+

13.2.4.1 The Gradient Vector

+

As before, the derivative of the loss function tells us the best way towards the minimum value.

+

On a 2D (or higher-dimensional) loss surface, the direction of steepest descent is described by a vector: the negative of the gradient.

+
+ + + + +
+loss_surface +
+
+
+

Math Aside: Partial Derivatives

+
+
+
    +
  • For an equation with multiple variables, we take a partial derivative by differentiating with respect to just one variable at a time. The partial derivative is denoted with a \(\partial\). Intuitively, we want to see how the function changes if we only vary one variable while holding other variables constant.
  • +
  • Using \(f(x, y) = 3x^2 + y\) as an example, +
      +
    • taking the partial derivative with respect to x and treating y as a constant gives us \(\frac{\partial f}{\partial x} = 6x\)
    • +
    • taking the partial derivative with respect to y and treating x as a constant gives us \(\frac{\partial f}{\partial y} = 1\)
    • +
  • +
+
+

For the vector of parameter values \(\vec{\theta} = \begin{bmatrix} + \theta_{0} \\ + \theta_{1} \\ + \end{bmatrix}\), we take the partial derivative of loss with respect to each parameter: \(\frac{\partial L}{\partial \theta_0}\) and \(\frac{\partial L}{\partial \theta_1}\).

+
+

For example, consider the 2D function: \[f(\theta_0, \theta_1) = 8 \theta_0^2 + 3\theta_0\theta_1\] For a function of 2 variables \(f(\theta_0, \theta_1)\), we define the gradient \[ +\begin{align} +\frac{\partial f}{\partial \theta_{0}} &= 16\theta_0 + 3\theta_1 \\ +\frac{\partial f}{\partial \theta_{1}} &= 3\theta_0 \\ +\nabla_{\vec{\theta}} f(\vec{\theta}) &= \begin{bmatrix} 16\theta_0 + 3\theta_1 \\ 3\theta_0 \\ \end{bmatrix} +\end{align} +\]

+
+

The gradient vector of a generic function of \(p+1\) variables is therefore \[\nabla_{\vec{\theta}} L = \begin{bmatrix} \frac{\partial L}{\partial \theta_0} \\ \frac{\partial L}{\partial \theta_1} \\ \vdots \end{bmatrix}\] Note that \(\nabla_{\vec{\theta}} L\) points in the uphill direction of steepest ascent, so we step along \(-\nabla_{\vec{\theta}} L\) to move downhill on the surface. We can interpret each component of the gradient as answering: “If I nudge the \(i\)th model weight, what happens to loss?”

+

We can use this to update our 1D gradient rule for models with multiple parameters.

+
    +
  • Recall our 1D update rule: \[\theta^{(t+1)} = \theta^{(t)} - \alpha \frac{d}{d\theta}L(\theta^{(t)})\]

  • +
  • For models with multiple parameters, we work in terms of vectors: \[\begin{bmatrix} + \theta_{0}^{(t+1)} \\ + \theta_{1}^{(t+1)} \\ + \vdots + \end{bmatrix} = \begin{bmatrix} + \theta_{0}^{(t)} \\ + \theta_{1}^{(t)} \\ + \vdots + \end{bmatrix} - \alpha \begin{bmatrix} + \frac{\partial L}{\partial \theta_{0}} \\ + \frac{\partial L}{\partial \theta_{1}} \\ + \vdots \\ + \end{bmatrix}\]

  • +
  • Written in a more compact form, \[\vec{\theta}^{(t+1)} = \vec{\theta}^{(t)} - \alpha \nabla_{\vec{\theta}} L(\theta^{(t)}) \]

    +
      +
    • \(\theta\) is a vector with our model weights
    • +
    • \(L\) is the loss function
    • +
    • \(\alpha\) is the learning rate (ours is constant, but other techniques use an \(\alpha\) that decreases over time)
    • +
    • \(\vec{\theta}^{(t)}\) is the current value of \(\theta\)
    • +
    • \(\vec{\theta}^{(t+1)}\) is the next value of \(\theta\)
    • +
    • \(\nabla_{\vec{\theta}} L(\theta^{(t)})\) is the gradient of the loss function evaluated at the current \(\vec{\theta}^{(t)}\)
    • +
  • +
+
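
Here is a minimal sketch (not the lecture's implementation) of the vectorized update rule above applied to the two-parameter tips model, using the fact that for MSE the gradient is \(\frac{2}{n}\mathbb{X}^T(\mathbb{X}\theta - \mathbb{Y})\). The learning rate and iteration count are arbitrary choices.

# Batch gradient descent on MSE for tip ≈ theta_0 + theta_1 * total_bill.
import numpy as np
import seaborn as sns

tips = sns.load_dataset("tips")
X = np.column_stack([np.ones(len(tips)), tips["total_bill"]])   # bias column + feature
Y = tips["tip"].to_numpy()

theta = np.zeros(2)
alpha = 0.0001            # small because total_bill is not standardized
for _ in range(200_000):
    gradient = (2 / len(Y)) * X.T @ (X @ theta - Y)   # gradient of MSE at theta
    theta = theta - alpha * gradient

print(theta)                                           # moves toward the OLS solution
print(np.linalg.inv(X.T @ X) @ X.T @ Y)                # OLS solution for comparison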
+
+
+

13.2.5 Batch Gradient Descent and Stochastic Gradient Descent

+

Formally, the algorithm we derived above is called batch gradient descent. For each iteration of the algorithm, the derivative of loss is computed across the entire batch of all \(n\) datapoints. While this update rule works well in theory, it is not practical in most circumstances. For large datasets (with perhaps billions of datapoints), finding the gradient across all the data is incredibly computationally taxing; gradient descent will converge slowly because each individual update is slow.

+

Stochastic (mini-batch) gradient descent tries to address this issue. In stochastic descent, only a sample of the full dataset is used at each update. We estimate the true gradient of the loss surface using just that sample of data. The batch size is the number of data points used in each sample. The sampling strategy is generally without replacement (the data is shuffled, and successive batches of batch size examples are drawn until every point has been used).

+

Each complete “pass” through the data is known as a training epoch. After shuffling the data, in a single training epoch of stochastic gradient descent, we

+
    +
  • Compute the gradient on the first x% of the data. Update the parameter guesses.
  • +
  • Compute the gradient on the next x% of the data. Update the parameter guesses.
  • +
  • \(\dots\)
  • +
  • Compute the gradient on the last x% of the data. Update the parameter guesses.
  • +
+

Every data point appears once in a single training epoch. We then perform several training epochs until we’re satisfied.
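
A minimal sketch of one such training epoch for MSE (using the same design-matrix conventions as the batch sketch earlier; the batch size is an arbitrary choice). Calling this function repeatedly corresponds to running multiple training epochs.

# One training epoch of mini-batch SGD on MSE: shuffle, then update on each batch.
import numpy as np

def sgd_epoch(X, Y, theta, alpha, batch_size):
    """Shuffle the data, then update theta once per mini-batch; return the new theta."""
    indices = np.random.permutation(len(Y))               # sample without replacement
    for start in range(0, len(Y), batch_size):
        batch = indices[start:start + batch_size]
        X_b, Y_b = X[batch], Y[batch]
        grad_estimate = (2 / len(batch)) * X_b.T @ (X_b @ theta - Y_b)
        theta = theta - alpha * grad_estimate              # update using the estimate
    return theta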

+

Batch gradient descent is a deterministic technique – because the entire dataset is used at each update iteration, the algorithm will always advance towards the minimum of the loss surface. In contrast, stochastic gradient descent involves an element of randomness. Since only a subset of the full data is used to update the guess for \(\vec{\theta}\) at each iteration, there’s a chance the algorithm will not progress towards the true minimum of loss with each update. Over the longer term, these stochastic techniques should still converge towards the optimal solution.

+

The diagrams below represent a “bird’s eye view” of a loss surface from above. Notice that batch gradient descent takes a direct path towards the optimal \(\hat{\theta}\). Stochastic gradient descent, in contrast, “hops around” on its path to the minimum point on the loss surface. This reflects the randomness of the sampling process at each update step.

+
+ + + + +
+stochastic +
+
+

To summarize the tradeoffs of batch size:

|      | Smaller Batch Size | Larger Batch Size |
|------|--------------------|-------------------|
| Pros | More frequent gradient updates | Leverage hardware acceleration to improve overall system performance; higher-quality gradient updates |
| Cons | More variability in the gradient estimates | Less frequent gradient updates |

The typical solution is to set batch size to ensure sufficient hardware utilization.

+ + + + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/gradient_descent/gradient_descent_files/figure-html/cell-21-output-2.png b/docs/gradient_descent/gradient_descent_files/figure-html/cell-21-output-2.png new file mode 100644 index 000000000..6d65a353e Binary files /dev/null and b/docs/gradient_descent/gradient_descent_files/figure-html/cell-21-output-2.png differ diff --git a/docs/gradient_descent/images/arbitrary.png b/docs/gradient_descent/images/arbitrary.png new file mode 100644 index 000000000..06bb4fb69 Binary files /dev/null and b/docs/gradient_descent/images/arbitrary.png differ diff --git a/docs/gradient_descent/images/convex.png b/docs/gradient_descent/images/convex.png new file mode 100644 index 000000000..72bf6a47a Binary files /dev/null and b/docs/gradient_descent/images/convex.png differ diff --git a/docs/gradient_descent/images/grad_descent_1.png b/docs/gradient_descent/images/grad_descent_1.png new file mode 100644 index 000000000..8361821fe Binary files /dev/null and b/docs/gradient_descent/images/grad_descent_1.png differ diff --git a/docs/gradient_descent/images/grad_descent_2.png b/docs/gradient_descent/images/grad_descent_2.png new file mode 100644 index 000000000..9c320b2a8 Binary files /dev/null and b/docs/gradient_descent/images/grad_descent_2.png differ diff --git a/docs/gradient_descent/images/grad_descent_3.png b/docs/gradient_descent/images/grad_descent_3.png new file mode 100644 index 000000000..a93a9f67a Binary files /dev/null and b/docs/gradient_descent/images/grad_descent_3.png differ diff --git a/docs/gradient_descent/images/local.png b/docs/gradient_descent/images/local.png new file mode 100644 index 000000000..d753299ad Binary files /dev/null and b/docs/gradient_descent/images/local.png differ diff --git a/docs/gradient_descent/images/loss_surface.png b/docs/gradient_descent/images/loss_surface.png new file mode 100644 index 000000000..47fdc3089 Binary files /dev/null and b/docs/gradient_descent/images/loss_surface.png differ diff --git a/docs/gradient_descent/images/neg_step.png b/docs/gradient_descent/images/neg_step.png new file mode 100644 index 000000000..92b4d0e6c Binary files /dev/null and b/docs/gradient_descent/images/neg_step.png differ diff --git a/docs/gradient_descent/images/pos_step.png b/docs/gradient_descent/images/pos_step.png new file mode 100644 index 000000000..61f9ccd84 Binary files /dev/null and b/docs/gradient_descent/images/pos_step.png differ diff --git a/docs/gradient_descent/images/step.png b/docs/gradient_descent/images/step.png new file mode 100644 index 000000000..712933064 Binary files /dev/null and b/docs/gradient_descent/images/step.png differ diff --git a/docs/gradient_descent/images/stochastic.png b/docs/gradient_descent/images/stochastic.png new file mode 100644 index 000000000..122862722 Binary files /dev/null and b/docs/gradient_descent/images/stochastic.png differ diff --git a/docs/index.html b/docs/index.html index 70659ad0a..a6e30b14f 100644 --- a/docs/index.html +++ b/docs/index.html @@ -132,6 +132,156 @@ 1  Introduction + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/inference_causality/images/bootstrap.png b/docs/inference_causality/images/bootstrap.png new file mode 100644 index 000000000..64330be7e Binary files /dev/null and b/docs/inference_causality/images/bootstrap.png differ diff --git a/docs/inference_causality/images/bootstrapped_samples.png b/docs/inference_causality/images/bootstrapped_samples.png new file mode 100644 index 000000000..7faccee50 Binary files /dev/null and 
b/docs/inference_causality/images/bootstrapped_samples.png differ diff --git a/docs/inference_causality/images/confounder.png b/docs/inference_causality/images/confounder.png new file mode 100644 index 000000000..acc3b1b59 Binary files /dev/null and b/docs/inference_causality/images/confounder.png differ diff --git a/docs/inference_causality/images/experiment.png b/docs/inference_causality/images/experiment.png new file mode 100644 index 000000000..735d58d0c Binary files /dev/null and b/docs/inference_causality/images/experiment.png differ diff --git a/docs/inference_causality/images/observational.png b/docs/inference_causality/images/observational.png new file mode 100644 index 000000000..5d1ae856d Binary files /dev/null and b/docs/inference_causality/images/observational.png differ diff --git a/docs/inference_causality/images/plover_eggs.jpg b/docs/inference_causality/images/plover_eggs.jpg new file mode 100644 index 000000000..eb957e921 Binary files /dev/null and b/docs/inference_causality/images/plover_eggs.jpg differ diff --git a/docs/inference_causality/images/population_samples.png b/docs/inference_causality/images/population_samples.png new file mode 100644 index 000000000..594a34dbf Binary files /dev/null and b/docs/inference_causality/images/population_samples.png differ diff --git a/docs/inference_causality/inference_causality.html b/docs/inference_causality/inference_causality.html new file mode 100644 index 000000000..51c107f7e --- /dev/null +++ b/docs/inference_causality/inference_causality.html @@ -0,0 +1,2261 @@ + + + + + + + + + +19  Causal Inference and Confounding – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

19  Causal Inference and Confounding

+
+ + + +
+ + + + +
+ + + +
+ + + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Construct confidence intervals for hypothesis testing using bootstrapping
  • +
  • Understand the assumptions we make and their impact on our regression inference
  • +
  • Explore ways to overcome issues of multicollinearity
  • +
  • Compare regression, correlation, and causation
  • +
+
+
+
+

Last time, we introduced the idea of random variables and how they affect the data and model we construct. We also demonstrated the decomposition of model risk from a fitted model and dived into the bias-variance tradeoff.

+

In this lecture, we will explore regression inference via hypothesis testing, understand how to use bootstrapping under the right assumptions, and consider the environment of understanding causality in theory and in practice.

+
+

19.1 Parameter Inference: Interpreting Regression Coefficients

+

There are two main reasons why we build models:

+
    +
  1. Prediction: using our model to make accurate predictions about unseen data
  2. +
  3. Inference: using our model to draw conclusions about the underlying relationship(s) between our features and response. We want to understand the complex phenomena occurring in the world we live in. (Note: while training is the process of fitting a model, the machine learning community often uses “inference” to mean making predictions with a trained model; in this lecture, we use it in the statistical sense of drawing conclusions about the underlying parameters.)
  4. +
+

Recall the framework we established in the last lecture. The relationship between datapoints is given by \(Y = g(x) + \epsilon\), where \(g(x)\) is the true underlying relationship, and \(\epsilon\) represents randomness. If we assume \(g(x)\) is linear, we can express this relationship in terms of the unknown, true model parameters \(\theta\).

+

\[f_{\theta}(x) = g(x) + \epsilon = \theta_0 + \theta_1 x_1 + \ldots + \theta_p x_p + \epsilon\]

+

Our model attempts to estimate each true population parameter \(\theta_i\) using the sample estimates \(\hat{\theta}_i\) calculated from the design matrix \(\Bbb{X}\) and response vector \(\Bbb{Y}\).

+

\[f_{\hat{\theta}}(x) = \hat{\theta}_0 + \hat{\theta}_1 x_1 + \ldots + \hat{\theta}_p x_p\]

+

Let’s pause for a moment. At this point, we’re very used to working with the idea of a model parameter. But what exactly does each coefficient \(\theta_i\) actually mean? We can think of each \(\theta_i\) as a slope of the linear model. If all other variables are held constant, a unit change in \(x_i\) will result in a \(\theta_i\) change in \(f_{\theta}(x)\). Broadly speaking, a large value of \(\theta_i\) means that the feature \(x_i\) has a large effect on the response; conversely, a small value of \(\theta_i\) means that \(x_i\) has little effect on the response. In the extreme case, if the true parameter \(\theta_i\) is 0, then the feature \(x_i\) has no effect on \(Y(x)\).

+

If the true parameter \(\theta_i\) for a particular feature is 0, this tells us something pretty significant about the world: there is no underlying relationship between \(x_i\) and \(Y(x)\)! But how can we test if a parameter is actually 0? As a baseline, we go through our usual process of drawing a sample, using this data to fit a model, and computing an estimate \(\hat{\theta}_i\). However, we also need to consider that if our random sample comes out differently, we may find a different result for \(\hat{\theta}_i\). To infer if the true parameter \(\theta_i\) is 0, we want to draw our conclusion from the distribution of \(\hat{\theta}_i\) estimates we could have drawn across all other random samples. This is where hypothesis testing comes in handy!

+

To test if the true parameter \(\theta_i\) is 0, we construct a hypothesis test where our null hypothesis states that the true parameter \(\theta_i\) is 0, and the alternative hypothesis states that the true parameter \(\theta_i\) is not 0. If our p-value is smaller than our cutoff value (usually p = 0.05), we reject the null hypothesis in favor of the alternative hypothesis.

+
+
+

19.2 Review: Bootstrap Resampling

+

To determine the properties (e.g., variance) of the sampling distribution of an estimator, we’d need access to the population. Ideally, we’d want to consider all possible samples in the population, compute an estimate for each sample, and study the distribution of those estimates.

+

+y_hat +

+

However, this can be quite expensive and time-consuming. Even more importantly, we don’t have access to the population; we only have one random sample from the population. How can we consider all possible samples if we only have one?

+

Bootstrapping comes in handy here! With bootstrapping, we treat our random sample as a “population” and resample from it with replacement. Intuitively, a random sample resembles the population (if it is big enough), so a random resample also resembles a random sample of the population. When sampling, there are a couple things to keep in mind:

+
    +
  • We need to sample the same way we constructed the original sample. Typically, this involves taking a simple random sample with replacement.
  • +
  • New samples must be the same size as the original sample, so that we accurately model the variability of our estimates.
  • +
+
+
+
+ +
+
+Why must we resample with replacement? +
+
+
+
+
+

Given an original sample of size \(n\), we want a resample that has the same size \(n\) as the original. Sampling without replacement will give us the original sample with shuffled rows. Hence, when we calculate summary statistics like the average, our sample without replacement will always have the same average as the original sample, defeating the purpose of a bootstrap.
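A quick way to see this point is the minimal sketch below, which uses a hypothetical array of 10 values (the data and seed are made up for illustration): resampling without replacement always reproduces the original mean exactly, while resampling with replacement does not.

Code
import numpy as np

np.random.seed(42)  # arbitrary seed chosen for this sketch
data = np.random.normal(loc=5, scale=2, size=10)  # a hypothetical "original sample"

# Without replacement, a resample of size n is just a shuffle of the original sample,
# so its mean is always identical to the original mean
means_without = [np.random.choice(data, size=len(data), replace=False).mean() for _ in range(5)]

# With replacement, resample means vary around the original mean
means_with = [np.random.choice(data, size=len(data), replace=True).mean() for _ in range(5)]

print("Original mean:            ", data.mean())
print("Means without replacement:", means_without)
print("Means with replacement:   ", means_with)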

+
+
+
+

+y_hat +

+

Bootstrap resampling is a technique for estimating the sampling distribution of an estimator. To execute it, we can follow the pseudocode below:

+
collect a random sample of size n (called the bootstrap population)
+
+initiate a list of estimates
+
+repeat 10,000 times:
+    resample with replacement from the bootstrap population
+    apply estimator f to the resample
+    store in list
+
+list of estimates is the bootstrapped sampling distribution of f
+
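As a minimal concrete sketch of this pseudocode (the one-dimensional array and the choice of the sample mean as the estimator are hypothetical, made up for illustration):

Code
import numpy as np

np.random.seed(0)  # arbitrary seed for this sketch
sample = np.random.normal(loc=10, scale=3, size=50)  # hypothetical "bootstrap population"

estimates = []
for _ in range(10000):
    # resample with replacement from the bootstrap population
    resample = np.random.choice(sample, size=len(sample), replace=True)
    # apply the estimator (here, the sample mean) to the resample and store it
    estimates.append(resample.mean())

# `estimates` is now the bootstrapped sampling distribution of the mean;
# for example, we can read off an approximate 95% confidence interval
print(np.percentile(estimates, [2.5, 97.5]))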

How well does bootstrapping actually represent our population? The bootstrapped sampling distribution of an estimator does not exactly match the sampling distribution of that estimator, but it is often close. Similarly, the variance of the bootstrapped distribution is often close to the true variance of the estimator. The example below displays the results of different bootstraps from a known population using a sample size of \(n=50\).

+

+y_hat +

+

In the real world, we don’t know the population distribution. The center of the bootstrapped distribution is the estimator applied to our original sample, so we have no way of understanding the estimator’s true expected value; the center and spread of our bootstrap are approximations. The quality of our bootstrapped distribution also depends on the quality of our original sample. If our original sample was not representative of the population (like Sample 5 in the image above), then the bootstrap is next to useless. In general, bootstrapping works better for large samples, when the population distribution is not heavily skewed (no outliers), and when the estimator is “low variance” (insensitive to extreme values).

+ +

Although our bootstrapped sample distribution does not exactly match the sampling distribution of the population, we can see that it is relatively close. This demonstrates the benefit of bootstrapping: without knowing the actual population distribution, we can still roughly approximate the true slope for the model by using only a single random sample of 20 cars.

+ +
+
+

19.3 Collinearity

+
+

19.3.1 Hypothesis Testing Through Bootstrap: Snowy Plover Demo

+

We can conduct the hypothesis testing described earlier through bootstrapping (this equivalence can be proven through the duality argument, which is out of scope for this class). We use bootstrapping to compute approximate 95% confidence intervals for each \(\theta_i\). If the interval doesn’t contain 0, we reject the null hypothesis at the 5% significance level. Otherwise, the data are consistent with the null hypothesis, as the true parameter could plausibly be 0.

+

To show an example of this hypothesis testing process, we’ll work with the snowy plover dataset throughout this section. The data are about the eggs and newly hatched chicks of the Snowy Plover. The data were collected at the Point Reyes National Seashore by a former student at Berkeley. Here’s a parent bird and some eggs.

+

+bvt +

+

Note that Egg Length and Egg Breadth (widest diameter) are measured in millimeters, and Egg Weight and Bird Weight are measured in grams. For reference, a standard paper clip weighs about one gram.

+
+
+Code +
import pandas as pd
+eggs = pd.read_csv("data/snowy_plover.csv")
+eggs.head(5)
+
+
+
   egg_weight  egg_length  egg_breadth  bird_weight
0         7.4       28.80        21.84          5.2
1         7.7       29.04        22.45          5.4
2         7.9       29.36        22.48          5.6
3         7.5       30.10        21.71          5.3
4         8.3       30.17        22.75          5.9
+
+
+

Our goal will be to predict the weight of a newborn plover chick, which we assume follows the true relationship \(Y = f_{\theta}(x)\) below.

+

\[\text{bird\_weight} = \theta_0 + \theta_1 \text{egg\_weight} + \theta_2 \text{egg\_length} + \theta_3 \text{egg\_breadth} + \epsilon\]

+

Note that for each \(i\), the parameter \(\theta_i\) is a fixed number, but it is unobservable. We can only estimate it. The random error \(\epsilon\) is also unobservable, but it is assumed to have expectation 0 and be independent and identically distributed across eggs.

+

Say we wish to determine if the egg_weight impacts the bird_weight of a chick – we want to infer if \(\theta_1\) is equal to 0.

+

First, we define our hypotheses:

+
    +
  • Null hypothesis: the true parameter \(\theta_1\) is 0; any variation is due to random chance.
  • +
  • Alternative hypothesis: the true parameter \(\theta_1\) is not 0.
  • +
+

Next, we use our data to fit a model \(\hat{Y} = f_{\hat{\theta}}(x)\) that approximates the relationship above. This gives us the observed value of \(\hat{\theta}_1\) from our data.

+
+
from sklearn.linear_model import LinearRegression
+import numpy as np
+
+X = eggs[["egg_weight", "egg_length", "egg_breadth"]]
+Y = eggs["bird_weight"]
+
+model = LinearRegression()
+model.fit(X, Y)
+
+# This gives an array containing the fitted model parameter estimates
+thetas = model.coef_
+
+# Put the parameter estimates in a nice table for viewing
+display(pd.DataFrame(
+  [model.intercept_] + list(model.coef_),
+  columns=['theta_hat'],
+  index=['intercept', 'egg_weight', 'egg_length', 'egg_breadth']
+))
+
+print("RMSE", np.mean((Y - model.predict(X)) ** 2))
+
+
             theta_hat
intercept    -4.605670
egg_weight    0.431229
egg_length    0.066570
egg_breadth   0.215914
+
+
+
MSE 0.045470853802757547
+
+
+

Our single sample of data gives us the value of \(\hat{\theta}_1=0.431\). To get a sense of how this estimate might vary if we were to draw different random samples, we will use bootstrapping. As a refresher, to construct a bootstrap sample, we will draw a resample from the collected data that:

+
    +
  • Has the same sample size as the collected data
  • +
  • Is drawn with replacement (this ensures that we don’t draw the exact same sample every time!)
  • +
+

We draw a bootstrap sample, use this sample to fit a model, and record the result for \(\hat{\theta}_1\) on this bootstrapped sample. We then repeat this process many times to generate a bootstrapped empirical distribution of \(\hat{\theta}_1\). This gives us an estimate of what the true distribution of \(\hat{\theta}_1\) across all possible samples might look like.

+
+
# Set a random seed so you generate the same random sample as staff
+# In the "real world", we wouldn't do this
+import numpy as np
+np.random.seed(1337)
+
+# Set the sample size of each bootstrap sample
+n = len(eggs)
+
+# Create a list to store all the bootstrapped estimates
+estimates = []
+
+# Generate a bootstrap resample from `eggs` and find an estimate for theta_1 using this sample. 
+# Repeat 10000 times.
+for i in range(10000):
+    # draw a bootstrap sample
+    bootstrap_resample = eggs.sample(n, replace=True)
+    X_bootstrap = bootstrap_resample[["egg_weight", "egg_length", "egg_breadth"]]
+    Y_bootstrap = bootstrap_resample["bird_weight"]
+    
+    # use bootstrapped sample to fit a model
+    bootstrap_model = LinearRegression()
+    bootstrap_model.fit(X_bootstrap, Y_bootstrap)
+    bootstrap_thetas = bootstrap_model.coef_
+    
+    # record the result for theta_1
+    estimates.append(bootstrap_thetas[0])
+    
+# calculate the 95% confidence interval 
+lower = np.percentile(estimates, 2.5, axis=0)
+upper = np.percentile(estimates, 97.5, axis=0)
+conf_interval = (lower, upper)
+conf_interval
+
+
(np.float64(-0.2586481195684874), np.float64(1.103424385420405))
+
+
+

Our bootstrapped 95% confidence interval for \(\theta_1\) is \([-0.259, 1.103]\). Immediately, we can see that 0 is indeed contained in this interval – this means that we cannot conclude that \(\theta_1\) is non-zero! More formally, we fail to reject the null hypothesis (that \(\theta_1\) is 0) under a 5% p-value cutoff.

+

We can repeat this process to construct 95% confidence intervals for the other parameters of the model.

+
+
+Code +
np.random.seed(1337)
+
+theta_0_estimates = []
+theta_1_estimates = []
+theta_2_estimates = []
+theta_3_estimates = []
+
+
+for i in range(10000):
+    bootstrap_resample = eggs.sample(n, replace=True)
+    X_bootstrap = bootstrap_resample[["egg_weight", "egg_length", "egg_breadth"]]
+    Y_bootstrap = bootstrap_resample["bird_weight"]
+    
+    bootstrap_model = LinearRegression()
+    bootstrap_model.fit(X_bootstrap, Y_bootstrap)
+    bootstrap_theta_0 = bootstrap_model.intercept_
+    bootstrap_theta_1, bootstrap_theta_2, bootstrap_theta_3 = bootstrap_model.coef_
+    
+    theta_0_estimates.append(bootstrap_theta_0)
+    theta_1_estimates.append(bootstrap_theta_1)
+    theta_2_estimates.append(bootstrap_theta_2)
+    theta_3_estimates.append(bootstrap_theta_3)
+    
+theta_0_lower, theta_0_upper = np.percentile(theta_0_estimates, 2.5), np.percentile(theta_0_estimates, 97.5)
+theta_1_lower, theta_1_upper = np.percentile(theta_1_estimates, 2.5), np.percentile(theta_1_estimates, 97.5)
+theta_2_lower, theta_2_upper = np.percentile(theta_2_estimates, 2.5), np.percentile(theta_2_estimates, 97.5)
+theta_3_lower, theta_3_upper = np.percentile(theta_3_estimates, 2.5), np.percentile(theta_3_estimates, 97.5)
+
+# Make a nice table to view results
+pd.DataFrame({"lower":[theta_0_lower, theta_1_lower, theta_2_lower, theta_3_lower], "upper":[theta_0_upper, \
+                theta_1_upper, theta_2_upper, theta_3_upper]}, index=["theta_0", "theta_1", "theta_2", "theta_3"])
+
+
+
              lower     upper
theta_0  -15.278542  5.161473
theta_1   -0.258648  1.103424
theta_2   -0.099138  0.208557
theta_3   -0.257141  0.758155
+
+
+

Something’s off here. Notice that 0 is included in the 95% confidence interval for every parameter of the model. Using the interpretation we outlined above, this would suggest that we can’t say for certain that any of the input variables impact the response variable! This makes it seem like our model can’t make any predictions – and yet, each model we fit in our bootstrap experiment above could very much make predictions of \(Y\).

+

How can we explain this result? Think back to how we first interpreted the parameters of a linear model. We treated each \(\theta_i\) as a slope, where a unit increase in \(x_i\) leads to a \(\theta_i\) increase in \(Y\), if all other variables are held constant. It turns out that this last assumption is very important. If variables in our model are somehow related to one another, then it might not be possible to have a change in one of them while holding the others constant. This means that our interpretation framework is no longer valid! In the models we fit above, we incorporated egg_length, egg_breadth, and egg_weight as input variables. These variables are very likely related to one another – an egg with large egg_length and egg_breadth will likely be heavy in egg_weight. This means that the model parameters cannot be meaningfully interpreted as slopes.

+

To support this conclusion, we can visualize the relationships between our feature variables. Notice the strong positive association between the features.

+
+
+Code +
import seaborn as sns
+sns.pairplot(eggs[["egg_length", "egg_breadth", "egg_weight", 'bird_weight']]);
+
+
+
+
+

+
+
+
+
+

This issue is known as collinearity, sometimes also called multicollinearity. Collinearity occurs when one feature can be predicted fairly accurately by a linear combination of the other features, which happens when one feature is highly correlated with the others.
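To complement the pairplot above with a number, one quick (though not exhaustive) check is to inspect the pairwise correlations among the features of the eggs DataFrame loaded earlier; the sketch below is one way to do this, and entries near 1 or -1 suggest collinearity.

Code
# Pairwise correlation matrix of the three egg features;
# entries close to 1 indicate strongly related (collinear) features
eggs[["egg_weight", "egg_length", "egg_breadth"]].corr()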

+

Why is collinearity a problem? Its consequences span several aspects of the modeling process:

+
    +
  • Inference: Slopes can’t be interpreted for an inference task.
  • +
  • Model Variance: If features strongly influence one another, even small changes in the sampled data can lead to large changes in the estimated slopes.
  • +
  • Unique Solution: If one feature is a linear combination of the other features, the design matrix will not be full rank, and \(\mathbb{X}^{\top}\mathbb{X}\) is not invertible. This means that least squares does not have a unique solution. See this section of Course Note 12 for more on this.
  • +
+

The take-home point is that we need to be careful with what features we select for modeling. If two features likely encode similar information, it is often a good idea to choose only one of them as an input variable.

+
+
+

19.3.2 A Simpler Model

+

Let us now consider a more interpretable model: we instead assume a true relationship using only egg weight:

+

\[f_\theta(x) = \theta_0 + \theta_1 \text{egg\_weight} + \epsilon\]

+
+
+Code +
from sklearn.linear_model import LinearRegression
+X_int = eggs[["egg_weight"]]
+Y_int = eggs["bird_weight"]
+
+model_int = LinearRegression()
+
+model_int.fit(X_int, Y_int)
+
+# This gives an array containing the fitted model parameter estimates
+thetas_int = model_int.coef_
+
+# Put the parameter estimates in a nice table for viewing
+pd.DataFrame({"theta_hat":[model_int.intercept_, thetas_int[0]]}, index=["theta_0", "theta_1"])
+
+
+
         theta_hat
theta_0  -0.058272
theta_1   0.718515
+
+
+
+
+Code +
import matplotlib.pyplot as plt
+
+# Set a random seed so you generate the same random sample as staff
+# In the "real world", we wouldn't do this
+np.random.seed(1337)
+
+# Set the sample size of each bootstrap sample
+n = len(eggs)
+
+# Create a list to store all the bootstrapped estimates
+estimates_int = []
+
+# Generate a bootstrap resample from `eggs` and find an estimate for theta_1 using this sample. 
+# Repeat 10000 times.
+for i in range(10000):
+    bootstrap_resample_int = eggs.sample(n, replace=True)
+    X_bootstrap_int = bootstrap_resample_int[["egg_weight"]]
+    Y_bootstrap_int = bootstrap_resample_int["bird_weight"]
+    
+    bootstrap_model_int = LinearRegression()
+    bootstrap_model_int.fit(X_bootstrap_int, Y_bootstrap_int)
+    bootstrap_thetas_int = bootstrap_model_int.coef_
+    
+    estimates_int.append(bootstrap_thetas_int[0])
+
+plt.figure(dpi=120)
+sns.histplot(estimates_int, stat="density")
+plt.xlabel(r"$\hat{\theta}_1$")
+plt.title(r"Bootstrapped estimates $\hat{\theta}_1$ Under the Interpretable Model");
+
+
+
+
+

+
+
+
+
+

Notice how the interpretable model performs almost as well as our other model:

+
+
+Code +
from sklearn.metrics import mean_squared_error
+
+mse = mean_squared_error(Y, model.predict(X))
+mse_int = mean_squared_error(Y_int, model_int.predict(X_int))
+print(f'MSE of Original Model: {mse}')
+print(f'MSE of Interpretable Model: {mse_int}')
+
+
+
MSE of Original Model: 0.045470853802757547
+MSE of Interpretable Model: 0.04649394137555684
+
+
+

Yet, the confidence interval for the true parameter \(\theta_{1}\) does not contain zero.

+
+
+Code +
lower_int = np.percentile(estimates_int, 2.5)
+upper_int = np.percentile(estimates_int, 97.5)
+
+conf_interval_int = (lower_int, upper_int)
+conf_interval_int
+
+
+
(np.float64(0.6029335250209632), np.float64(0.8208401738546208))
+
+
+

In retrospect, it’s no surprise that the weight of an egg best predicts the weight of a newly-hatched chick.

+

A model with highly correlated variables prevents us from interpreting how the variables are related to the prediction.

+
+
+

19.3.3 Reminder: Assumptions Matter

+

Keep the following in mind: All inference assumes that the regression model holds.

+
    +
  • If the model doesn’t hold, the inference might not be valid.
  • +
  • If the assumptions of the bootstrap don’t hold, namely that…
    • the sample size n is large, and
    • the sample is representative of the population distribution (drawn i.i.d., unbiased),
    …then the results of the bootstrap might not be valid.
  • +
+
+
+
+

19.4 [Bonus Content]

+

Note: the content in this section is out of scope.

+ +
+

19.4.1 Prediction vs Causation

+

The difference between correlation/prediction vs. causation is best illustrated through examples.

+

Some questions about correlation / prediction include:

+
    +
  • Are homes with granite countertops worth more money?
  • +
  • Is college GPA higher for students who win a certain scholarship?
  • +
  • Are breastfed babies less likely to develop asthma?
  • +
  • Do cancer patients given some aggressive treatment have a higher 5-year survival rate?
  • +
  • Are people who smoke more likely to get cancer?
  • +
+

While these may sound like causal questions, they are not! Questions about causality are about the effects of interventions (not just passive observation). For example:

+
    +
  • How much do granite countertops raise the value of a house?
  • +
  • Does getting the scholarship improve students’ GPAs?
  • +
  • Does breastfeeding protect babies against asthma?
  • +
  • Does the treatment improve cancer survival?
  • +
  • Does smoking cause cancer?
  • +
+

Note, however, that regression coefficients are sometimes called “effects”, which can be deceptive!

+

When using data alone, predictive questions (i.e., are breastfed babies healthier?) can be answered, but causal questions (i.e., does breastfeeding improve babies’ health?) cannot. The reason for this is that there are many possible causes for our predictive question. For example, possible explanations for why breastfed babies are healthier on average include:

+
    +
  • Causal effect: breastfeeding makes babies healthier
  • +
  • Reverse causality: healthier babies more likely to successfully breastfeed
  • +
  • Common cause: healthier / richer parents have healthier babies and are more likely to breastfeed
  • +
+

We cannot tell which explanations are true (or to what extent) just by observing (\(x\),\(y\)) pairs. Additionally, causal questions implicitly involve counterfactuals, events that didn’t happen. For example, we could ask, would the same breastfed babies have been less healthy if they hadn’t been breastfed? Explanation 1 from above implies they would be, but explanations 2 and 3 do not.

+
+
+

19.4.2 Confounders

+

Let T represent a treatment (for example, alcohol use) and Y represent an outcome (for example, lung cancer).

+

confounder

+

A confounder is a variable that affects both T and Y, distorting the correlation between them. Using the example above, rich parents could be a confounder for breastfeeding and a baby’s health. Confounders can be a measured covariate (a feature) or an unmeasured variable we don’t know about, and they generally cause problems, as the relationship between T and Y is affected by data we cannot see. We commonly assume that all confounders are observed (this is also called ignorability).

+
+
+

19.4.3 How to perform causal inference?

+

In a randomized experiment, participants are randomly assigned into two groups: treatment and control. A treatment is applied only to the treatment group. We assume ignorability and gather as many measurements as possible so that we can compare them between the control and treatment groups to determine whether or not the treatment has a true effect or is just a confounding factor.

+

experiment

+

However, often, randomly assigning treatments is impractical or unethical. For example, assigning a treatment of cigarettes to test the effect of smoking on the lungs would not only be impractical but also unethical.

+

An alternative to bypass this issue is to utilize observational studies. This can be done by obtaining two participant groups separated based on some identified treatment variable. Unlike randomized experiments, however, we cannot assume ignorability here: the participants could have separated into two groups based on other covariates! In addition, there could also be unmeasured confounders.

+

observational

+ + + + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/inference_causality/inference_causality_files/figure-html/cell-6-output-1.png b/docs/inference_causality/inference_causality_files/figure-html/cell-6-output-1.png new file mode 100644 index 000000000..b65533584 Binary files /dev/null and b/docs/inference_causality/inference_causality_files/figure-html/cell-6-output-1.png differ diff --git a/docs/inference_causality/inference_causality_files/figure-html/cell-8-output-1.png b/docs/inference_causality/inference_causality_files/figure-html/cell-8-output-1.png new file mode 100644 index 000000000..659f736a6 Binary files /dev/null and b/docs/inference_causality/inference_causality_files/figure-html/cell-8-output-1.png differ diff --git a/docs/intro_lec/introduction.html b/docs/intro_lec/introduction.html index 9765b7773..9e123bb15 100644 --- a/docs/intro_lec/introduction.html +++ b/docs/intro_lec/introduction.html @@ -30,6 +30,7 @@ + @@ -122,6 +123,156 @@ 1  Introduction + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -812,6 +963,9 @@

+ + 2  Pandas I + diff --git a/docs/intro_to_modeling/images/reg_line_1.png b/docs/intro_to_modeling/images/reg_line_1.png new file mode 100644 index 000000000..f85fd0635 Binary files /dev/null and b/docs/intro_to_modeling/images/reg_line_1.png differ diff --git a/docs/intro_to_modeling/images/reg_line_2.png b/docs/intro_to_modeling/images/reg_line_2.png new file mode 100644 index 000000000..10f5246c1 Binary files /dev/null and b/docs/intro_to_modeling/images/reg_line_2.png differ diff --git a/docs/intro_to_modeling/intro_to_modeling.html b/docs/intro_to_modeling/intro_to_modeling.html new file mode 100644 index 000000000..3c515d514 --- /dev/null +++ b/docs/intro_to_modeling/intro_to_modeling.html @@ -0,0 +1,1615 @@ + + + + + + + + + +10  Introduction to Modeling – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

10  Introduction to Modeling

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Understand what models are and how to carry out the four-step modeling process.
  • +
  • Define the concept of loss and gain familiarity with \(L_1\) and \(L_2\) loss.
  • +
  • Fit the Simple Linear Regression model using minimization techniques.
  • +
+
+
+
+

Up until this point in the semester, we’ve focused on analyzing datasets. We’ve looked into the early stages of the data science lifecycle, focusing on the programming tools, visualization techniques, and data cleaning methods needed for data analysis.

+

This lecture marks a shift in focus. We will move away from examining datasets to actually using our data to better understand the world. Specifically, the next sequence of lectures will explore predictive modeling: generating models to make some predictions about the world around us. In this lecture, we’ll introduce the conceptual framework for setting up a modeling task. In the next few lectures, we’ll put this framework into practice by implementing various kinds of models.

+
+

10.1 What is a Model?

+

A model is an idealized representation of a system. A system is a set of principles or procedures according to which something functions. We live in a world full of systems: the procedure of turning on a light happens according to a specific set of rules dictating the flow of electricity. The truth behind how any event occurs is usually complex, and many times the specifics are unknown. The workings of the world can be viewed as its own giant procedure. Models seek to simplify the world and distill it into workable pieces.

+

Example: We model the fall of an object on Earth as subject to a constant acceleration of \(9.81 m/s^2\) due to gravity.

+
    +
  • While this describes the behavior of our system, it is merely an approximation.
  • +
  • It doesn’t account for the effects of air resistance, local variations in gravity, etc.
  • +
  • In practice, it’s accurate enough to be useful!
  • +
+
+

10.1.1 Reasons for Building Models

+

Why do we want to build models? As far as data scientists and statisticians are concerned, there are three reasons, and each implies a different focus on modeling.

+
    +
  1. To explain complex phenomena occurring in the world we live in. Examples of this might be:

    +
      +
    • How is the parents’ average height related to their children’s average height?
    • +
    • How does an object’s velocity and acceleration impact how far it travels? (Physics: \(d = d_0 + vt + \frac{1}{2}at^2\))
    • +
    +

    In these cases, we care about creating models that are simple and interpretable, allowing us to understand what the relationships between our variables are.

  2. +
  3. To make accurate predictions about unseen data. Some examples include:

    +
      +
    • Can we predict if an email is spam or not?
    • +
    • Can we generate a one-sentence summary of this 10-page long article?
    • +
    +

    When making predictions, we care more about making extremely accurate predictions, at the cost of having an uninterpretable model. These are sometimes called black-box models and are common in fields like deep learning.

  4. +
  5. To measure the causal effects of one event on some other event. For example,

    +
      +
    • Does smoking cause lung cancer?
    • +
    • Does a job training program cause increases in employment and wages?
    • +
    +

    This is a much harder question because most statistical tools are designed to infer association, not causation. We will not focus on this task in Data 100, but you can take other advanced classes on causal inference (e.g., Stat 156, Data 102) if you are intrigued!

  6. +
+

Most of the time, we aim to strike a balance between building interpretable models and building accurate models.

+
+
+

10.1.2 Common Types of Models

+

In general, models can be split into two categories:

+
    +
  1. Deterministic physical (mechanistic) models: Laws that govern how the world works.

    +
  2. +
  3. Probabilistic models: Models that attempt to understand how random processes evolve. These are more general and can be used to describe many phenomena in the real world. These models commonly make simplifying assumptions about the nature of the world.

    +
      +
    • Poisson Process models: Used to model random events that happen with some probability at any point in time and are strictly increasing in count, such as the arrival of customers at a store.
    • +
  4. +
+

Note: These specific models are not in the scope of Data 100 and exist to serve as motivation.

+
+
+
+

10.2 Simple Linear Regression

+

The regression line is the unique straight line that minimizes the mean squared error of estimation among all straight lines. As with any straight line, it can be defined by a slope and a y-intercept:

+
    +
  • \(\text{slope} = r \cdot \frac{\text{Standard Deviation of } y}{\text{Standard Deviation of }x}\)
  • +
  • \(y\text{-intercept} = \text{average of }y - \text{slope}\cdot\text{average of }x\)
  • +
  • \(\text{regression estimate} = y\text{-intercept} + \text{slope}\cdot x\)
  • +
  • \(\text{residual} =\text{observed }y - \text{regression estimate}\)
  • +
+
+
+Code +
import pandas as pd
+import numpy as np
+import matplotlib.pyplot as plt
+import seaborn as sns
+# Set random seed for consistency 
+np.random.seed(43)
+plt.style.use('default') 
+
+#Generate random noise for plotting
+x = np.linspace(-3, 3, 100)
+y = x * 0.5 - 1 + np.random.randn(100) * 0.3
+
+#plot regression line
+sns.regplot(x=x,y=y);
+
+
+
+
+

+
+
+
+
+
+

10.2.1 Notations and Definitions

+

For a pair of variables \(x\) and \(y\) representing our data \(\mathcal{D} = \{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}\), we denote their means/averages as \(\bar x\) and \(\bar y\) and standard deviations as \(\sigma_x\) and \(\sigma_y\).

+
+

10.2.1.1 Standard Units

+

A variable is represented in standard units if the following are true:

+
    +
  1. 0 in standard units is equal to the mean (\(\bar{x}\)) in the original variable’s units.
  2. +
  3. An increase of 1 standard unit is an increase of 1 standard deviation (\(\sigma_x\)) in the original variable’s units.
  4. +
+

To convert a variable \(x_i\) into standard units, we subtract its mean from it and divide it by its standard deviation. For example, \(x_i\) in standard units is \(\frac{x_i - \bar x}{\sigma_x}\).
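For instance, a small sketch of this conversion with NumPy (the array of values here is hypothetical, chosen only for illustration):

Code
import numpy as np

x = np.array([1.0, 3.0, 5.0, 7.0])  # hypothetical data
x_su = (x - x.mean()) / x.std()     # subtract the mean, divide by the standard deviation

print(x_su)
print(x_su.mean(), x_su.std())      # approximately 0 and exactly 1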

+
+
+

10.2.1.2 Correlation

+

The correlation (\(r\)) is the average of the product of \(x\) and \(y\), both measured in standard units.

+

\[r = \frac{1}{n} \sum_{i=1}^n (\frac{x_i - \bar{x}}{\sigma_x})(\frac{y_i - \bar{y}}{\sigma_y})\]

+
    +
  1. Correlation measures the strength of a linear association between two variables.
  2. +
  3. Correlations range between -1 and 1: \(|r| \leq 1\), with \(r=1\) indicating a perfect positive linear association and \(r=-1\) indicating a perfect negative linear association. The closer \(r\) is to \(0\), the weaker the linear association is.
  4. +
  5. Correlation says nothing about causation or non-linear association. Correlation does not imply causation. When \(r = 0\), the two variables are uncorrelated. However, they could still be related through some non-linear relationship.
  6. +
+
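To connect this definition to code, the sketch below computes the formula above directly and compares it with np.corrcoef; the x and y arrays and the seed are hypothetical, made up for illustration.

Code
import numpy as np

np.random.seed(0)  # arbitrary seed for this sketch
x = np.random.randn(100)
y = 0.5 * x + np.random.randn(100) * 0.3  # hypothetical, roughly linear data

# correlation from the definition: average product of x and y in standard units
r_manual = np.mean(((x - x.mean()) / x.std()) * ((y - y.mean()) / y.std()))
r_numpy = np.corrcoef(x, y)[0, 1]

print(r_manual, r_numpy)  # the two values agree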
+
+Code +
def plot_and_get_corr(ax, x, y, title):
+    ax.set_xlim(-3, 3)
+    ax.set_ylim(-3, 3)
+    ax.set_xticks([])
+    ax.set_yticks([])
+    ax.scatter(x, y, alpha = 0.73)
+    r = np.corrcoef(x, y)[0, 1]
+    ax.set_title(title + " (corr: {})".format(r.round(2)))
+    return r
+
+fig, axs = plt.subplots(2, 2, figsize = (10, 10))
+
+# Just noise
+x1, y1 = np.random.randn(2, 100)
+corr1 = plot_and_get_corr(axs[0, 0], x1, y1, title = "noise")
+
+# Strong linear
+x2 = np.linspace(-3, 3, 100)
+y2 = x2 * 0.5 - 1 + np.random.randn(100) * 0.3
+corr2 = plot_and_get_corr(axs[0, 1], x2, y2, title = "strong linear")
+
+# Unequal spread
+x3 = np.linspace(-3, 3, 100)
+y3 = - x3/3 + np.random.randn(100)*(x3)/2.5
+corr3 = plot_and_get_corr(axs[1, 0], x3, y3, title = "unequal spread")
+extent = axs[1, 0].get_window_extent().transformed(fig.dpi_scale_trans.inverted())
+
+# Strong non-linear
+x4 = np.linspace(-3, 3, 100)
+y4 = 2*np.sin(x4 - 1.5) + np.random.randn(100) * 0.3
+corr4 = plot_and_get_corr(axs[1, 1], x4, y4, title = "strong non-linear")
+
+plt.show()
+
+
+
+
+

+
+
+
+
+
+
+
+

10.2.2 Alternate Form

+

When the variables \(y\) and \(x\) are measured in standard units, the regression line for predicting \(y\) based on \(x\) has slope \(r\) and passes through the origin.

+

\[\hat{y}_{su} = r \cdot x_{su}\]

+

+
    +
  • In the original units, this becomes
  • +
+

\[\frac{\hat{y} - \bar{y}}{\sigma_y} = r \cdot \frac{x - \bar{x}}{\sigma_x}\]

+

+
+
+

10.2.3 Derivation

+

Starting from the top, we have our claimed form of the regression line, and we want to show that it is equivalent to the optimal linear regression line: \(\hat{y} = \hat{a} + \hat{b}x\).

+

Recall:

+
    +
  • \(\hat{b} = r \cdot \frac{\text{Standard Deviation of }y}{\text{Standard Deviation of }x}\)
  • +
  • \(\hat{a} = \text{average of }y - \text{slope}\cdot\text{average of }x\)
  • +
+
+
+
+ +
+
+

Proof:

+

\[\frac{\hat{y} - \bar{y}}{\sigma_y} = r \cdot \frac{x - \bar{x}}{\sigma_x}\]

+

Multiply by \(\sigma_y\), and add \(\bar{y}\) on both sides.

+

\[\hat{y} = \sigma_y \cdot r \cdot \frac{x - \bar{x}}{\sigma_x} + \bar{y}\]

+

Distribute the coefficient \(\sigma_{y}\cdot r\) to the \(\frac{x - \bar{x}}{\sigma_x}\) term.

+

\[\hat{y} = (\frac{r\sigma_y}{\sigma_x} ) \cdot x + (\bar{y} - (\frac{r\sigma_y}{\sigma_x} ) \bar{x})\]

+

We now see that we have a line that matches our claim:

+
    +
  • slope: \(r\cdot\frac{\text{SD of y}}{\text{SD of x}} = r\cdot\frac{\sigma_y}{\sigma_x}\)
  • +
  • intercept: \(\bar{y} - \text{slope}\cdot \bar{x}\)
  • +
+

Note that the error for the i-th datapoint is: \(e_i = y_i - \hat{y_i}\)

+
+
+
+
+
+
+

10.3 The Modeling Process

+

At a high level, a model is a way of representing a system. In Data 100, we’ll treat a model as some mathematical rule we use to describe the relationship between variables.

+

What variables are we modeling? Typically, we use a subset of the variables in our sample of collected data to model another variable in this data. To put this more formally, say we have the following dataset \(\mathcal{D}\):

+

\[\mathcal{D} = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}\]

+

Each pair of values \((x_i, y_i)\) represents a datapoint. In a modeling setting, we call these observations. \(y_i\) is the dependent variable we are trying to model, also called an output or response. \(x_i\) is the independent variable inputted into the model to make predictions, also known as a feature.

+

Our goal in modeling is to use the observed data \(\mathcal{D}\) to predict the output variable \(y_i\). We denote each prediction as \(\hat{y}_i\) (read: “y hat sub i”).

+

How do we generate these predictions? Some examples of models we’ll encounter in the next few lectures are given below:

+

\[\hat{y}_i = \theta\] \[\hat{y}_i = \theta_0 + \theta_1 x_i\]

+

The examples above are known as parametric models. They relate the collected data, \(x_i\), to the prediction we make, \(\hat{y}_i\). A few parameters (\(\theta\), \(\theta_0\), \(\theta_1\)) are used to describe the relationship between \(x_i\) and \(\hat{y}_i\).

+

Notice that we don’t immediately know the values of these parameters. While the features, \(x_i\), are taken from our observed data, we need to decide what values to give \(\theta\), \(\theta_0\), and \(\theta_1\) ourselves. This is the heart of parametric modeling: what parameter values should we choose so our model makes the best possible predictions?

+

To choose our model parameters, we’ll work through the modeling process.

+
    +
  1. Choose a model: how should we represent the world?
  2. +
  3. Choose a loss function: how do we quantify prediction error?
  4. +
  5. Fit the model: how do we choose the best parameters of our model given our data?
  6. +
  7. Evaluate model performance: how do we evaluate whether this process gave rise to a good model?
  8. +
+
+
+

10.4 Choosing a Model

+

Our first step is choosing a model: defining the mathematical rule that describes the relationship between the features, \(x_i\), and predictions \(\hat{y}_i\).

+

In Data 8, you learned about the Simple Linear Regression (SLR) model. You learned that the model takes the form: \[\hat{y}_i = a + bx_i\]

+

In Data 100, we’ll use slightly different notation: we will replace \(a\) with \(\theta_0\) and \(b\) with \(\theta_1\). This will allow us to use the same notation when we explore more complex models later on in the course.

+

\[\hat{y}_i = \theta_0 + \theta_1 x_i\]

+

The parameters of the SLR model are \(\theta_0\), also called the intercept term, and \(\theta_1\), also called the slope term. To create an effective model, we want to choose values for \(\theta_0\) and \(\theta_1\) that most accurately predict the output variable. The “best” fitting model parameters are given the special names: \(\hat{\theta}_0\) and \(\hat{\theta}_1\); they are the specific parameter values that allow our model to generate the best possible predictions.

+

In Data 8, you learned that the best SLR model parameters are: \[\hat{\theta}_0 = \bar{y} - \hat{\theta}_1\bar{x} \qquad \qquad \hat{\theta}_1 = r \frac{\sigma_y}{\sigma_x}\]

+

A quick reminder on notation:

+
    +
  • \(\bar{y}\) and \(\bar{x}\) indicate the mean value of \(y\) and \(x\), respectively
  • +
  • \(\sigma_y\) and \(\sigma_x\) indicate the standard deviations of \(y\) and \(x\)
  • +
  • \(r\) is the correlation coefficient, defined as the average of the product of \(x\) and \(y\) measured in standard units: \(\frac{1}{n} \sum_{i=1}^n (\frac{x_i-\bar{x}}{\sigma_x})(\frac{y_i-\bar{y}}{\sigma_y})\)
  • +
+
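As a small sketch of how these formulas could be applied in code (the x and y arrays, true intercept and slope, and seed below are hypothetical, chosen only for illustration):

Code
import numpy as np

np.random.seed(0)  # arbitrary seed for this sketch
x = np.random.randn(100)
y = 2 + 3 * x + np.random.randn(100)  # hypothetical data with true intercept 2, slope 3

r = np.corrcoef(x, y)[0, 1]
theta1_hat = r * y.std() / x.std()              # slope: r * (SD of y) / (SD of x)
theta0_hat = y.mean() - theta1_hat * x.mean()   # intercept: mean of y - slope * mean of x

print(theta0_hat, theta1_hat)  # roughly 2 and 3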

In Data 100, we want to understand how to derive these best model coefficients. To do so, we’ll introduce the concept of a loss function.

+
+
+

10.5 Choosing a Loss Function

+

We’ve talked about the idea of creating the “best” possible predictions. This raises the question: how do we decide how “good” or “bad” our model’s predictions are?

+

A loss function characterizes the cost, error, or fit resulting from a particular choice of model or model parameters. This function, \(L(y, \hat{y})\), quantifies how “bad” or “far off” a single prediction by our model is from a true, observed value in our collected data.

+

The choice of loss function for a particular model will affect the accuracy and computational cost of estimation, and it’ll also depend on the estimation task at hand. For example,

+
    +
  • Are outputs quantitative or qualitative?
  • +
  • Do outliers matter?
  • +
  • Are all errors equally costly? (e.g., a false negative on a cancer test is arguably more dangerous than a false positive)
  • +
+

Regardless of the specific function used, a loss function should follow two basic principles:

+
    +
  • If the prediction \(\hat{y}_i\) is close to the actual value \(y_i\), loss should be low.
  • +
  • If the prediction \(\hat{y}_i\) is far from the actual value \(y_i\), loss should be high.
  • +
+

Two common choices of loss function are squared loss and absolute loss.

+

Squared loss, also known as L2 loss, computes loss as the square of the difference between the observed \(y_i\) and predicted \(\hat{y}_i\): \[L(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2\]

+

Absolute loss, also known as L1 loss, computes loss as the absolute difference between the observed \(y_i\) and predicted \(\hat{y}_i\): \[L(y_i, \hat{y}_i) = |y_i - \hat{y}_i|\]

+

L1 and L2 loss give us a tool for quantifying our model’s performance on a single data point. This is a good start, but ideally, we want to understand how our model performs across our entire dataset. A natural way to do this is to compute the average loss across all data points in the dataset. This is known as the cost function, \(\hat{R}(\theta)\): \[\hat{R}(\theta) = \frac{1}{n} \sum^n_{i=1} L(y_i, \hat{y}_i)\]

+

The cost function has many names in the statistics literature. You may also encounter the terms:

+
    +
  • Empirical risk (this is why we give the cost function the name \(R\))
  • +
  • Error function
  • +
  • Average loss
  • +
+

We can substitute our L1 and L2 loss into the cost function definition. The Mean Squared Error (MSE) is the average squared loss across a dataset: \[\text{MSE} = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2\]

+

The Mean Absolute Error (MAE) is the average absolute loss across a dataset: \[\text{MAE}= \frac{1}{n} \sum_{i=1}^n |y_i - \hat{y}_i|\]
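For instance, a minimal sketch computing both metrics with NumPy (the observed values and predictions below are hypothetical, made up for illustration):

Code
import numpy as np

y = np.array([3.0, 5.0, 7.0, 9.0])      # hypothetical observed values
y_hat = np.array([2.5, 5.5, 6.0, 9.5])  # hypothetical predictions

mse = np.mean((y - y_hat) ** 2)         # average squared (L2) loss
mae = np.mean(np.abs(y - y_hat))        # average absolute (L1) loss

print(mse, mae)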

+
+
+

10.6 Fitting the Model

+

Now that we’ve established the concept of a loss function, we can return to our original goal of choosing model parameters. Specifically, we want to choose the best set of model parameters that will minimize the model’s cost on our dataset. This process is called fitting the model.

+

We know from calculus that a differentiable function attains a (local) minimum at a point where (1) its first derivative is equal to zero and (2) its second derivative is positive. We often call the function being minimized the objective function (our objective is to find its minimum).

+

To find the optimal model parameter, we:

+
    +
  1. Take the derivative of the cost function with respect to that parameter
  2. +
  3. Set the derivative equal to 0
  4. +
  5. Solve for the parameter
  6. +
+

We repeat this process for each parameter present in the model. For now, we’ll disregard the second derivative condition.

+

To help us make sense of this process, let’s put it into action by deriving the optimal model parameters for simple linear regression using the mean squared error as our cost function. Remember: although the notation may look tricky, all we are doing is following the three steps above!

+

Step 1: take the derivative of the cost function with respect to each model parameter. We substitute the SLR model, \(\hat{y}_i = \theta_0+\theta_1 x_i\), into the definition of MSE above and differentiate with respect to \(\theta_0\) and \(\theta_1\). \[\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \frac{1}{n} \sum_{i=1}^{n} (y_i - \theta_0 - \theta_1 x_i)^2\]

+

\[\frac{\partial}{\partial \theta_0} \text{MSE} = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \theta_0 - \theta_1 x_i)\]

+

\[\frac{\partial}{\partial \theta_1} \text{MSE} = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \theta_0 - \theta_1 x_i)x_i\]

+

Let’s walk through these derivations in more depth, starting with the derivative of MSE with respect to \(\theta_0\).

+

Given our MSE above, we know that: \[\frac{\partial}{\partial \theta_0} \text{MSE} = \frac{\partial}{\partial \theta_0} \frac{1}{n} \sum_{i=1}^{n} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

+

Noting that the derivative of a sum is equivalent to the sum of the derivatives, this then becomes: \[ = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta_0} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

+

We can then apply the chain rule.

+

\[ = \frac{1}{n} \sum_{i=1}^{n} 2 \cdot (y_i - \theta_0 - \theta_1 x_i) \cdot (-1)\]

+

Finally, we can simplify the constants, leaving us with our answer.

+

\[\frac{\partial}{\partial \theta_0} \text{MSE} = \frac{-2}{n} \sum_{i=1}^{n}{(y_i - \theta_0 - \theta_1 x_i)}\]

+

Following the same procedure, we can take the derivative of MSE with respect to \(\theta_1\).

+

\[\frac{\partial}{\partial \theta_1} \text{MSE} = \frac{\partial}{\partial \theta_1} \frac{1}{n} \sum_{i=1}^{n} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

+

\[ = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta_1} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

+

\[ = \frac{1}{n} \sum_{i=1}^{n} 2 \cdot (y_i - \theta_0 - \theta_1 x_i) \cdot (-x_i)\]

+

\[= \frac{-2}{n} \sum_{i=1}^{n} {(y_i - \theta_0 - \theta_1 x_i)}x_i\]

+

Step 2: set the derivatives equal to 0. After simplifying terms, this produces two estimating equations. The best set of model parameters \((\hat{\theta}_0, \hat{\theta}_1)\) must satisfy these two optimality conditions. \[0 = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \hat{\theta}_0 - \hat{\theta}_1 x_i) \Longleftrightarrow \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i) = 0\] \[0 = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \hat{\theta}_0 - \hat{\theta}_1 x_i)x_i \Longleftrightarrow \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)x_i = 0\]

+

Step 3: solve the estimating equations to compute estimates for \(\hat{\theta}_0\) and \(\hat{\theta}_1\).

+

Taking the first equation gives the estimate of \(\hat{\theta}_0\): \[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{\theta}_0 - \hat{\theta}_1 x_i) = 0 \]

+

\[\left(\frac{1}{n} \sum_{i=1}^n y_i \right) - \hat{\theta}_0 - \hat{\theta}_1\left(\frac{1}{n} \sum_{i=1}^n x_i \right) = 0\]

+

\[ \hat{\theta}_0 = \bar{y} - \hat{\theta}_1 \bar{x}\]

+

With a bit more maneuvering, the second equation gives the estimate of \(\hat{\theta}_1\). Start by multiplying the first estimating equation by \(\bar{x}\), then subtracting the result from the second estimating equation.

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)x_i - \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)\bar{x} = 0 \]

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)(x_i - \bar{x}) = 0 \]

+

Next, plug in \(\hat{y}_i = \hat{\theta}_0 + \hat{\theta}_1 x_i = \bar{y} + \hat{\theta}_1(x_i - \bar{x})\):

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - \bar{y} - \hat{\theta}_1(x_i - \bar{x}))(x_i - \bar{x}) = 0 \]

+

\[\frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x}) = \hat{\theta}_1 \times \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2\]

+

By using the definition of correlation \(\left(r = \frac{1}{n} \sum_{i=1}^n (\frac{x_i-\bar{x}}{\sigma_x})(\frac{y_i-\bar{y}}{\sigma_y}) \right)\) and standard deviation \(\left(\sigma_x = \sqrt{\frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2} \right)\), we can conclude: \[r \sigma_x \sigma_y = \hat{\theta}_1 \times \sigma_x^2\] \[\hat{\theta}_1 = r \frac{\sigma_y}{\sigma_x}\]

+

Just as was given in Data 8!

+

Remember, this derivation found the optimal model parameters for SLR when using the MSE cost function. If we had used a different model or different loss function, we likely would have found different values for the best model parameters. However, regardless of the model and loss used, we can always follow these three steps to fit the model.
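As a quick sanity check of the derivation, here is a small sketch (with hypothetical data and seed, made up for illustration) verifying that the closed-form estimates satisfy both estimating equations from Step 2.

Code
import numpy as np

np.random.seed(0)  # arbitrary seed for this sketch
x = np.random.randn(100)
y = -1 + 0.5 * x + np.random.randn(100) * 0.3  # hypothetical data

# closed-form SLR estimates from the derivation above
r = np.corrcoef(x, y)[0, 1]
theta1_hat = r * y.std() / x.std()
theta0_hat = y.mean() - theta1_hat * x.mean()

residuals = y - (theta0_hat + theta1_hat * x)

# both estimating equations hold (up to floating-point error)
print(np.mean(residuals))      # approximately 0
print(np.mean(residuals * x))  # approximately 0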

+ + + + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/intro_to_modeling/intro_to_modeling_files/figure-html/cell-2-output-1.png b/docs/intro_to_modeling/intro_to_modeling_files/figure-html/cell-2-output-1.png new file mode 100644 index 000000000..b6767dada Binary files /dev/null and b/docs/intro_to_modeling/intro_to_modeling_files/figure-html/cell-2-output-1.png differ diff --git a/docs/intro_to_modeling/intro_to_modeling_files/figure-html/cell-3-output-1.png b/docs/intro_to_modeling/intro_to_modeling_files/figure-html/cell-3-output-1.png new file mode 100644 index 000000000..77b58e065 Binary files /dev/null and b/docs/intro_to_modeling/intro_to_modeling_files/figure-html/cell-3-output-1.png differ diff --git a/docs/logistic_regression_1/images/class.png b/docs/logistic_regression_1/images/class.png new file mode 100644 index 000000000..789cb6cee Binary files /dev/null and b/docs/logistic_regression_1/images/class.png differ diff --git a/docs/logistic_regression_1/images/global_local_min.png b/docs/logistic_regression_1/images/global_local_min.png new file mode 100644 index 000000000..60ed16231 Binary files /dev/null and b/docs/logistic_regression_1/images/global_local_min.png differ diff --git a/docs/logistic_regression_1/images/log_reg.png b/docs/logistic_regression_1/images/log_reg.png new file mode 100644 index 000000000..5b2696358 Binary files /dev/null and b/docs/logistic_regression_1/images/log_reg.png differ diff --git a/docs/logistic_regression_1/images/reg.png b/docs/logistic_regression_1/images/reg.png new file mode 100644 index 000000000..b243065b2 Binary files /dev/null and b/docs/logistic_regression_1/images/reg.png differ diff --git a/docs/logistic_regression_1/images/squared_loss.png b/docs/logistic_regression_1/images/squared_loss.png new file mode 100644 index 000000000..2f3fc075b Binary files /dev/null and b/docs/logistic_regression_1/images/squared_loss.png differ diff --git a/docs/logistic_regression_1/images/y=0.png b/docs/logistic_regression_1/images/y=0.png new file mode 100644 index 000000000..3671d0062 Binary files /dev/null and b/docs/logistic_regression_1/images/y=0.png differ diff --git a/docs/logistic_regression_1/images/y=1.png b/docs/logistic_regression_1/images/y=1.png new file mode 100644 index 000000000..c883d2fbe Binary files /dev/null and b/docs/logistic_regression_1/images/y=1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1.html b/docs/logistic_regression_1/logistic_reg_1.html new file mode 100644 index 000000000..c56b793d6 --- /dev/null +++ b/docs/logistic_regression_1/logistic_reg_1.html @@ -0,0 +1,1566 @@ + + + + + + + + + +22  Logistic Regression I – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

22  Logistic Regression I

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Understand the difference between regression and classification
  • +
  • Derive the logistic regression model for classifying data
  • +
  • Quantify the error of our logistic regression model with cross-entropy loss
  • +
+
+
+
+

Up until this point in the class, we’ve focused on regression tasks; that is, predicting an unbounded numerical quantity from a given dataset. We discussed optimization, feature engineering, and regularization, all in the context of performing regression to predict some quantity.

+

Now that we have this deep understanding of the modeling process, let’s expand our knowledge of possible modeling tasks.

+
+

22.1 Classification

+

In the next two lectures, we’ll tackle the task of classification. A classification problem aims to classify data into categories. Unlike in regression, where we predicted a numeric output, classification involves predicting some categorical variable, or response, \(y\). Examples of classification tasks include:

+
    +
  • Predicting which team won from its turnover percentage
  • +
  • Predicting the day of the week of a meal from the total restaurant bill
  • +
  • Predicting the model of car from its horsepower
  • +
+

There are a couple of different types of classification:

+
    +
  • Binary classification: classify data into two classes, and responses \(y\) are either 0 or 1
  • +
  • Multiclass classification: classify data into multiple classes (e.g., image labeling, next word in a sentence, etc.)
  • +
+

We can further combine multiple related classification predictions (e.g., translation, voice recognition, etc.) to tackle complex problems through structured prediction tasks.

+

In Data 100, we will mostly deal with binary classification, where we are attempting to classify data into one of two classes.

+
+

22.1.1 Modeling Process

+

To build a classification model, we need to modify our modeling workflow slightly. Recall that in regression we:

+
    +
  1. Created a design matrix of numeric features
  2. +
  3. Defined our model as a linear combination of these numeric features
  4. +
  5. Used the model to output numeric predictions
  6. +
+

In classification, however, we no longer want to output numeric predictions; instead, we want to predict the class to which a datapoint belongs. This means that we need to update our workflow. To build a classification model, we will:

+
    +
  1. Create a design matrix of numeric features.
  2. +
  3. Define our model as a linear combination of these numeric features, transformed by a non-linear sigmoid function. This outputs a numeric quantity.
  4. +
  5. Apply a decision rule to interpret the outputted quantity and decide a classification.
  6. +
  7. Output a predicted class.
  8. +
+

There are two key differences: as we’ll soon see, we need to incorporate a non-linear transformation to capture the non-linear relationships hidden in our data. We do so by applying the sigmoid function to a linear combination of the features. Secondly, we must apply a decision rule to convert the numeric quantities computed by our model into an actual class prediction. This can be as simple as saying that any datapoint with a feature greater than some number \(x\) belongs to Class 1.

+

Regression:

+
+reg +
+

Classification:

+
+class +
+

This was a very high-level overview. Let’s walk through the process in detail to clarify what we mean.

+
+
+
+

22.2 Deriving the Logistic Regression Model

+

Throughout this lecture, we will work with the games dataset, which contains information about games played in the NBA basketball league. Our goal will be to use a basketball team’s "GOAL_DIFF" to predict whether or not a given team won their game ("WON"). If a team wins their game, we’ll say they belong to Class 1. If they lose, they belong to Class 0.

+

For those who are curious, "GOAL_DIFF" represents the difference in successful field goal percentages between the two competing teams.

+
+
+Code +
import warnings
+warnings.filterwarnings("ignore")
+
+import pandas as pd
+import numpy as np
+np.seterr(divide='ignore')
+
+games = pd.read_csv("data/games").dropna()
+games.head()
+
+
+
    GAME_ID          TEAM_NAME      MATCHUP  WON  GOAL_DIFF  AST
0  21701216   Dallas Mavericks  DAL vs. PHX    0     -0.251   20
1  21700846       Phoenix Suns    PHX @ GSW    0     -0.237   13
2  21700071  San Antonio Spurs    SAS @ ORL    0     -0.234   19
3  21700221    New York Knicks    NYK @ TOR    0     -0.234   17
4  21700306         Miami Heat    MIA @ NYK    0     -0.222   21
+
+
+

Let’s visualize the relationship between "GOAL_DIFF" and "WON" using the Seaborn function sns.stripplot. A strip plot automatically introduces a small amount of random noise to jitter the data. Recall that all values in the "WON" column are either 1 (won) or 0 (lost) – if we were to directly plot them without jittering, we would see severe overplotting.

+
+
+Code +
import seaborn as sns
+import matplotlib.pyplot as plt
+
+sns.stripplot(data=games, x="GOAL_DIFF", y="WON", orient="h", hue='WON', alpha=0.7)
+# By default, sns.stripplot plots 0, then 1. We invert the y axis to reverse this behavior
+plt.gca().invert_yaxis();
+
+
+
+
+

+
+
+
+
+

This dataset is unlike anything we’ve seen before – our target variable contains only two unique values! (Remember that each y value is either 0 or 1; the plot above jitters the y data slightly for ease of reading.)

+

The regression models we have worked with always assumed that we were attempting to predict a continuous target. If we apply a linear regression model to this dataset, something strange happens.

+
+
+Code +
import sklearn.linear_model as lm
+
+X, Y = games[["GOAL_DIFF"]], games["WON"]
+regression_model = lm.LinearRegression()
+regression_model.fit(X, Y)
+
+plt.plot(X.squeeze(), regression_model.predict(X), "k")
+sns.stripplot(data=games, x="GOAL_DIFF", y="WON", orient="h", hue='WON', alpha=0.7)
+plt.gca().invert_yaxis();
+
+
+
+
+

+
+
+
+
+

The linear regression fit follows the data as closely as it can. However, this approach has a key flaw - the predicted output, \(\hat{y}\), can be outside the range of possible classes (there are predictions above 1 and below 0). This means that the output can’t always be interpreted (what does it mean to predict a class of -2.3?).

+

Our usual linear regression framework won’t work here. Instead, we’ll need to get more creative.

+
+

22.2.1 Graph of Averages

+

Back in Data 8, you gradually built up to the concept of linear regression by using the graph of averages. Before you knew the mathematical underpinnings of the regression line, you took a more intuitive approach: you bucketed the \(x\) data into bins of common values, then computed the average \(y\) for all datapoints in the same bin. The result gave you the insight needed to derive the regression fit.

+

Let’s take the same approach as we grapple with our new classification task. In the cell below, we 1) bucket the "GOAL_DIFF" data into bins of similar values and 2) compute the average "WON" value of all datapoints in a bin.

+
+
# bucket the GOAL_DIFF data into 20 bins
+bins = pd.cut(games["GOAL_DIFF"], 20)
+games["bin"] = [(b.left + b.right) / 2 for b in bins]
+win_rates_by_bin = games.groupby("bin")["WON"].mean()
+
+# plot the graph of averages
+sns.stripplot(data=games, x="GOAL_DIFF", y="WON", orient="h", alpha=0.5, hue='WON') # alpha makes the points transparent
+plt.plot(win_rates_by_bin.index, win_rates_by_bin, c="tab:red")
+plt.gca().invert_yaxis();
+
+
+
+

+
+
+
+
+

Interesting: our result is certainly not like the straight line produced by finding the graph of averages for a linear relationship. We can make two observations:

+
    +
  • All predictions on our line are between 0 and 1
  • +
  • The predictions are non-linear, following a rough “S” shape
  • +
+

Let’s think more about what we’ve just done.

+

To find the average \(y\) value for each bin, we computed:

+

\[\frac{1 \cdot \text{(\# Y = 1 in bin)} + 0 \cdot \text{(\# Y = 0 in bin)}}{\text{\# datapoints in bin}} = \frac{\text{\# Y = 1 in bin}}{\text{\# datapoints in bin}} = P(\text{Y = 1} | \text{bin})\]

+

This is simply the probability of a datapoint in that bin belonging to Class 1! This aligns with our observation from earlier: all of our predictions lie between 0 and 1, just as we would expect for a probability.

+

Our graph of averages was really modeling the probability, \(p\), that a datapoint belongs to Class 1, or essentially that \(\text{Y = 1}\) for a particular value of \(\text{x}\).

+

\[ p = P(Y = 1 | \text{ x} )\]

+

In logistic regression, we have a new modeling goal. We want to model the probability that a particular datapoint belongs to Class 1 by approximating the S-shaped curve we plotted above. However, we’ve only learned about linear modeling techniques like Linear Regression and OLS.

+
+
+

22.2.2 Handling Non-Linear Output

+

Fortunately for us, we’re already well-versed with a technique to model non-linear relationships – we can apply non-linear transformations like log or exponents to make a non-linear relationship more linear. Recall the steps we’ve applied previously:

+
    +
  • Transform the variables until we linearize their relationship
  • +
  • Fit a linear model to the transformed variables
  • +
  • “Undo” our transformations to identify the underlying relationship between the original variables
  • +
+

In past examples, we used the bulge diagram to help us decide what transformations may be useful. The S-shaped curve we saw above, however, looks nothing like any relationship we’ve seen in the past. We’ll need to think carefully about what transformations will linearize this curve.

+
+

22.2.2.1 1. Odds

+

Let’s consider our eventual goal: determining if we should predict a Class of 0 or 1 for each datapoint. Rephrased, we want to decide if it seems more “likely” that the datapoint belongs to Class 0 or to Class 1. One way of deciding this is to see which class has the higher predicted probability for a given datapoint. The odds is defined as the probability of a datapoint belonging to Class 1 divided by the probability of it belonging to Class 0.

+

\[\text{odds} = \frac{P(Y=1|x)}{P(Y=0|x)} = \frac{p}{1-p}\]

+

If we plot the odds for each input "GOAL_DIFF" (\(x\)), we see something that looks more promising.

+
+
+Code +
p = win_rates_by_bin
+odds = p/(1-p) 
+
+plt.plot(odds.index, odds)
+plt.xlabel("x")
+plt.ylabel(r"Odds $= \frac{p}{1-p}$");
+
+
+
+
+

+
+
+
+
+
+
+

22.2.2.2 2. Log

+

It turns out that the relationship between our input "GOAL_DIFF" and the odds is roughly exponential! Let’s linearize the exponential by taking the logarithm (as suggested by the Tukey-Mosteller Bulge Diagram). As a reminder, you should assume that any logarithm in Data 100 is the base \(e\) natural logarithm unless told otherwise.

+
+
+Code +
import numpy as np
+log_odds = np.log(odds)
+plt.plot(odds.index, log_odds, c="tab:green")
+plt.xlabel("x")
+plt.ylabel(r"Log-Odds $= \log{\frac{p}{1-p}}$");
+
+
+
+
+

+
+
+
+
+
+
+

22.2.2.3 3. Putting it Together

+

We see something promising – the relationship between the log-odds and our input feature is approximately linear. This means that we can use a linear model to describe the relationship between the log-odds and \(x\). In other words:

+

\[\begin{align} +\log{(\frac{p}{1-p})} &= \theta_0 + \theta_1 x_1 + ... + \theta_p x_p\\ +&= x^{\top} \theta +\end{align}\]

+

Here, we use \(x^{\top}\) to represent an observation in our dataset, stored as a row vector. You can imagine it as a single row in our design matrix. \(x^{\top} \theta\) indicates a linear combination of the features for this observation (just as we used in multiple linear regression).

+

We’re in good shape! We have now derived the following relationship:

+

\[\log{(\frac{p}{1-p})} = x^{\top} \theta\]

+

Remember that our goal is to predict the probability of a datapoint belonging to Class 1, \(p\). Let’s rearrange this relationship to uncover the original relationship between \(p\) and our input data, \(x^{\top}\).

+

\[\begin{align} +\log{(\frac{p}{1-p})} &= x^T \theta\\ +\frac{p}{1-p} &= e^{x^T \theta}\\ +p &= (1-p)e^{x^T \theta}\\ +p &= e^{x^T \theta}- p e^{x^T \theta}\\ +p(1 + e^{x^T \theta}) &= e^{x^T \theta} \\ +p &= \frac{e^{x^T \theta}}{1+e^{x^T \theta}}\\ +p &= \frac{1}{1+e^{-x^T \theta}}\\ +\end{align}\]

+

Phew, that was a lot of algebra. What we’ve uncovered is the logistic regression model to predict the probability of a datapoint \(x^{\top}\) belonging to Class 1. If we plot this relationship for our data, we see the S-shaped curve from earlier!

+
+
+Code +
# We'll discuss the `LogisticRegression` class next time
+xs = np.linspace(-0.3, 0.3)
+
+logistic_model = lm.LogisticRegression(C=20)
+logistic_model.fit(X, Y)
+predicted_prob = logistic_model.predict_proba(xs[:, np.newaxis])[:, 1]
+
+sns.stripplot(data=games, x="GOAL_DIFF", y="WON", orient="h", alpha=0.5)
+plt.plot(xs, predicted_prob, c="k", lw=3, label="Logistic regression model")
+plt.plot(win_rates_by_bin.index, win_rates_by_bin, lw=2, c="tab:red", label="Graph of averages")
+plt.legend(loc="upper left")
+plt.gca().invert_yaxis();
+
+
+
+
+

+
+
+
+
+

The S-shaped curve is formally known as the sigmoid function and is typically denoted by \(\sigma\).

+

\[\sigma(t) = \frac{1}{1+e^{-t}}\]

+
+
+
+ +
+
+Properties of the Sigmoid +
+
+
+
    +
  • Reflection/Symmetry: \[1-\sigma(t) = \frac{e^{-t}}{1+e^{-t}}=\sigma(-t)\]
  • +
  • Inverse: \[t=\sigma^{-1}(p)=\log{(\frac{p}{1-p})}\]
  • +
  • Derivative: \[\frac{d}{dt} \sigma(t) = \sigma(t) (1-\sigma(t))=\sigma(t)\sigma(-t)\]
  • +
  • Domain: \(-\infty < t < \infty\)
  • +
  • Range: \(0 < \sigma(t) < 1\)
  • +
+
+
+

In the context of our modeling process, the sigmoid is considered an activation function. It takes in a linear combination of the features and applies a non-linear transformation.
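To make these properties concrete, here is a small numerical sketch (our own check, not part of the lecture code) that spot-checks the reflection and derivative identities with numpy:

import numpy as np

def sigmoid(t):
    # sigma(t) = 1 / (1 + e^{-t})
    return 1 / (1 + np.exp(-t))

t = np.array([-3.0, -1.0, 0.0, 2.0])

# Reflection: 1 - sigma(t) == sigma(-t)
print(np.allclose(1 - sigmoid(t), sigmoid(-t)))                   # True

# Derivative: d/dt sigma(t) == sigma(t)(1 - sigma(t)), checked with a central difference
h = 1e-6
numeric_grad = (sigmoid(t + h) - sigmoid(t - h)) / (2 * h)
print(np.allclose(numeric_grad, sigmoid(t) * (1 - sigmoid(t))))   # True

# Range: outputs stay strictly between 0 and 1
print(sigmoid(np.array([-100.0, 100.0])))                         # close to 0 and close to 1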

+
+
+
+
+

22.3 The Logistic Regression Model

+

To predict a probability using the logistic regression model, we:

+
    +
  1. Compute a linear combination of the features, \(x^{\top}\theta\)
  2. +
  3. Apply the sigmoid activation function, \(\sigma(x^{\top} \theta)\).
  4. +
+

Our predicted probabilities are of the form \(P(Y=1|x) = p = \frac{1}{1+e^{-x^T \theta}} = \frac{1}{1+e^{-(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \ldots + \theta_p x_p)}}\)

+

An important note: despite its name, logistic regression is used for classification tasks, not regression tasks. In Data 100, we always apply logistic regression with the goal of classifying data.

+

Let’s summarize our logistic regression modeling workflow:

+
+log_reg +
+

Our main takeaways from this section are:

+
    +
  • Assume log-odds is a linear combination of \(x\) and \(\theta\)
  • +
  • Fit the “S” curve as best as possible
  • +
  • The curve models the probability: \(P(Y=1 | x)\)
  • +
+

Putting this together, we know that the estimated probability that the response is 1 given the features \(x\) is equal to the logistic function \(\sigma()\) at the value \(x^{\top}\theta\):

+

\[\begin{align} +\hat{P}_{\theta}(Y = 1 | x) = \frac{1}{1 + e^{-x^{\top}\theta}} +\end{align}\]

+

More commonly, the logistic regression model is written as:

+

\[\begin{align} +\hat{P}_{\theta}(Y = 1 | x) = \sigma(x^{\top}\theta) +\end{align}\]

+
+
+
+ +
+
+Properties of the Logistic Model +
+
+
+

Consider a logistic regression model with one feature and an intercept term:

+

\[\begin{align} +p = P(Y = 1 | x) = \frac{1}{1+e^{-(\theta_0 + \theta_1 x)}} +\end{align}\]

+

Properties:

+
    +
  • \(\theta_0\) controls the position of the curve along the horizontal axis
  • +
  • The magnitude of \(\theta_1\) controls the “steepness” of the sigmoid
  • +
  • The sign of \(\theta_1\) controls the orientation of the curve
  • +
+
+
+
+
+
+ +
+
+Example Calculation +
+
+
+

Suppose we want to predict the probability that a team wins a game, given "GOAL_DIFF" (first feature) and the number of free throws (second feature). Let’s say we fit a logistic regression model (with no intercept) using the training data and estimate the optimal parameters. Now we want to predict the probability that a new team will win their game.

+

\[\begin{align} +\hat{\theta}^{\top} &= \begin{bmatrix}0.1 & -0.5\end{bmatrix} \\ +x^{\top} &= \begin{bmatrix}15 & 1\end{bmatrix} +\end{align}\]

+

\[\begin{align} +\hat{P}_{\hat{\theta}}(Y = 1 | x) = \sigma(x^{\top}\hat{\theta}) = \sigma(0.1 \cdot 15 + (-0.5) \cdot 1) = \sigma(1) = \frac{1}{1+e^{-1}} \approx 0.7311 +\end{align}\]

+

We see that the response is more likely to be 1 than 0, indicating that a reasonable prediction is \(\hat{y} = 1\). We’ll dive deeper into this in the next lecture.
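As a quick check of the arithmetic above (using only numpy and the assumed parameter and feature values from this example):

import numpy as np

theta_hat = np.array([0.1, -0.5])   # estimated parameters (no intercept)
x = np.array([15, 1])               # GOAL_DIFF = 15, free throws = 1

t = x @ theta_hat                   # x^T theta = 0.1*15 + (-0.5)*1 = 1.0
p_hat = 1 / (1 + np.exp(-t))        # apply the sigmoid
print(p_hat)                        # approximately 0.7311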

+
+
+
+
+

22.4 Cross-Entropy Loss

+

To quantify the error of our logistic regression model, we’ll need to define a new loss function.

+
+

22.4.1 Why Not MSE?

+

You may wonder: why not use our familiar mean squared error? It turns out that the MSE is not well suited for logistic regression. To see why, let’s consider a simple, artificially generated toy dataset with just one feature (this will be easier to work with than the more complicated games data).

+
+
+Code +
toy_df = pd.DataFrame({
+        "x": [-4, -2, -0.5, 1, 3, 5],
+        "y": [0, 0, 1, 0, 1, 1]})
+toy_df.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
   x     y
0  -4.0  0
1  -2.0  0
2  -0.5  1
3   1.0  0
4   3.0  1
+ +
+
+
+

We’ll construct a basic logistic regression model with only one feature and no intercept term. Our predicted probabilities take the form:

+

\[p=P(Y=1|x)=\frac{1}{1+e^{-\theta_1 x}}\]

+

In the cell below, we plot the MSE for our model on the data.

+
+
+Code +
def sigmoid(z):
+    return 1/(1+np.e**(-z))
+    
+def mse_on_toy_data(theta):
+    p_hat = sigmoid(toy_df['x'] * theta)
+    return np.mean((toy_df['y'] - p_hat)**2)
+
+thetas = np.linspace(-15, 5, 100)
+plt.plot(thetas, [mse_on_toy_data(theta) for theta in thetas])
+plt.title("MSE on toy classification data")
+plt.xlabel(r'$\theta_1$')
+plt.ylabel('MSE');
+
+
+
+
+

+
+
+
+
+

This looks nothing like the parabola we found when plotting the MSE of a linear regression model! In particular, we can identify two flaws with using the MSE for logistic regression:

+
    +
  1. The MSE loss surface is non-convex. There is both a global minimum and a (barely perceptible) local minimum in the loss surface above. This means that there is the risk of gradient descent converging on the local minimum of the loss surface, missing the true optimum parameter \(\theta_1\). +
    +reg +
  2. +
  3. Squared loss is bounded for a classification task. Recall that each true \(y\) has a value of either 0 or 1. This means that even if our model makes the worst possible prediction (e.g. predicting \(p=0\) for \(y=1\)), the squared loss for an observation will be no greater than 1: \[(y-p)^2=(1-0)^2=1\] The MSE does not strongly penalize poor predictions. +
    +reg +
  4. +
+
+
+

22.4.2 Motivating Cross-Entropy Loss

+

Suffice to say, we don’t want to use the MSE when working with logistic regression. Instead, we’ll consider what kind of behavior we would like to see in a loss function.

+

Let \(y\) be the binary label (it can either be 0 or 1), and \(p\) be the model’s predicted probability of the label \(y\) being 1.

+
    +
  • When the true \(y\) is 1, we should incur low loss when the model predicts large \(p\)
  • +
  • When the true \(y\) is 0, we should incur high loss when the model predicts large \(p\)
  • +
+

In other words, our loss function should behave differently depending on the value of the true class, \(y\).

+

The cross-entropy loss incorporates this changing behavior. We will use it throughout our work on logistic regression. Below, we write out the cross-entropy loss for a single datapoint (no averages just yet).

+

\[\text{Cross-Entropy Loss} = \begin{cases} + -\log{(p)} & \text{if } y=1 \\ + -\log{(1-p)} & \text{if } y=0 +\end{cases}\]

+

Why does this (seemingly convoluted) loss function “work”? Let’s break it down.

+ ++++ + + + + + + + + + + + + + + + + + + + + +
When \(y=1\): (plot: cross-entropy loss when Y=1)
  As \(p \rightarrow 0\), loss approaches \(\infty\)
  As \(p \rightarrow 1\), loss approaches 0
When \(y=0\): (plot: cross-entropy loss when Y=0)
  As \(p \rightarrow 0\), loss approaches 0
  As \(p \rightarrow 1\), loss approaches \(\infty\)
+ +

All good – we are seeing the behavior we want for our logistic regression model.

+

The piecewise function we outlined above is difficult to optimize: we don’t want to constantly “check” which form of the loss function we should be using at each step of choosing the optimal model parameters. We can re-express cross-entropy loss in a more convenient way:

+

\[\text{Cross-Entropy Loss} = -\left(y\log{(p)}+(1-y)\log{(1-p)}\right)\]

+

By setting \(y\) to 0 or 1, we see that this new form of cross-entropy loss gives us the same behavior as the original formulation. Another way to think about this is that in either scenario (y being equal to 0 or 1), only one of the cross-entropy loss terms is activated, which gives us a convenient way to combine the two independent loss functions.

+
+
+

When \(y=1\):

+

\[\begin{align} +\text{CE} &= -\left((1)\log{(p)}+(1-1)\log{(1-p)}\right)\\ +&= -\log{(p)} +\end{align}\]

+
+ +
+

When \(y=0\):

+

\[\begin{align} +\text{CE} &= -\left((0)\log{(p)}+(1-0)\log{(1-p)}\right)\\ +&= -\log{(1-p)} +\end{align}\]
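The equivalence worked out above can also be checked numerically; the short sketch below (our own check, not part of the lecture code) confirms that the combined formula matches the piecewise definition for both values of \(y\):

import numpy as np

def ce_piecewise(y, p):
    # piecewise definition of cross-entropy loss
    return -np.log(p) if y == 1 else -np.log(1 - p)

def ce_combined(y, p):
    # single-formula version
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

for y in [0, 1]:
    for p in [0.1, 0.5, 0.9]:
        assert np.isclose(ce_piecewise(y, p), ce_combined(y, p))
print("piecewise and combined forms agree")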

+
+
+

The empirical risk of the logistic regression model is then the mean cross-entropy loss across all datapoints in the dataset. When fitting the model, we want to determine the model parameter \(\theta\) that leads to the lowest mean cross-entropy loss possible.

+

\[ +\begin{align} +R(\theta) &= - \frac{1}{n} \sum_{i=1}^n \left(y_i\log{(p_i)}+(1-y_i)\log{(1-p_i)}\right) \\ +&= - \frac{1}{n} \sum_{i=1}^n \left(y_i\log{\sigma(X_i^{\top}\theta)}+(1-y_i)\log{(1-\sigma(X_i^{\top}\theta))}\right) +\end{align} +\]

+

The optimization problem is therefore to find the estimate \(\hat{\theta}\) that minimizes \(R(\theta)\):

+

\[ +\hat{\theta} = \underset{\theta}{\arg\min} - \frac{1}{n} \sum_{i=1}^n \left(y_i\log{(\sigma(X_i^{\top}\theta))}+(1-y_i)\log{(1-\sigma(X_i^{\top}\theta))}\right) +\]

+

Plotting the cross-entropy loss surface for our toy dataset gives us a more encouraging result – our loss function is now convex. This means we can optimize it using gradient descent. Computing the gradient of the logistic model is fairly challenging, so we’ll let sklearn take care of this for us. You won’t need to compute the gradient of the logistic model in Data 100.

+
+
+Code +
def cross_entropy(y, p_hat):
+    return - y * np.log(p_hat) - (1 - y) * np.log(1 - p_hat)
+
+def mean_cross_entropy_on_toy_data(theta):
+    p_hat = sigmoid(toy_df['x'] * theta)
+    return np.mean(cross_entropy(toy_df['y'], p_hat))
+
+plt.plot(thetas, [mean_cross_entropy_on_toy_data(theta) for theta in thetas], color = 'green')
+plt.ylabel(r'Mean Cross-Entropy Loss($\theta$)')
+plt.xlabel(r'$\theta$');
+
+
+
+
+

+
+
+
+
+
+
+
+

22.5 Maximum Likelihood Estimation

+

It may have seemed like we pulled cross-entropy loss out of thin air. How did we know that taking the negative logarithms of our probabilities would work so well? It turns out that cross-entropy loss is justified by probability theory.

+

The following section is out of scope, but is certainly an interesting read!

+
+

22.5.1 Building Intuition: The Coin Flip

+

To build some intuition for logistic regression, let’s look at an introductory example to classification: the coin flip. Suppose we observe some outcomes of a coin flip (1 = Heads, 0 = Tails).

+
+
flips = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
+flips
+
+
[0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
+
+
+

A reasonable model is to assume all flips are IID (independent and identically distributed). In other words, each flip has the same probability of returning a 1 (or heads). Let’s define a parameter \(\theta\), the probability that the next flip is a heads. We will use this parameter to inform our decision for \(\hat y\) (predicting either 0 or 1) of the next flip. If \(\theta \ge 0.5, \hat y = 1, \text{else } \hat y = 0\).

+

You may be inclined to say \(0.5\) is the best choice for \(\theta\). However, notice that we made no assumption about the coin itself. The coin may be biased, so we should make our decision based only on the data. We know that exactly \(\frac{4}{10}\) of the flips were heads, so we might guess \(\hat \theta = 0.4\). In the next section, we will mathematically prove why this is the best possible estimate.
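A quick check of this guess, using the `flips` list defined above:

import numpy as np

theta_hat = np.mean(flips)   # fraction of heads in the observed flips
print(theta_hat)             # 0.4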

+
+
+

22.5.2 Likelihood of Data

+

Let’s call the result of the coin flip a random variable \(Y\). This is a Bernoulli random variable with two outcomes. \(Y\) has the following distribution:

+

\[P(Y = y) = \begin{cases} + p, \text{if } y=1\\ + 1 - p, \text{if } y=0 + \end{cases} \]

+

\(p\) is unknown to us. But we can find the \(p\) that makes the data we observed the most likely.

+

The probability of observing 4 heads and 6 tails follows the binomial distribution.

+

\[\binom{10}{4} (p)^4 (1-p)^6\]

+

We define the likelihood of obtaining our observed data as a quantity proportional to the probability above. To find it, simply multiply the probabilities of obtaining each coin flip.

+

\[(p)^{4} (1-p)^6\]

+

The technique known as maximum likelihood estimation finds the \(p\) that maximizes the above likelihood. You can find this maximum by taking the derivative of the likelihood, but we’ll provide a more intuitive graphical solution.

+
+
thetas = np.linspace(0, 1)
+plt.plot(thetas, (thetas**4)*(1-thetas)**6)
+plt.xlabel(r"$\theta$")
+plt.ylabel("Likelihood");
+
+
+
+

+
+
+
+
+

More generally, the distribution of a Bernoulli(\(p\)) random variable \(Y\) is:

+

\[P(Y = y) = \begin{cases} +    p, \text{if } y=1\\ +    1 - p, \text{if } y=0 +  \end{cases} \]

+

Equivalently, this can be written in a compact way:

+

\[P(Y=y) = p^y(1-p)^{1-y}\]

+
    +
  • When \(y = 1\), this reads \(P(Y=y) = p\)
  • +
  • When \(y = 0\), this reads \(P(Y=y) = (1-p)\)
  • +
+

In our example, a Bernoulli random variable is analogous to a single data point (e.g., one instance of a basketball team winning or losing a game). All together, our games data consists of many IID Bernoulli(\(p\)) random variables. To find the likelihood of independent events in succession, simply multiply their likelihoods.

+

\[\prod_{i=1}^{n} p^{y_i} (1-p)^{1-y_i}\]

+

As with the coin example, we want to find the parameter \(p\) that maximizes this likelihood. Earlier, we gave an intuitive graphical solution, but let’s take the derivative of the likelihood to find this maximum.

+

At a first glance, this derivative will be complicated! We will have to use the product rule, followed by the chain rule. Instead, we can make an observation that simplifies the problem.

+

Finding the \(p\) that maximizes \[\prod_{i=1}^{n} p^{y_i} (1-p)^{1-y_i}\] is equivalent to the \(p\) that maximizes \[\text{log}(\prod_{i=1}^{n} p^{y_i} (1-p)^{1-y_i})\]

+

This is because \(\text{log}\) is a strictly increasing function. It won’t change the maximum or minimum of the function it was applied to. From \(\text{log}\) properties, \(\text{log}(a*b)\) = \(\text{log}(a) + \text{log}(b)\). We can apply this to our equation above to get:

+

\[\underset{p}{\text{argmax}} \sum_{i=1}^{n} \text{log}(p^{y_i} (1-p)^{1-y_i})\]

+

\[= \underset{p}{\text{argmax}} \sum_{i=1}^{n} (\text{log}(p^{y_i}) + \text{log}((1-p)^{1-y_i}))\]

+

\[= \underset{p}{\text{argmax}} \sum_{i=1}^{n} (y_i\text{log}(p) + (1-y_i)\text{log}(1-p))\]

+

We can add a constant factor of \(\frac{1}{n}\) out front. It won’t affect the \(p\) that maximizes our likelihood.

+

\[=\underset{p}{\text{argmax}} \frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(p) + (1-y_i)\text{log}(1-p)\]

+

One last “trick” we can do is change this to a minimization problem by negating the result. This works because maximizing a function is equivalent to minimizing its negation; negating the concave log-likelihood gives us a convex function to minimize.

+

\[= \underset{p}{\text{argmin}} -\frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(p) + (1-y_i)\text{log}(1-p)\]
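As a sanity check (a sketch of our own, not from the lecture), evaluating this objective on a grid of candidate \(p\) values for the coin flips above confirms that the minimizer is the empirical proportion of heads:

import numpy as np

flips = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0])

def neg_mean_log_likelihood(p):
    # negative mean log-likelihood of the observed flips under Bernoulli(p)
    return -np.mean(flips * np.log(p) + (1 - flips) * np.log(1 - p))

ps = np.linspace(0.01, 0.99, 999)
best_p = ps[np.argmin([neg_mean_log_likelihood(p) for p in ps])]
print(best_p)   # approximately 0.4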

+

Now let’s say that we have data that are independent with different probability \(p_i\). Then, we would want to find the \(p_1, p_2, \dots, p_n\) that maximize \[\prod_{i=1}^{n} p_i^{y_i} (1-p_i)^{1-y_i}\]

+

Setting up and simplifying the optimization problems as we did above, we ultimately want to find:

+

\[= \underset{p_1, \dots, p_n}{\text{argmin}} -\frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(p_i) + (1-y_i)\text{log}(1-p_i)\]

+

For logistic regression, \(p_i = \sigma(x_i^{\top}\theta)\). Plugging that in, we get:

+

\[= \underset{\theta}{\text{argmin}} -\frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(\sigma(x_i^{\top}\theta)) + (1-y_i)\text{log}(1-\sigma(x_i^{\top}\theta))\]

+

This is exactly our average cross-entropy loss minimization problem from before!

+

Why did we do all this complicated math? We have shown that minimizing cross-entropy loss is equivalent to maximizing the likelihood of the training data.

+
    +
  • By minimizing cross-entropy loss, we are choosing the model parameters that are “most likely” for the data we observed.
  • +
+

Note that this is under the assumption that all data is drawn independently from the same logistic regression model with parameter \(\theta\). In fact, many of the model + loss combinations we’ve seen can be motivated using MLE (e.g., OLS, Ridge Regression, etc.). In probability and ML classes, you’ll get the chance to explore MLE further.

+ + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-10-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-10-output-1.png new file mode 100644 index 000000000..cb138d627 Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-10-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-11-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-11-output-1.png new file mode 100644 index 000000000..b40928a13 Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-11-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-13-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-13-output-1.png new file mode 100644 index 000000000..19904c8ee Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-13-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-3-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-3-output-1.png new file mode 100644 index 000000000..12bcebf36 Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-3-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-4-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-4-output-1.png new file mode 100644 index 000000000..85112db7f Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-4-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-5-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-5-output-1.png new file mode 100644 index 000000000..7a64687f4 Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-5-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-6-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-6-output-1.png new file mode 100644 index 000000000..988b2fd9a Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-6-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-7-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-7-output-1.png new file mode 100644 index 000000000..4cdeeec53 Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-7-output-1.png differ diff --git a/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-8-output-1.png b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-8-output-1.png new file mode 100644 index 000000000..f7463e092 Binary files /dev/null and b/docs/logistic_regression_1/logistic_reg_1_files/figure-html/cell-8-output-1.png differ diff --git a/docs/logistic_regression_2/images/confusion_matrix.png b/docs/logistic_regression_2/images/confusion_matrix.png new file mode 100644 index 000000000..75fff830a Binary files /dev/null and b/docs/logistic_regression_2/images/confusion_matrix.png differ diff --git a/docs/logistic_regression_2/images/confusion_matrix_sklearn.png b/docs/logistic_regression_2/images/confusion_matrix_sklearn.png new file mode 100644 index 000000000..8126cd8d2 
Binary files /dev/null and b/docs/logistic_regression_2/images/confusion_matrix_sklearn.png differ diff --git a/docs/logistic_regression_2/images/decision_boundary.png b/docs/logistic_regression_2/images/decision_boundary.png new file mode 100644 index 000000000..df94c58eb Binary files /dev/null and b/docs/logistic_regression_2/images/decision_boundary.png differ diff --git a/docs/logistic_regression_2/images/decision_boundary_true.png b/docs/logistic_regression_2/images/decision_boundary_true.png new file mode 100644 index 000000000..d3b39b6d6 Binary files /dev/null and b/docs/logistic_regression_2/images/decision_boundary_true.png differ diff --git a/docs/logistic_regression_2/images/linear_separability_1D.png b/docs/logistic_regression_2/images/linear_separability_1D.png new file mode 100644 index 000000000..98586398e Binary files /dev/null and b/docs/logistic_regression_2/images/linear_separability_1D.png differ diff --git a/docs/logistic_regression_2/images/linear_separability_2D.png b/docs/logistic_regression_2/images/linear_separability_2D.png new file mode 100644 index 000000000..6b7af88c4 Binary files /dev/null and b/docs/logistic_regression_2/images/linear_separability_2D.png differ diff --git a/docs/logistic_regression_2/images/log_reg_summary.png b/docs/logistic_regression_2/images/log_reg_summary.png new file mode 100644 index 000000000..f7671b509 Binary files /dev/null and b/docs/logistic_regression_2/images/log_reg_summary.png differ diff --git a/docs/logistic_regression_2/images/mean_cross_entropy_loss_plot.png b/docs/logistic_regression_2/images/mean_cross_entropy_loss_plot.png new file mode 100644 index 000000000..4e8f9a1d8 Binary files /dev/null and b/docs/logistic_regression_2/images/mean_cross_entropy_loss_plot.png differ diff --git a/docs/logistic_regression_2/images/pr_curve_perfect.png b/docs/logistic_regression_2/images/pr_curve_perfect.png new file mode 100644 index 000000000..cfb5f2d92 Binary files /dev/null and b/docs/logistic_regression_2/images/pr_curve_perfect.png differ diff --git a/docs/logistic_regression_2/images/pr_curve_thresholds.png b/docs/logistic_regression_2/images/pr_curve_thresholds.png new file mode 100644 index 000000000..c01f478d7 Binary files /dev/null and b/docs/logistic_regression_2/images/pr_curve_thresholds.png differ diff --git a/docs/logistic_regression_2/images/precision-recall-thresh.png b/docs/logistic_regression_2/images/precision-recall-thresh.png new file mode 100644 index 000000000..c1dc555af Binary files /dev/null and b/docs/logistic_regression_2/images/precision-recall-thresh.png differ diff --git a/docs/logistic_regression_2/images/precision_recall_graphic.png b/docs/logistic_regression_2/images/precision_recall_graphic.png new file mode 100644 index 000000000..241c8fc4b Binary files /dev/null and b/docs/logistic_regression_2/images/precision_recall_graphic.png differ diff --git a/docs/logistic_regression_2/images/reg_loss_finite_argmin.png b/docs/logistic_regression_2/images/reg_loss_finite_argmin.png new file mode 100644 index 000000000..68c670dfe Binary files /dev/null and b/docs/logistic_regression_2/images/reg_loss_finite_argmin.png differ diff --git a/docs/logistic_regression_2/images/roc_curve.png b/docs/logistic_regression_2/images/roc_curve.png new file mode 100644 index 000000000..273b0b557 Binary files /dev/null and b/docs/logistic_regression_2/images/roc_curve.png differ diff --git a/docs/logistic_regression_2/images/roc_curve_perfect.png b/docs/logistic_regression_2/images/roc_curve_perfect.png new file mode 
100644 index 000000000..42a9d8488 Binary files /dev/null and b/docs/logistic_regression_2/images/roc_curve_perfect.png differ diff --git a/docs/logistic_regression_2/images/roc_curve_worse_predictor_differing_T.png b/docs/logistic_regression_2/images/roc_curve_worse_predictor_differing_T.png new file mode 100644 index 000000000..1180046cb Binary files /dev/null and b/docs/logistic_regression_2/images/roc_curve_worse_predictor_differing_T.png differ diff --git a/docs/logistic_regression_2/images/roc_curve_worst_predictor.png b/docs/logistic_regression_2/images/roc_curve_worst_predictor.png new file mode 100644 index 000000000..d2b478771 Binary files /dev/null and b/docs/logistic_regression_2/images/roc_curve_worst_predictor.png differ diff --git a/docs/logistic_regression_2/images/toy_linear_separable_dataset.png b/docs/logistic_regression_2/images/toy_linear_separable_dataset.png new file mode 100644 index 000000000..316f271a0 Binary files /dev/null and b/docs/logistic_regression_2/images/toy_linear_separable_dataset.png differ diff --git a/docs/logistic_regression_2/images/toy_linear_separable_dataset_2.png b/docs/logistic_regression_2/images/toy_linear_separable_dataset_2.png new file mode 100644 index 000000000..3e60a7c93 Binary files /dev/null and b/docs/logistic_regression_2/images/toy_linear_separable_dataset_2.png differ diff --git a/docs/logistic_regression_2/images/tpr_fpr.png b/docs/logistic_regression_2/images/tpr_fpr.png new file mode 100644 index 000000000..69d8df649 Binary files /dev/null and b/docs/logistic_regression_2/images/tpr_fpr.png differ diff --git a/docs/logistic_regression_2/images/unreg_loss_infinite_argmin.png b/docs/logistic_regression_2/images/unreg_loss_infinite_argmin.png new file mode 100644 index 000000000..550015129 Binary files /dev/null and b/docs/logistic_regression_2/images/unreg_loss_infinite_argmin.png differ diff --git a/docs/logistic_regression_2/images/varying_threshold.png b/docs/logistic_regression_2/images/varying_threshold.png new file mode 100644 index 000000000..fd146550a Binary files /dev/null and b/docs/logistic_regression_2/images/varying_threshold.png differ diff --git a/docs/logistic_regression_2/logistic_reg_2.html b/docs/logistic_regression_2/logistic_reg_2.html new file mode 100644 index 000000000..ad63ed2fe --- /dev/null +++ b/docs/logistic_regression_2/logistic_reg_2.html @@ -0,0 +1,1186 @@ + + + + + + + + + +23  Logistic Regression II – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

23  Logistic Regression II

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Apply decision rules to make a classification
  • +
  • Learn when logistic regression works well and when it does not
  • +
  • Introduce new metrics for model performance
  • +
+
+
+
+

Today, we will continue studying the logistic regression model. We’ll discuss decision boundaries that help inform the classification of a particular prediction and learn about linear separability. Picking up from last lecture’s discussion of cross-entropy loss, we’ll study a few of its pitfalls and some potential remedies. We will also demonstrate how to fit a logistic regression model with sklearn. Lastly, we’ll return to decision rules and discuss metrics that allow us to determine our model’s performance in different scenarios.

+

This will introduce us to the process of thresholding – a technique used to classify data from our model’s predicted probabilities, or \(P(Y=1|x)\). In doing so, we’ll focus on how these thresholding decisions affect the behavior of our model and learn various evaluation metrics useful for binary classification, and apply them to our study of logistic regression.

+
+

23.1 Decision Boundaries

+

In logistic regression, we model the probability that a datapoint belongs to Class 1.

+
+tpr_fpr +
+


Last week, we developed the logistic regression model to predict that probability, but we never actually made any classifications for whether our prediction \(y\) belongs in Class 0 or Class 1.

+

\[ p = P(Y=1 | x) = \frac{1}{1 + e^{-x^{\top}\theta}}\]

+

A decision rule tells us how to interpret the output of the model to make a decision on how to classify a datapoint. We commonly make decision rules by specifying a threshold, \(T\). If the predicted probability is greater than or equal to \(T\), predict Class 1. Otherwise, predict Class 0.

+

\[\hat y = \text{classify}(x) = \begin{cases} + 1, & P(Y=1|x) \ge T\\ + 0, & \text{otherwise } + \end{cases}\]

+

The threshold is often set to \(T = 0.5\), but not always. We’ll discuss why we might want to use other thresholds \(T \neq 0.5\) later in this lecture.
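As a rough sketch of how this decision rule might be applied in code (assuming the fitted `logistic_model` and feature matrix `X` from the previous note), we can threshold the output of `predict_proba` ourselves:

import numpy as np

T = 0.5  # classification threshold

# probability of Class 1 for each row of X (assumes logistic_model and X from the previous note)
p_hat = logistic_model.predict_proba(X)[:, 1]

# apply the decision rule: predict 1 when P(Y=1|x) >= T, else 0
y_hat = (p_hat >= T).astype(int)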

+

Using our decision rule, we can define a decision boundary as the “line” that splits the data into classes based on its features. For logistic regression, since we are working in \(p\) dimensions, the decision boundary is a hyperplane – a linear combination of the features in \(p\)-dimensions – and we can recover it from the final logistic regression model. For example, if we have a model with 2 features (2D), we have \(\theta = [\theta_0, \theta_1, \theta_2]\) including the intercept term, and we can solve for the decision boundary like so:

+

\[ +\begin{align} +T &= \frac{1}{1 + e^{-(\theta_0 + \theta_1 * \text{feature1} + \theta_2 * \text{feature2})}} \\ +1 + e^{-(\theta_0 + \theta_1 \cdot \text{feature1} + \theta_2 \cdot \text{feature2})} &= \frac{1}{T} \\ +e^{-(\theta_0 + \theta_1 \cdot \text{feature1} + \theta_2 \cdot \text{feature2})} &= \frac{1}{T} - 1 \\ +\theta_0 + \theta_1 \cdot \text{feature1} + \theta_2 \cdot \text{feature2} &= -\log(\frac{1}{T} - 1) +\end{align} +\]

+

For a model with 2 features, the decision boundary is a line in terms of its features. To make it easier to visualize, we’ve included an example of a 1-dimensional and a 2-dimensional decision boundary below. Notice how the decision boundary predicted by our logistic regression model perfectly separates the points into two classes. Here the color is the predicted class, rather than the true class.

+
+varying_threshold +
+

In real life, however, that is often not the case, and we often see some overlap between points of different classes across the decision boundary. The true classes of the 2D data are shown below:

+
+varying_threshold +
+

As you can see, the decision boundary predicted by our logistic regression does not perfectly separate the two classes. There’s a “muddled” region near the decision boundary where our classifier predicts the wrong class. What would the data have to look like for the classifier to make perfect predictions?

+
+
+

23.2 Linear Separability and Regularization

+

A classification dataset is said to be linearly separable if there exists a hyperplane among input features \(x\) that separates the two classes \(y\).

+

Linear separability in 1D can be found with a rugplot of a single feature where a point perfectly separates the classes (Remember that in 1D, our decision boundary is just a point). For example, notice how the plot on the bottom left is linearly separable along the vertical line \(x=0\). However, no such line perfectly separates the two classes on the bottom right.

+
+linear_separability_1D +
+

This same definition holds in higher dimensions. If there are two features, the separating hyperplane is a line in the two-dimensional feature space (for example, a line of the form \(x_2 = m x_1 + b\)). We can visualize this using a scatter plot.

+
+linear_separability_1D +
+

This sounds great! When the dataset is linearly separable, a logistic regression classifier can perfectly assign datapoints into classes. Can it achieve 0 cross-entropy loss?

+

\[-(y \log(p) + (1 - y) \log(1 - p))\]

+

Cross-entropy loss is 0 if \(p = 1\) when \(y = 1\), and \(p = 0\) when \(y = 0\). Consider a simple model with one feature and no intercept.

+

\[P_{\theta}(Y = 1|x) = \sigma(\theta x) = \frac{1}{1 + e^{-\theta x}}\]

+

What \(\theta\) will achieve 0 loss if we train on the datapoint \(x = 1, y = 1\)? We would want \(p = 1\) which occurs when \(\theta \rightarrow \infty\).

+

However, (unexpected) complications may arise. When data is linearly separable, the optimal model parameters diverge to \(\pm \infty\). The sigmoid can never output exactly 0 or 1, so no finite optimal \(\theta\) exists. This can be a problem when using gradient descent to fit the model. Consider a simple, linearly separable “toy” dataset with two datapoints.

+
+toy_linear_separability +
+

Let’s also visualize the mean cross entropy loss along with the direction of the gradient (how this loss surface is calculated is out of scope).

+
+mean_cross_entropy_loss_plot +
+

It’s nearly impossible to see, but the plateau to the right is slightly tilted. Because gradient descent follows the tilted loss surface downwards, it never converges.

+

The diverging weights cause the model to be overconfident. Say we add a new point \((x, y) = (-0.5, 1)\). Following the behavior above, our model will incorrectly predict \(p=0\), and thus, \(\hat y = 0\).

+
+toy_linear_separability +
+


The loss incurred by this misclassified point is infinite.

+

\[-(y\text{ log}(p) + (1-y)\text{ log}(1-p))=-1 \cdot \text{log}(0) = \infty\]

+

Thus, diverging weights (\(|\theta| \rightarrow \infty\)) occur with linearly separable data. “Overconfidence”, as shown here, is a particularly dangerous version of overfitting.
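We can see this behavior empirically with a small illustration of our own (not the lecture's code): fitting sklearn's `LogisticRegression` on a tiny linearly separable dataset with progressively weaker regularization makes the fitted coefficient keep growing rather than settle on a finite optimum.

import numpy as np
from sklearn.linear_model import LogisticRegression

# a tiny, linearly separable dataset: negative x -> class 0, positive x -> class 1
X = np.array([[-1.0], [1.0]])
y = np.array([0, 1])

for C in [1, 100, 10_000, 1_000_000]:
    model = LogisticRegression(C=C, max_iter=10_000)  # larger C = weaker L2 regularization
    model.fit(X, y)
    print(C, model.coef_[0][0])                       # coefficient magnitude keeps increasing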

+
+

23.2.1 Regularized Logistic Regression

+

To avoid large weights and infinite loss (particularly on linearly separable data), we use regularization. The same principles apply as with linear regression - make sure to standardize your features first.

+

For example, \(L2\) (Ridge) Logistic Regression takes on the form:

+

\[\min_{\theta} -\frac{1}{n} \sum_{i=1}^{n} (y_i \text{log}(\sigma(X_i^T\theta)) + (1-y_i)\text{log}(1-\sigma(X_i^T\theta))) + \lambda \sum_{j=1}^{d} \theta_j^2\]

+

Now, let us compare the loss functions of un-regularized and regularized logistic regression.

+
+unreg_loss +
+
+reg_loss +
+

As we can see, \(L2\) regularization helps us prevent diverging weights and deters against “overconfidence.”

+

sklearn’s logistic regression defaults to \(L2\) regularization and C=1.0; C is the inverse of \(\lambda\): \[C = \frac{1}{\lambda}\] Setting C to a large value, for example, C=300.0, results in minimal regularization.

+
# sklearn defaults
+model = LogisticRegression(penalty = 'l2', C = 1.0, ...)
+model.fit(X, y) # X: design matrix of features, y: vector of 0/1 labels
+

Note that in Data 100, we only use sklearn to fit logistic regression models. There is no closed-form solution to the optimal theta vector, and the gradient is a little messy (see the bonus section below for details).

+

From here, the .predict function returns the predicted class \(\hat y\) of the point. In the simple binary case where the threshold is 0.5,

+

\[\hat y = \begin{cases} + 1, & P(Y=1|x) \ge 0.5\\ + 0, & \text{otherwise } + \end{cases}\]

+
+
+
+

23.3 Performance Metrics

+

You might be thinking, if we’ve already introduced cross-entropy loss, why do we need additional ways of assessing how well our models perform? In linear regression, we made numerical predictions and used a loss function to determine how “good” these predictions were. In logistic regression, our ultimate goal is to classify data – we are much more concerned with whether or not each datapoint was assigned the correct class using the decision rule. As such, we are interested in the quality of classifications, not the predicted probabilities.

+

The most basic evaluation metric is accuracy, that is, the proportion of correctly classified points.

+

\[\text{accuracy} = \frac{\# \text{ of points classified correctly}}{\# \text{ of total points}}\]

+

Translated to code:

+
def accuracy(X, Y):
+    return np.mean(model.predict(X) == Y)
+    
+model.score(X, Y) # built-in accuracy function
+

You can find the sklearn documentation here.

+

However, accuracy is not always a great metric for classification. To understand why, let’s consider a classification problem with 100 emails where only 5 are truly spam, and the remaining 95 are truly ham. We’ll investigate two models where accuracy is a poor metric.

+
    +
  • Model 1: Our first model classifies every email as non-spam. The model’s accuracy is high (\(\frac{95}{100} = 0.95\)), but it doesn’t detect any spam emails. Despite the high accuracy, this is a bad model.
  • +
  • Model 2: The second model classifies every email as spam. The accuracy is low (\(\frac{5}{100} = 0.05\)), but the model correctly labels every spam email. Unfortunately, it also misclassifies every non-spam email.
  • +
+

As this example illustrates, accuracy is not always a good metric for classification, particularly when your data could exhibit class imbalance (e.g., very few 1’s compared to 0’s).

+
+

23.3.1 Types of Classification

+

There are 4 different classifications that our model might make:

+
    +
  1. True positive: correctly classify a positive point as being positive (\(y=1\) and \(\hat{y}=1\))
  2. +
  3. True negative: correctly classify a negative point as being negative (\(y=0\) and \(\hat{y}=0\))
  4. +
  5. False positive: incorrectly classify a negative point as being positive (\(y=0\) and \(\hat{y}=1\))
  6. +
  7. False negative: incorrectly classify a positive point as being negative (\(y=1\) and \(\hat{y}=0\))
  8. +
+

These classifications can be concisely summarized in a confusion matrix.

+
+confusion_matrix +
+

An easy way to remember this terminology is as follows:

+
    +
  1. Look at the second word in the phrase. Positive means a prediction of 1. Negative means a prediction of 0.
  2. +
  3. Look at the first word in the phrase. True means our prediction was correct. False means it was incorrect.
  4. +
+

We can now write the accuracy calculation as \[\text{accuracy} = \frac{TP + TN}{n}\]

+

In sklearn, we use the following syntax to plot a confusion matrix:

+
from sklearn.metrics import confusion_matrix
+cm = confusion_matrix(Y_true, Y_pred)
+
+confusion_matrix +
+
+
+

23.3.2 Accuracy, Precision, and Recall

+

The purpose of our discussion of the confusion matrix was to motivate better performance metrics for classification problems with class imbalance - namely, precision and recall.

+

Precision is defined as

+

\[\text{precision} = \frac{\text{TP}}{\text{TP + FP}}\]

+

Precision answers the question: “Of all observations that were predicted to be \(1\), what proportion was actually \(1\)?” It measures how accurate the classifier is when its predictions are positive.

+

Recall (or sensitivity) is defined as

+

\[\text{recall} = \frac{\text{TP}}{\text{TP + FN}}\]

+

Recall aims to answer: “Of all observations that were actually \(1\), what proportion was predicted to be \(1\)?” It measures how many actual positive cases were missed.

+

Here’s a helpful graphic that summarizes our discussion above.

+
+confusion_matrix +
+
+
+

23.3.3 Example Calculation

+

In this section, we will calculate the accuracy, precision, and recall performance metrics for our earlier spam classification example. As a reminder, we had 100 emails, 5 of which were spam. We designed two models:

+
    +
  • Model 1: Predict that every email is non-spam
  • +
  • Model 2: Predict that every email is spam
  • +
+
+

23.3.3.1 Model 1

+

First, let’s begin by creating the confusion matrix.

+ +++++ + + + + + + + + + + + + + + + + + + + +
          Predicted 0          Predicted 1
Actual 0  True Negative: 95    False Positive: 0
Actual 1  False Negative: 5    True Positive: 0
+

\[\text{accuracy} = \frac{95}{100} = 0.95\] \[\text{precision} = \frac{0}{0 + 0} = \text{undefined}\] \[\text{recall} = \frac{0}{0 + 5} = 0\]

+

Notice how our precision is undefined because we never predicted class \(1\). Our recall is 0 for the same reason – the numerator is 0 (we had no positive predictions).

+
+
+

23.3.3.2 Model 2

+

The confusion matrix for Model 2 is:

+ +++++ + + + + + + + + + + + + + + + + + + + +
          Predicted 0          Predicted 1
Actual 0  True Negative: 0     False Positive: 95
Actual 1  False Negative: 0    True Positive: 5
+

\[\text{accuracy} = \frac{5}{100} = 0.05\] \[\text{precision} = \frac{5}{5 + 95} = 0.05\] \[\text{recall} = \frac{5}{5 + 0} = 1\]

+

Our precision is low because we have many false positives, and our recall is perfect - we correctly classified all spam emails (we never predicted class \(0\)).
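These hand computations can be reproduced with `sklearn.metrics`. Below is a sketch (our own encoding of the example, with spam as 1 and ham as 0; `zero_division=0` makes Model 1's undefined precision come back as 0 rather than raising a warning):

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 100 emails: 5 spam (1), 95 ham (0)
y_true = np.array([1] * 5 + [0] * 95)

y_pred_model1 = np.zeros(100, dtype=int)  # Model 1: predict everything as ham
y_pred_model2 = np.ones(100, dtype=int)   # Model 2: predict everything as spam

for name, y_pred in [("Model 1", y_pred_model1), ("Model 2", y_pred_model2)]:
    print(name,
          accuracy_score(y_true, y_pred),
          precision_score(y_true, y_pred, zero_division=0),
          recall_score(y_true, y_pred))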

+
+
+
+

23.3.4 Precision vs. Recall

+

Precision (\(\frac{\text{TP}}{\text{TP} + \textbf{ FP}}\)) penalizes false positives, while recall (\(\frac{\text{TP}}{\text{TP} + \textbf{ FN}}\)) penalizes false negatives. In fact, precision and recall are inversely related. This is evident in our second model – we observed a high recall and low precision. Usually, there is a tradeoff in these two (most models can either minimize the number of FP or FN; and in rare cases, both).

+

The specific performance metric(s) to prioritize depends on the context. In many medical settings, there might be a much higher cost to missing positive cases. For instance, in our breast cancer example, it is more costly to misclassify malignant tumors (false negatives) than it is to incorrectly classify a benign tumor as malignant (false positives). In the case of the latter, pathologists can conduct further studies to verify malignant tumors. As such, we should minimize the number of false negatives. This is equivalent to maximizing recall.

+
+
+

23.3.5 Three More Metrics

+

The True Positive Rate (TPR) is defined as

+

\[\text{true positive rate} = \frac{\text{TP}}{\text{TP + FN}}\]

+

You’ll notice this is equivalent to recall. In the context of our spam email classifier, it answers the question: “What proportion of spam did I mark correctly?”. We’d like this to be close to \(1\).

+

The True Negative Rate (TNR) is defined as

+

\[\text{true negative rate} = \frac{\text{TN}}{\text{TN + FP}}\]

+

Another word for TNR is specificity. This answers the question: “What proportion of ham did I mark correctly?”. We’d like this to be close to \(1\).

+

The False Positive Rate (FPR) is defined as

+

\[\text{false positive rate} = \frac{\text{FP}}{\text{FP + TN}}\]

+

FPR is equal to 1 - specificity, or 1 - TNR. This answers the question: “What proportion of regular email did I mark as spam?”. We’d like this to be close to \(0\).

+

As we increase threshold \(T\), both TPR and FPR decrease. We’ve plotted this relationship below for some model on a toy dataset.

+
+tpr_fpr +
+
+
+
+

23.4 Adjusting the Classification Threshold

+

One way to minimize the number of FP vs. FN (equivalently, maximizing precision vs. recall) is by adjusting the classification threshold \(T\).

+

\[\hat y = \begin{cases} + 1, & P(Y=1|x) \ge T\\ + 0, & \text{otherwise } + \end{cases}\]

+

The default threshold in sklearn is \(T = 0.5\). As we increase the threshold \(T\), we “raise the standard” of how confident our classifier needs to be to predict 1 (i.e., “positive”).

+
+varying_threshold +
+

As you may notice, the choice of threshold \(T\) impacts our classifier’s performance.

+
    +
  • High \(T\): Most predictions are \(0\). +
      +
    • Lots of false negatives
    • +
    • Fewer false positives
    • +
  • +
  • Low \(T\): Most predictions are \(1\). +
      +
    • Lots of false positives
    • +
    • Fewer false negatives
    • +
  • +
+

In fact, we can choose a threshold \(T\) based on our desired number, or proportion, of false positives and false negatives. We can do so using a few different tools. We’ll touch on two of the most important ones in Data 100.

+
    +
  1. Precision-Recall Curve (PR Curve)
  2. +
  3. “Receiver Operating Characteristic” Curve (ROC Curve)
  4. +
+
+

23.4.1 Precision-Recall Curves

+

A Precision-Recall Curve (PR Curve) is an alternative to the ROC curve (introduced in the next section) that displays the relationship between precision and recall for various threshold values. In this curve, we test out many different possible thresholds, and for each one we compute the precision and recall of the classifier.

+

Let’s first consider how precision and recall change as a function of the threshold \(T\). We know this quite well from earlier – precision will generally increase, and recall will decrease.

+
+precision-recall-thresh +
+

Displayed below is the PR Curve for the same toy dataset. Notice how threshold values increase as we move to the left.

+
+pr_curve_thresholds +
+

Once again, the perfect classifier will resemble the orange curve, this time, facing the opposite direction.

+
+pr_curve_perfect +
+

We want our PR curve to be as close to the “top right” of this graph as possible. Again, we use the AUC to determine “closeness”, with the perfect classifier exhibiting an AUC = 1 (and the worst with an AUC = 0.5).
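As a sketch (not the lecture's code), assuming a fitted sklearn classifier `model` and held-out data `X_test`, `Y_test`, the PR curve and its AUC can be computed with `sklearn.metrics`:

from sklearn.metrics import precision_recall_curve, auc
import matplotlib.pyplot as plt

# predicted probabilities of Class 1 (model, X_test, Y_test are assumed to exist)
p_hat = model.predict_proba(X_test)[:, 1]

# precision and recall at many different thresholds
precision, recall, thresholds = precision_recall_curve(Y_test, p_hat)
print("PR AUC:", auc(recall, precision))

plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision");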

+
+
+

23.4.2 The ROC Curve

+

The “Receiver Operating Characteristic” Curve (ROC Curve) plots the tradeoff between FPR and TPR. Notice how the far-left of the curve corresponds to higher threshold \(T\) values. At lower thresholds, the FPR and TPR are both high as there are many positive predictions while at higher thresholds the FPR and TPR are both low as there are fewer positive predictions.

+
+roc_curve +
+

The “perfect” classifier is the one that has a TPR of 1 and an FPR of 0. This is achieved at the top-left of the plot below. More generally, its ROC curve resembles the curve in orange.

+
+roc_curve_perfect +
+

We want our model to be as close to this orange curve as possible. How do we quantify “closeness”?

+

We can compute the area under curve (AUC) of the ROC curve. Notice how the perfect classifier has an AUC = 1. The closer our model’s AUC is to 1, the better it is.
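Similarly, assuming the same fitted classifier `model` and held-out `X_test`, `Y_test` as before, a sketch for computing the ROC curve and its AUC with `sklearn.metrics`:

from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

# predicted probabilities of Class 1 (model, X_test, Y_test are assumed to exist)
p_hat = model.predict_proba(X_test)[:, 1]

# FPR and TPR at many different thresholds, plus the area under the curve
fpr, tpr, thresholds = roc_curve(Y_test, p_hat)
auc_value = roc_auc_score(Y_test, p_hat)

plt.plot(fpr, tpr, label=f"AUC = {auc_value:.3f}")
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend();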

+
+

23.4.2.1 (Extra) What is the “worst” AUC, and why is it 0.5?

+

On the other hand, a terrible model will have an AUC closer to 0.5. Random predictors randomly predict \(P(Y = 1 | x)\) to be uniformly between 0 and 1. This indicates the classifier is not able to distinguish between positive and negative classes, and thus, randomly predicts one of the two.

+
+roc_curve_worst_predictor +
+

We can also illustrate this by comparing different thresholds and seeing their points on the ROC curve.

+
+roc_curve_worse_predictor_differing_T +
+
+
+
+
+

23.5 (Bonus) Gradient Descent for Logistic Regression

+

Let’s define the following terms: \[ +\begin{align} +t_i &= \phi(x_i)^T \theta \\ +p_i &= \sigma(t_i) \\ +t_i &= \log(\frac{p_i}{1 - p_i}) \\ +1 - \sigma(t_i) &= \sigma(-t_i) \\ +\frac{d}{dt} \sigma(t) &= \sigma(t) \sigma(-t) +\end{align} +\]

+

Now, we can simplify the cross-entropy loss \[ +\begin{align} +y_i \log(p_i) + (1 - y_i) \log(1 - p_i) &= y_i \log(\frac{p_i}{1 - p_i}) + \log(1 - p_i) \\ +&= y_i \phi(x_i)^T + \log(\sigma(-\phi(x_i)^T \theta)) +\end{align} +\]

+

Hence, the optimal \(\hat{\theta}\) is \[\text{argmin}_{\theta} - \frac{1}{n} \sum_{i=1}^n (y_i \phi(x_i)^T \theta + \log(\sigma(-\phi(x_i)^T \theta)))\]

+

We want to minimize \[L(\theta) = - \frac{1}{n} \sum_{i=1}^n (y_i \phi(x_i)^T \theta + \log(\sigma(-\phi(x_i)^T \theta)))\]

+

So we take the derivative \[ +\begin{align} +\triangledown_{\theta} L(\theta) &= - \frac{1}{n} \sum_{i=1}^n \triangledown_{\theta} \left(y_i \phi(x_i)^T \theta\right) + \triangledown_{\theta} \log(\sigma(-\phi(x_i)^T \theta)) \\ +&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) + \triangledown_{\theta} \log(\sigma(-\phi(x_i)^T \theta)) \\ +&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) + \frac{1}{\sigma(-\phi(x_i)^T \theta)} \triangledown_{\theta} \sigma(-\phi(x_i)^T \theta) \\ +&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) + \frac{\sigma(-\phi(x_i)^T \theta)\sigma(\phi(x_i)^T \theta)}{\sigma(-\phi(x_i)^T \theta)} \triangledown_{\theta} \left(-\phi(x_i)^T \theta\right) \\ +&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) - \sigma(\phi(x_i)^T \theta)\phi(x_i) \\ +&= - \frac{1}{n} \sum_{i=1}^n \left(y_i - \sigma(\phi(x_i)^T \theta)\right)\phi(x_i) +\end{align} +\]

+

Setting the derivative equal to 0 and solving for \(\hat{\theta}\), we find that there’s no general analytic solution. Therefore, we must solve using numeric methods.

+
+

23.5.1 Gradient Descent Update Rule

+

\[\theta^{(0)} \leftarrow \text{initial vector (random, zeros, ...)} \]

+

For \(\tau\) from 0 to convergence: \[ \theta^{(\tau + 1)} \leftarrow \theta^{(\tau)} - \rho(\tau)\left( \frac{1}{n} \sum_{i=1}^n \triangledown_{\theta} L_i(\theta) \mid_{\theta = \theta^{(\tau)}}\right) \]

+
+
+

23.5.2 Stochastic Gradient Descent Update Rule

+

\[\theta^{(0)} \leftarrow \text{initial vector (random, zeros, ...)} \]

+

For \(\tau\) from 0 to convergence, let \(B\) ~ \(\text{Random subset of indices}\). \[ \theta^{(\tau + 1)} \leftarrow \theta^{(\tau)} - \rho(\tau)\left( \frac{1}{|B|} \sum_{i \in B} \triangledown_{\theta} L_i(\theta) \mid_{\theta = \theta^{(\tau)}}\right) \]
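To make these update rules concrete, here is a minimal batch gradient descent sketch of our own (not the course's implementation), assuming a design matrix `Phi` whose rows are \(\phi(x_i)^T\) and a constant learning rate; it uses the gradient derived above, \(-\frac{1}{n}\sum_{i=1}^n (y_i - \sigma(\phi(x_i)^{\top}\theta))\phi(x_i)\).

import numpy as np

def sigmoid(t):
    return 1 / (1 + np.exp(-t))

def fit_logistic_gd(Phi, y, learning_rate=0.1, n_iters=1000):
    """Batch gradient descent on mean cross-entropy loss (no regularization)."""
    n, d = Phi.shape
    theta = np.zeros(d)                       # initial parameter vector
    for _ in range(n_iters):
        p = sigmoid(Phi @ theta)              # predicted probabilities
        gradient = -(Phi.T @ (y - p)) / n     # gradient of mean cross-entropy loss
        theta = theta - learning_rate * gradient
    return theta

# example on a small synthetic dataset (same values as the toy data from the previous note)
Phi = np.array([[-4.0], [-2.0], [-0.5], [1.0], [3.0], [5.0]])
y = np.array([0, 0, 1, 0, 1, 1])
print(fit_logistic_gd(Phi, y))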

+ + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/ols/images/columns.png b/docs/ols/images/columns.png new file mode 100644 index 000000000..1bbb36d1d Binary files /dev/null and b/docs/ols/images/columns.png differ diff --git a/docs/ols/images/design_matrix.png b/docs/ols/images/design_matrix.png new file mode 100644 index 000000000..2f098eca5 Binary files /dev/null and b/docs/ols/images/design_matrix.png differ diff --git a/docs/ols/images/matmul1.png b/docs/ols/images/matmul1.png new file mode 100644 index 000000000..9443c4cca Binary files /dev/null and b/docs/ols/images/matmul1.png differ diff --git a/docs/ols/images/matmul2.png b/docs/ols/images/matmul2.png new file mode 100644 index 000000000..ac184baee Binary files /dev/null and b/docs/ols/images/matmul2.png differ diff --git a/docs/ols/images/observation.png b/docs/ols/images/observation.png new file mode 100644 index 000000000..c943fc80c Binary files /dev/null and b/docs/ols/images/observation.png differ diff --git a/docs/ols/images/residual.png b/docs/ols/images/residual.png new file mode 100644 index 000000000..c35b336e0 Binary files /dev/null and b/docs/ols/images/residual.png differ diff --git a/docs/ols/images/residual_plot.png b/docs/ols/images/residual_plot.png new file mode 100644 index 000000000..9a54148fa Binary files /dev/null and b/docs/ols/images/residual_plot.png differ diff --git a/docs/ols/images/row_col.png b/docs/ols/images/row_col.png new file mode 100644 index 000000000..4a387f5ee Binary files /dev/null and b/docs/ols/images/row_col.png differ diff --git a/docs/ols/images/span.png b/docs/ols/images/span.png new file mode 100644 index 000000000..876e08337 Binary files /dev/null and b/docs/ols/images/span.png differ diff --git a/docs/ols/ols.html b/docs/ols/ols.html new file mode 100644 index 000000000..5164dfb0a --- /dev/null +++ b/docs/ols/ols.html @@ -0,0 +1,2157 @@ + + + + + + + + + +12  Ordinary Least Squares – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

12  Ordinary Least Squares

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Define linearity with respect to a vector of parameters \(\theta\).
  • +
  • Understand the use of matrix notation to express multiple linear regression.
  • +
  • Interpret ordinary least squares as the minimization of the norm of the residual vector.
  • +
  • Compute performance metrics for multiple linear regression.
  • +
+
+
+
+

We’ve now spent a number of lectures exploring how to build effective models – we introduced the SLR and constant models, selected cost functions to suit our modeling task, and applied transformations to improve the linear fit.

+

Throughout all of this, we considered models of one feature (\(\hat{y}_i = \theta_0 + \theta_1 x_i\)) or zero features (\(\hat{y}_i = \theta_0\)). As data scientists, we usually have access to datasets containing many features. To make the best models we can, it will be beneficial to consider all of the variables available to us as inputs to a model, rather than just one. In today’s lecture, we’ll introduce multiple linear regression as a framework to incorporate multiple features into a model. We will also learn how to accelerate the modeling process – specifically, we’ll see how linear algebra offers us a powerful set of tools for understanding model performance.

+
+

12.1 OLS Problem Formulation

+
+

12.1.1 Multiple Linear Regression

+

Multiple linear regression is an extension of simple linear regression that adds additional features to the model. The multiple linear regression model takes the form:

+

\[\hat{y} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:...\:+\:\theta_p x_{p}\]

+

Our predicted value of \(y\), \(\hat{y}\), is a linear combination of the features of a single observation, \(x_i\), and the parameters, \(\theta_i\).

+

We can explore this idea further by looking at a dataset containing aggregate per-player data from the 2018-19 NBA season, downloaded from Kaggle.

+
+
+Code +
import pandas as pd
+nba = pd.read_csv('data/nba18-19.csv', index_col=0)
+nba.index.name = None # Drops name of index (players are ordered by rank)
+
+
+
+
+Code +
nba.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
PlayerPosAgeTmGGSMPFGFGAFG%...FT%ORBDRBTRBASTSTLBLKTOVPFPTS
1Álex Abrines\abrinal01SG25OKC31219.01.85.10.357...0.9230.21.41.50.60.50.20.51.75.3
2Quincy Acy\acyqu01PF28PHO10012.30.41.80.222...0.7000.32.22.50.80.10.40.42.41.7
3Jaylen Adams\adamsja01PG22ATL34112.61.13.20.345...0.7780.31.41.81.90.40.10.81.33.2
4Steven Adams\adamsst01C25OKC808033.46.010.10.595...0.5004.94.69.51.61.51.01.72.613.9
5Bam Adebayo\adebaba01C21MIA822823.33.45.90.576...0.7352.05.37.32.20.90.81.52.58.9
+ +

5 rows × 29 columns

+
+
+
+

Let’s say we are interested in predicting the number of points (PTS) an athlete will score in a basketball game this season.

+

Suppose we want to fit a linear model by using some characteristics, or features of a player. Specifically, we’ll focus on field goals, assists, and 3-point attempts.

+
    +
  • FG, the average number of (2-point) field goals per game
  • +
  • AST, the average number of assists per game
  • +
  • 3PA, the average number of 3-point field goals attempted per game
  • +
+
+
+Code +
nba[['FG', 'AST', '3PA', 'PTS']].head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FGAST3PAPTS
11.80.64.15.3
20.40.81.51.7
31.11.92.23.2
46.01.60.013.9
53.42.20.28.9
+ +
+
+
+

Because we are now dealing with many parameter values, we’ve collected them all into a parameter vector with dimensions \((p+1) \times 1\) to keep things tidy. Remember that \(p\) represents the number of features we have (in this case, 3).

+

\[\theta = \begin{bmatrix} \theta_{0} \\ \theta_{1} \\ \vdots \\ \theta_{p} \end{bmatrix}\]

+

We are working with two vectors here: a row vector representing the observed data, and a column vector containing the model parameters. The multiple linear regression model is equivalent to the dot (scalar) product of the observation vector and parameter vector.

+

\[[1,\:x_{1},\:x_{2},\:x_{3},\:...,\:x_{p}] \theta = [1,\:x_{1},\:x_{2},\:x_{3},\:...,\:x_{p}] \begin{bmatrix} \theta_{0} \\ \theta_{1} \\ \vdots \\ \theta_{p} \end{bmatrix} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:...\:+\:\theta_p x_{p}\]

+

Notice that we have inserted 1 as the first value in the observation vector. When the dot product is computed, this 1 will be multiplied with \(\theta_0\) to give the intercept of the regression model. We call this 1 entry the intercept or bias term.

+

Given that we have three features here, we can express this model as: \[\hat{y} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:\theta_3 x_{3}\]

+

Our features are represented by \(x_1\) (FG), \(x_2\) (AST), and \(x_3\) (3PA), each with a corresponding parameter: \(\theta_1\), \(\theta_2\), and \(\theta_3\).
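To make the dot-product view concrete, here is a minimal sketch (NumPy assumed) that computes a single prediction for the first player in the nba table above. The parameter values below are made up for illustration; they are not fitted values.

import numpy as np

# Hypothetical (not fitted) parameters: intercept, FG, AST, 3PA
theta = np.array([1.0, 2.0, 0.5, 0.3])

# First observation from the nba table, with a leading 1 for the intercept term
x = np.array([1.0, 1.8, 0.6, 4.1])

y_hat = x @ theta  # 1*1.0 + 1.8*2.0 + 0.6*0.5 + 4.1*0.3 = 6.13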

+

In statistics, this model + loss is called Ordinary Least Squares (OLS). The solution to OLS is the set of parameters \(\hat{\theta}\) that minimizes the loss, also called the least squares estimate.

+
+
+

12.1.2 Linear Algebra Approach

+
+
+
+ +
+
+Linear Algebra Review: Vector Dot Product +
+
+
+
+
+

The dot product (or inner product) is a vector operation that:

+
    +
  • Can only be carried out on two vectors of the same length
  • +
  • Sums up the products of the corresponding entries of the two vectors
  • +
  • Returns a single number
  • +
+

For example, let \[\begin{align} \vec{u} = \begin{bmatrix}1 \\ 2 \\ 3\end{bmatrix}, \vec{v} = \begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix} \end{align}\]

+

The dot product between \(\vec{u}\) and \(\vec{v}\) is \[\begin{align} \vec{u} \cdot \vec{v} &= \vec{u}^T \vec{v} = \vec{v}^T \vec{u} \\ &= 1 \cdot 1 + 2 \cdot 1 + 3 \cdot 1 \\ &= 6 \end{align}\]

+

While not in scope, note that we can also interpret the dot product geometrically:

+
    +
  • It is the product of three things: the magnitudes of the two vectors and the cosine of the angle between them: \[\vec{u} \cdot \vec{v} = ||\vec{u}|| \cdot ||\vec{v}|| \cdot \cos \theta\]
  • +
+
+
+
+

We now know how to generate a single prediction from multiple observed features. Data scientists usually work at scale – that is, they want to build models that can produce many predictions, all at once. The vector notation we introduced above gives us a hint on how we can expedite multiple linear regression. We want to use the tools of linear algebra.

+

Let’s think about how we can apply what we did above. To accommodate for the fact that we’re considering several feature variables, we’ll adjust our notation slightly. Each observation can now be thought of as a row vector with an entry for each of \(p\) features.

+
+ + + + +
+observation +
+
+

To make a prediction from the first observation in the data, we take the dot product of the parameter vector and first observation vector. To make a prediction from the second observation, we would repeat this process to find the dot product of the parameter vector and the second observation vector. If we wanted to find the model predictions for each observation in the dataset, we’d repeat this process for all \(n\) observations in the data.

+

\[\hat{y}_1 = \theta_0 + \theta_1 x_{11} + \theta_2 x_{12} + ... + \theta_p x_{1p} = [1,\:x_{11},\:x_{12},\:x_{13},\:...,\:x_{1p}] \theta\] \[\hat{y}_2 = \theta_0 + \theta_1 x_{21} + \theta_2 x_{22} + ... + \theta_p x_{2p} = [1,\:x_{21},\:x_{22},\:x_{23},\:...,\:x_{2p}] \theta\] \[\vdots\] \[\hat{y}_n = \theta_0 + \theta_1 x_{n1} + \theta_2 x_{n2} + ... + \theta_p x_{np} = [1,\:x_{n1},\:x_{n2},\:x_{n3},\:...,\:x_{np}] \theta\]

+

Our observed data is represented by \(n\) row vectors, each with dimension \((p+1)\). We can collect them all into a single matrix, which we call \(\mathbb{X}\).

+
+ + + + +
+design_matrix +
+
+

The matrix \(\mathbb{X}\) is known as the design matrix. It contains all observed data for each of our \(p\) features, where each row corresponds to one observation, and each column corresponds to a feature. It often (but not always) contains an additional column of all ones to represent the intercept or bias column.

+

To review what is happening in the design matrix: each row represents a single observation (for example, a student in Data 100), and each column represents a feature (for example, the ages of students in Data 100). This convention allows us to easily transfer our previous work in DataFrames over to this new linear algebra perspective.

+
+ + + + +
+row_col +
+
+

The multiple linear regression model can then be restated in terms of matrices: \[\Large \mathbb{\hat{Y}} = \mathbb{X} \theta\]

+

Here, \(\mathbb{\hat{Y}}\) is the prediction vector with \(n\) elements (\(\mathbb{\hat{Y}} \in \mathbb{R}^{n}\)); it contains the prediction made by the model for each of the \(n\) input observations. \(\mathbb{X}\) is the design matrix with dimensions \(\mathbb{X} \in \mathbb{R}^{n \times (p + 1)}\), and \(\theta\) is the parameter vector with dimensions \(\theta \in \mathbb{R}^{(p + 1)}\). Note that our true output \(\mathbb{Y}\) is also a vector with \(n\) elements (\(\mathbb{Y} \in \mathbb{R}^{n}\)).
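As a sketch of how this looks in code (reusing the nba DataFrame loaded earlier and the FG, AST, and 3PA features; NumPy assumed), we can assemble the design matrix \(\mathbb{X}\) and the true output vector \(\mathbb{Y}\) as follows. The variable names X and Y are our own choices.

import numpy as np

X = nba[["FG", "AST", "3PA"]].copy()
X.insert(0, "Bias", 1)          # leading column of all 1s for the intercept
X = X.to_numpy()                # design matrix, shape (n, p + 1)
Y = nba["PTS"].to_numpy()       # true outputs, shape (n,)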

+
+
+
+ +
+
+Linear Algebra Review: Linearity +
+
+
+

An expression is linear in \(\theta\) (a set of parameters) if it is a linear combination of the elements of the set. Checking if an expression can separate into a matrix product of two terms – a vector of \(\theta\)s, and a matrix/vector not involving \(\theta\) – is a good indicator of linearity.

+

For example, consider the vector \(\theta = [\theta_0, \theta_1, \theta_2]\)

+
    +
  • \(\hat{y} = \theta_0 + 2\theta_1 + 3\theta_2\) is linear in theta, and we can separate it into a matrix product of two terms:
  • +
+

\[\hat{y} = \begin{bmatrix} 1 \space 2 \space 3 \end{bmatrix} \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \end{bmatrix}\]

+
    +
  • \(\hat{y} = \theta_0\theta_1 + 2\theta_1^2 + 3log(\theta_2)\) is not linear in theta, as the \(\theta_1\) term is squared, and the \(\theta_2\) term is logged. We cannot separate it into a matrix product of two terms.
  • +
+
+
+
+
+

12.1.3 Mean Squared Error

+

We now have a new approach to understanding models in terms of vectors and matrices. To accompany this new convention, we should update our understanding of risk functions and model fitting.

+

Recall our definition of MSE: \[R(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2\]

+

At its heart, the MSE is a measure of distance – it gives an indication of how “far away” the predictions are from the true values, on average.

+
+
+
+ +
+
+Linear Algebra: L2 Norm +
+
+
+

When working with vectors, this idea of “distance” or the vector’s size/length is represented by the norm. More precisely, the distance between two vectors \(\vec{a}\) and \(\vec{b}\) can be expressed as: \[||\vec{a} - \vec{b}||_2 = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \ldots + (a_n - b_n)^2} = \sqrt{\sum_{i=1}^n (a_i - b_i)^2}\]

+

The double bars are mathematical notation for the norm. The subscript 2 indicates that we are computing the L2, or Euclidean, norm.

+

The two norms we need to know for Data 100 are the L1 and L2 norms (sound familiar?). In this note, we’ll focus on L2 norm. We’ll dive into L1 norm in future lectures.

+

For the n-dimensional vector \[\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}\] its L2 vector norm is

+

\[||\vec{x}||_2 = \sqrt{(x_1)^2 + (x_2)^2 + \ldots + (x_n)^2} = \sqrt{\sum_{i=1}^n (x_i)^2}\]

+

The L2 vector norm is a generalization of the Pythagorean theorem in \(n\) dimensions. Thus, it can be used as a measure of the length of a vector or even as a measure of the distance between two vectors.

+
+
+

We can express the MSE as a squared L2 norm if we rewrite it in terms of the prediction vector, \(\hat{\mathbb{Y}}\), and true target vector, \(\mathbb{Y}\):

+

\[R(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 = \frac{1}{n} (||\mathbb{Y} - \hat{\mathbb{Y}}||_2)^2\]

+

Here, the superscript 2 outside of the parentheses means that we are squaring the norm. If we plug in our linear model \(\hat{\mathbb{Y}} = \mathbb{X} \theta\), we find the MSE cost function in vector notation:

+

\[R(\theta) = \frac{1}{n} (||\mathbb{Y} - \mathbb{X} \theta||_2)^2\]

+

Under the linear algebra perspective, our new task is to fit the optimal parameter vector \(\theta\) such that the cost function is minimized. Equivalently, we wish to minimize the norm \[||\mathbb{Y} - \mathbb{X} \theta||_2 = ||\mathbb{Y} - \hat{\mathbb{Y}}||_2.\]
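Continuing the earlier sketch (using the X and Y arrays built above and the hypothetical theta from before), the summation form and the squared-norm form of the MSE agree:

residuals = Y - X @ theta
mse_sum  = np.mean(residuals ** 2)                    # average of squared residuals
mse_norm = (np.linalg.norm(residuals) ** 2) / len(Y)  # squared L2 norm divided by n
# mse_sum and mse_norm are equal, up to floating-point error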

+

We can restate this goal in two ways:

+
    +
  • Minimize the distance between the vector of true values, \(\mathbb{Y}\), and the vector of predicted values, \(\mathbb{\hat{Y}}\)
  • +
  • Minimize the length of the residual vector, defined as: \[e = \mathbb{Y} - \mathbb{\hat{Y}} = \begin{bmatrix} y_1 - \hat{y}_1 \\ y_2 - \hat{y}_2 \\ \vdots \\ y_n - \hat{y}_n \end{bmatrix}\]
  • +
+
+
+

12.1.4 A Note on Terminology for Multiple Linear Regression

+

There are several equivalent terms in the context of regression. The ones we use most often for this course are bolded.

+
    +
  • \(x\) can be called a +
      +
    • Feature(s)
    • +
    • Covariate(s)
    • +
    • Independent variable(s)
    • +
    • Explanatory variable(s)
    • +
    • Predictor(s)
    • +
    • Input(s)
    • +
    • Regressor(s)
    • +
  • +
  • \(y\) can be called an +
      +
    • Output
    • +
    • Outcome
    • +
    • Response
    • +
    • Dependent variable
    • +
  • +
  • \(\hat{y}\) can be called a +
      +
    • Prediction
    • +
    • Predicted response
    • +
    • Estimated value
    • +
  • +
  • \(\theta\) can be called a +
      +
    • Weight(s)
    • +
    • Parameter(s)
    • +
    • Coefficient(s)
    • +
  • +
  • \(\hat{\theta}\) can be called a +
      +
    • Estimator(s)
    • +
    • Optimal parameter(s)
    • +
  • +
  • A datapoint \((x, y)\) is also called an observation.
  • +
+
+
+
+

12.2 Geometric Derivation

+
+
+
+ +
+
+Linear Algebra: Span +
+
+
+

Recall that the span or column space of a matrix \(\mathbb{X}\) (denoted \(span(\mathbb{X})\)) is the set of all possible linear combinations of the matrix’s columns. In other words, the span represents every point in space that could possibly be reached by adding and scaling some combination of the matrix columns. Additionally, if each column of \(\mathbb{X}\) has length \(n\), \(span(\mathbb{X})\) is a subspace of \(\mathbb{R}^{n}\).

+
+
+
+
+
+ +
+
+Linear Algebra: Matrix-Vector Multiplication +
+
+
+

There are two ways we can think about matrix-vector multiplication:

+
    +
  1. So far, we’ve thought of our model as horizontally stacked predictions per datapoint +
    + + + + +
    +row_col +
    +
  2. +
  3. However, it is sometimes helpful to think of matrix-vector multiplication as performed by columns. We can also think of \(\hat{\mathbb{Y}}\) as a linear combination of feature vectors, scaled by parameters. +
    + + + + +
    +row_col +
    +
  4. +
+
+
+

Up until now, we’ve mostly thought of our model as a scalar product between horizontally stacked observations and the parameter vector. We can also think of \(\hat{\mathbb{Y}}\) as a linear combination of feature vectors, scaled by the parameters. We use the notation \(\mathbb{X}_{:, i}\) to denote the \(i\)th column of the design matrix. You can think of this as following the same convention as used when calling .iloc and .loc. “:” means that we are taking all entries in the \(i\)th column.

+
+ + + + +
+columns +
+
+

\[\hat{\mathbb{Y}} = \theta_0 \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} + \theta_1 \begin{bmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{n1} \end{bmatrix} + \ldots + \theta_p \begin{bmatrix} x_{1p} \\ x_{2p} \\ \vdots \\ x_{np} \end{bmatrix} = \theta_0 \mathbb{X}_{:,\:1} + \theta_1 \mathbb{X}_{:,\:2} + \ldots + \theta_p \mathbb{X}_{:,\:p+1}\]
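A quick numerical check of this column view (a sketch, reusing the X array and hypothetical theta from the earlier sketches):

by_rows = X @ theta                                           # one dot product per observation
by_cols = sum(theta[j] * X[:, j] for j in range(X.shape[1]))  # scaled sum of the columns
np.allclose(by_rows, by_cols)                                 # True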

+

This new approach is useful because it allows us to take advantage of the properties of linear combinations.

+

Because the prediction vector, \(\hat{\mathbb{Y}} = \mathbb{X} \theta\), is a linear combination of the columns of \(\mathbb{X}\), we know that the predictions are contained in the span of \(\mathbb{X}\). That is, we know that \(\mathbb{\hat{Y}} \in \text{Span}(\mathbb{X})\).

+

The diagram below is a simplified view of \(\text{Span}(\mathbb{X})\), assuming that each column of \(\mathbb{X}\) has length \(n\). Notice that the columns of \(\mathbb{X}\) define a subspace of \(\mathbb{R}^n\), where each point in the subspace can be reached by a linear combination of \(\mathbb{X}\)’s columns. The prediction vector \(\mathbb{\hat{Y}}\) lies somewhere in this subspace.

+
+ + + + +
+span +
+
+

Examining this diagram, we find a problem. The vector of true values, \(\mathbb{Y}\), could theoretically lie anywhere in \(\mathbb{R}^n\) space – its exact location depends on the data we collect out in the real world. However, our multiple linear regression model can only make predictions in the subspace of \(\mathbb{R}^n\) spanned by \(\mathbb{X}\). Remember the model fitting goal we established in the previous section: we want to generate predictions such that the distance between the vector of true values, \(\mathbb{Y}\), and the vector of predicted values, \(\mathbb{\hat{Y}}\), is minimized. This means that we want \(\mathbb{\hat{Y}}\) to be the vector in \(\text{Span}(\mathbb{X})\) that is closest to \(\mathbb{Y}\).

+

Another way of rephrasing this goal is to say that we wish to minimize the length of the residual vector \(e\), as measured by its \(L_2\) norm.

+
+ + + + +
+residual +
+
+

The vector in \(\text{Span}(\mathbb{X})\) that is closest to \(\mathbb{Y}\) is always the orthogonal projection of \(\mathbb{Y}\) onto \(\text{Span}(\mathbb{X}).\) Thus, we should choose the parameter vector \(\theta\) that makes the residual vector orthogonal to any vector in \(\text{Span}(\mathbb{X})\). You can visualize this as the vector created by dropping a perpendicular line from \(\mathbb{Y}\) onto the span of \(\mathbb{X}\).

+
+
+
+ +
+
+Linear Algebra: Orthogonality +
+
+
+

Recall that two vectors \(\vec{a}\) and \(\vec{b}\) are orthogonal if their dot product is zero: \(\vec{a}^{T}\vec{b} = 0\).

+

A vector \(v\) is orthogonal to the span of a matrix \(M\) if and only if \(v\) is orthogonal to each column in \(M\). Put together, a vector \(v\) is orthogonal to \(\text{Span}(M)\) if:

+

\[M^Tv = \vec{0}\]

+

Note that \(\vec{0}\) represents the zero vector, a \(d\)-length vector full of 0s.

+
+
+

Remember our goal is to find \(\hat{\theta}\) such that we minimize the objective function \(R(\theta)\). Equivalently, this is the \(\hat{\theta}\) such that the residual vector \(e = \mathbb{Y} - \mathbb{X} \hat{\theta}\) is orthogonal to \(\text{Span}(\mathbb{X})\).

+

Looking at the definition of orthogonality of \(\mathbb{Y} - \mathbb{X}\hat{\theta}\) to \(span(\mathbb{X})\), we can write: \[\mathbb{X}^T (\mathbb{Y} - \mathbb{X}\hat{\theta}) = \vec{0}\]

+

Let’s then rearrange the terms: \[\mathbb{X}^T \mathbb{Y} - \mathbb{X}^T \mathbb{X} \hat{\theta} = \vec{0}\]

+

And finally, we end up with the normal equation: \[\mathbb{X}^T \mathbb{X} \hat{\theta} = \mathbb{X}^T \mathbb{Y}\]

+

Any vector \(\theta\) that minimizes MSE on a dataset must satisfy this equation.

+

If \(\mathbb{X}^T \mathbb{X}\) is invertible, we can conclude: \[\hat{\theta} = (\mathbb{X}^T \mathbb{X})^{-1} \mathbb{X}^T \mathbb{Y}\]

+

This is called the least squares estimate of \(\theta\): it is the value of \(\theta\) that minimizes the squared loss.

+

Note that the least squares estimate was derived under the assumption that \(\mathbb{X}^T \mathbb{X}\) is invertible. This condition holds true when \(\mathbb{X}^T \mathbb{X}\) is full column rank, which, in turn, happens when \(\mathbb{X}\) is full column rank. The proof for why \(\mathbb{X}\) needs to be full column rank is optional and in the Bonus section at the end.
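Here is a minimal sketch of computing the least squares estimate for the NBA example, assuming the X and Y arrays from the earlier sketch and that \(\mathbb{X}\) is full column rank:

# Solve the normal equation X^T X theta_hat = X^T Y
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Mathematically equivalent, but numerically less stable:
# theta_hat = np.linalg.inv(X.T @ X) @ X.T @ Y

Y_hat = X @ theta_hat  # fitted values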

+
+
+

12.3 Evaluating Model Performance

+

Our geometric view of multiple linear regression has taken us far! We have identified the optimal set of parameter values to minimize MSE in a model of multiple features. Now, we want to understand how well our fitted model performs.

+
+

12.3.1 RMSE

+

One measure of model performance is the Root Mean Squared Error, or RMSE. The RMSE is simply the square root of MSE. Taking the square root converts the value back into the original, non-squared units of \(y_i\), which is useful for understanding the model’s performance. A low RMSE indicates more “accurate” predictions – that there is a lower average loss across the dataset.

+

\[\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2}\]
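Continuing the sketch above (Y and Y_hat from the fitted NBA model), the RMSE is one line of NumPy:

rmse = np.sqrt(np.mean((Y - Y_hat) ** 2))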

+
+
+

12.3.2 Residual Plots

+

When working with SLR, we generated plots of the residuals against a single feature to understand the behavior of residuals. When working with several features in multiple linear regression, it no longer makes sense to consider a single feature in our residual plots. Instead, multiple linear regression is evaluated by making plots of the residuals against the predicted values. As was the case with SLR, a multiple linear model performs well if its residual plot shows no patterns.
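As a minimal sketch (matplotlib assumed, along with Y and Y_hat from the fitted NBA model above), such a residual plot can be drawn like this:

import matplotlib.pyplot as plt

plt.scatter(Y_hat, Y - Y_hat, s=10)          # residuals against predicted values
plt.axhline(0, color="black", linewidth=1)   # reference line at zero residual
plt.xlabel("Predicted PTS")
plt.ylabel("Residual")
plt.show()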

+
+ + + + +
+residual_plot +
+
+
+
+

12.3.3 Multiple \(R^2\)

+

For SLR, we used the correlation coefficient to capture the association between the target variable and a single feature variable. In a multiple linear model setting, we will need a performance metric that can account for multiple features at once. Multiple \(R^2\), also called the coefficient of determination, is the ratio of the variance of our fitted values (predictions) \(\hat{y}_i\) to the variance of our true values \(y_i\). It ranges from 0 to 1 and is effectively the proportion of variance in the observations that the model explains.

+

\[R^2 = \frac{\text{variance of } \hat{y}_i}{\text{variance of } y_i} = \frac{\sigma^2_{\hat{y}}}{\sigma^2_y}\]
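Continuing the sketch (Y and Y_hat from the fitted NBA model), the multiple \(R^2\) is simply this variance ratio:

r_squared = np.var(Y_hat) / np.var(Y)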

+

Note that for OLS with an intercept term, for example \(\hat{y} = \theta_0 + \theta_1x_1 + \theta_2x_2 + \cdots + \theta_px_p\), \(R^2\) is equal to the square of the correlation between \(y\) and \(\hat{y}\). On the other hand, for SLR, \(R^2\) is equal to \(r^2\), the square of the correlation between \(x\) and \(y\). The proof of these last two properties is out of scope for this course.

+

Additionally, as we add more features, our fitted values tend to become closer and closer to our actual values. Thus, \(R^2\) increases.

+

Adding more features doesn’t always mean our model is better though! We’ll see why later in the course.

+
+
+
+

12.4 OLS Properties

+
    +
  1. When using the optimal parameter vector, our residuals \(e = \mathbb{Y} - \hat{\mathbb{Y}}\) are orthogonal to \(span(\mathbb{X})\).
  2. +
+

\[\mathbb{X}^Te = 0 \]
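A quick numerical check of this property, continuing with the fitted sketch from above:

e = Y - Y_hat
print(X.T @ e)  # every entry is approximately 0, up to floating-point error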

+
+
+
+ +
+
+

Proof:

+
    +
  • The optimal parameter vector, \(\hat{\theta}\), solves the normal equations \(\implies \hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\)
  • +
+

\[\mathbb{X}^Te = \mathbb{X}^T (\mathbb{Y} - \mathbb{\hat{Y}}) \]

+

\[\mathbb{X}^T (\mathbb{Y} - \mathbb{X}\hat{\theta}) = \mathbb{X}^T\mathbb{Y} - \mathbb{X}^T\mathbb{X}\hat{\theta}\]

+
    +
  • Any invertible matrix multiplied by its inverse is the identity matrix \(\mathbb{I}\)
  • +
+

\[\mathbb{X}^T\mathbb{Y} - (\mathbb{X}^T\mathbb{X})(\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y} = \mathbb{X}^T\mathbb{Y} - \mathbb{X}^T\mathbb{Y} = 0\]

+
+
+
+
    +
  1. For all linear models with an intercept term, the sum of residuals is zero.
  2. +
+

\[\sum_i^n e_i = 0\]

+
+
+
+ +
+
+

Proof:

+
    +
  • For all linear models with an intercept term, the average of the predicted \(y\) values is equal to the average of the true \(y\) values. \[\bar{y} = \bar{\hat{y}}\]
  • +
  • Rewriting the sum of residuals as two separate sums, \[\sum_i^n e_i = \sum_i^n y_i - \sum_i^n\hat{y}_i\]
  • +
  • Each sum equals \(n\) times the corresponding average, and the two averages are equal by the first point. \[\sum_i^n e_i = n\bar{y} - n\bar{\hat{y}} = n(\bar{y} - \bar{\hat{y}}) = 0\]
  • +
+
+
+
+

To summarize:

+ ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelEstimateUnique?
Constant Model + MSE\(\hat{y} = \theta_0\)\(\hat{\theta}_0 = mean(y) = \bar{y}\)Yes. Any set of values has a unique mean.
Constant Model + MAE\(\hat{y} = \theta_0\)\(\hat{\theta}_0 = median(y)\)Yes, if the number of data points is odd. No, if even (convention: return the average of the middle 2 values).
Simple Linear Regression + MSE\(\hat{y} = \theta_0 + \theta_1x\)\(\hat{\theta}_0 = \bar{y} - \hat{\theta}_1\bar{x}\) \(\hat{\theta}_1 = r\frac{\sigma_y}{\sigma_x}\)Yes. Any set of non-constant* values has a unique mean, SD, and correlation coefficient.
OLS (Linear Model + MSE)\(\mathbb{\hat{Y}} = \mathbb{X}\mathbb{\theta}\)\(\hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\)Yes, if \(\mathbb{X}\) is full column rank (all columns are linearly independent, # of datapoints >>> # of features).
+
+
+

12.5 Bonus: Uniqueness of the Solution

+

The Least Squares estimate \(\hat{\theta}\) is unique if and only if \(\mathbb{X}\) is full column rank.

+
+
+
+ +
+
+

Proof:

+
    +
  • We know the solution to the normal equation \(\mathbb{X}^T\mathbb{X}\hat{\theta} = \mathbb{X}^T\mathbb{Y}\) is the least squares estimate that minimizes the squared loss.
  • +
  • \(\hat{\theta}\) has a unique solution \(\iff\) the square matrix \(\mathbb{X}^T\mathbb{X}\) is invertible \(\iff\) \(\mathbb{X}^T\mathbb{X}\) is full rank. +
      +
    • The column rank of a matrix is the maximum number of linearly independent columns it contains.
    • +
    • An \(n\) x \(n\) square matrix is deemed full column rank when all of its columns are linearly independent. That is, its rank would be equal to \(n\).
    • +
    • \(\mathbb{X}^T\mathbb{X}\) has shape \((p + 1) \times (p + 1)\), and therefore has max rank \(p + 1\).
    • +
  • +
  • \(rank(\mathbb{X}^T\mathbb{X})\) = \(rank(\mathbb{X})\) (proof out of scope).
  • +
  • Therefore, \(\mathbb{X}^T\mathbb{X}\) has rank \(p + 1\) \(\iff\) \(\mathbb{X}\) has rank \(p + 1\) \(\iff \mathbb{X}\) is full column rank.
  • +
+
+
+
+

Therefore, if \(\mathbb{X}\) is not full column rank, we will not have unique estimates. This can happen for two major reasons.

+
    +
  1. If our design matrix \(\mathbb{X}\) is “wide”: +
      +
    • If n < p, then we have way more features (columns) than observations (rows).
    • +
    • Then \(rank(\mathbb{X}) \leq \min(n, p+1) = n < p+1\), so \(\hat{\theta}\) is not unique.
    • +
    • Typically we have n >> p so this is less of an issue.
    • +
  2. +
  3. If our design matrix \(\mathbb{X}\) has features that are linear combinations of other features: +
      +
    • By definition, rank of \(\mathbb{X}\) is number of linearly independent columns in \(\mathbb{X}\).
    • +
    • Example: If “Width”, “Height”, and “Perimeter” are all columns, +
        +
      • Perimeter = 2 * Width + 2 * Height \(\rightarrow\) \(\mathbb{X}\) is not full rank (see the sketch after this list).
      • +
    • +
    • Important with one-hot encoding (to discuss later).
    • +
  4. +
+ + + + +
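To illustrate the second case, here is a small sketch (NumPy assumed; the numbers are invented) of a design matrix whose Perimeter column is exactly 2 * Width + 2 * Height:

import numpy as np

# Columns: bias, Width, Height, Perimeter (= 2*Width + 2*Height)
W = np.array([[1.0, 2.0, 3.0, 10.0],
              [1.0, 1.0, 4.0, 10.0],
              [1.0, 5.0, 6.0, 22.0],
              [1.0, 3.0, 2.0, 10.0]])

np.linalg.matrix_rank(W)  # 3, not 4, so W is not full column rank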
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/pandas_1/images/df_elections.png b/docs/pandas_1/images/df_elections.png new file mode 100644 index 000000000..224087bf6 Binary files /dev/null and b/docs/pandas_1/images/df_elections.png differ diff --git a/docs/pandas_1/images/locgraphic.png b/docs/pandas_1/images/locgraphic.png new file mode 100644 index 000000000..b37e8422e Binary files /dev/null and b/docs/pandas_1/images/locgraphic.png differ diff --git a/docs/pandas_1/images/non-uniqueindex.png b/docs/pandas_1/images/non-uniqueindex.png new file mode 100644 index 000000000..64ab25a3e Binary files /dev/null and b/docs/pandas_1/images/non-uniqueindex.png differ diff --git a/docs/pandas_1/images/row_col.png b/docs/pandas_1/images/row_col.png new file mode 100644 index 000000000..f9e5faded Binary files /dev/null and b/docs/pandas_1/images/row_col.png differ diff --git a/docs/pandas_1/images/uniqueindex.png b/docs/pandas_1/images/uniqueindex.png new file mode 100644 index 000000000..e95341f30 Binary files /dev/null and b/docs/pandas_1/images/uniqueindex.png differ diff --git a/docs/pandas_1/pandas_1.html b/docs/pandas_1/pandas_1.html new file mode 100644 index 000000000..0b7682332 --- /dev/null +++ b/docs/pandas_1/pandas_1.html @@ -0,0 +1,2724 @@ + + + + + + + + + +2  Pandas I – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

2  Pandas I

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Build familiarity with pandas and pandas syntax.
  • +
  • Learn key data structures: DataFrame, Series, and Index.
  • +
  • Understand methods for extracting data: .loc, .iloc, and [].
  • +
+
+
+
+

In this sequence of lectures, we will dive right into things by having you explore and manipulate real-world data. We’ll first introduce pandas, a popular Python library for interacting with tabular data.

+
+

2.1 Tabular Data

+

Data scientists work with data stored in a variety of formats. This class focuses primarily on tabular data — data that is stored in a table.

+

Tabular data is one of the most common systems that data scientists use to organize data. This is in large part due to the simplicity and flexibility of tables. Tables allow us to represent each observation, or instance of collecting data from an individual, as its own row. We can record each observation’s distinct characteristics, or features, in separate columns.

+

To see this in action, we’ll explore the elections dataset, which stores information about political candidates who ran for president of the United States in previous years.

+

In the elections dataset, each row (blue box) represents one instance of a candidate running for president in a particular year. For example, the first row represents Andrew Jackson running for president in the year 1824. Each column (yellow box) represents one characteristic piece of information about each presidential candidate. For example, the column named “Result” stores whether or not the candidate won the election.

+
+ +
+

Your work in Data 8 helped you grow very familiar with using and interpreting data stored in a tabular format. Back then, you used the Table class of the datascience library, a special programming library created specifically for Data 8 students.

+

In Data 100, we will be working with the programming library pandas, which is generally accepted in the data science community as the industry- and academia-standard tool for manipulating tabular data (as well as the inspiration for Petey, our panda bear mascot).

+

Using pandas, we can

+
    +
  • Arrange data in a tabular format.
  • +
  • Extract useful information filtered by specific conditions.
  • +
  • Operate on data to gain new insights.
  • +
  • Apply NumPy functions to our data (our friends from Data 8).
  • +
  • Perform vectorized computations to speed up our analysis (Lab 1).
  • +
+
+
+

2.2 Series, DataFrames, and Indices

+

To begin our work in pandas, we must first import the library into our Python environment. This will allow us to use pandas data structures and methods in our code.

+
+
# `pd` is the conventional alias for Pandas, as `np` is for NumPy
+import pandas as pd
+
+

There are three fundamental data structures in pandas:

+
    +
  1. Series: 1D labeled array data; best thought of as columnar data.
  2. +
  3. DataFrame: 2D tabular data with rows and columns.
  4. +
  5. Index: A sequence of row/column labels.
  6. +
+

DataFrames, Series, and Indices can be represented visually in the following diagram, which considers the first few rows of the elections dataset.

+
+ +
+

Notice how the DataFrame is a two-dimensional object — it contains both rows and columns. The Series above is a singular column of this DataFrame, namely the Result column. Both contain an Index, or a shared list of row labels (the integers from 0 to 4, inclusive).

+
+

2.2.1 Series

+

A Series represents a column of a DataFrame; more generally, it can be any 1-dimensional array-like object. It contains both:

+
    +
  • A sequence of values of the same type.
  • +
  • A sequence of data labels called the index.
  • +
+

In the cell below, we create a Series named s.

+
+
s = pd.Series(["welcome", "to", "data 100"])
+s
+
+
0     welcome
+1          to
+2    data 100
+dtype: object
+
+
+
+
 # Accessing data values within the Series
+ s.values
+
+
array(['welcome', 'to', 'data 100'], dtype=object)
+
+
+
+
 # Accessing the Index of the Series
+ s.index
+
+
RangeIndex(start=0, stop=3, step=1)
+
+
+

By default, the index of a Series is a sequential list of integers beginning from 0. Optionally, a manually specified list of desired indices can be passed to the index argument.

+
+
s = pd.Series([-1, 10, 2], index = ["a", "b", "c"])
+s
+
+
a    -1
+b    10
+c     2
+dtype: int64
+
+
+
+
s.index
+
+
Index(['a', 'b', 'c'], dtype='object')
+
+
+

Indices can also be changed after initialization.

+
+
s.index = ["first", "second", "third"]
+s
+
+
first     -1
+second    10
+third      2
+dtype: int64
+
+
+
+
s.index
+
+
Index(['first', 'second', 'third'], dtype='object')
+
+
+
+

2.2.1.1 Selection in Series

+

Much like when working with NumPy arrays, we can select a single value or a set of values from a Series. To do so, there are three primary methods:

+
    +
  1. A single label.
  2. +
  3. A list of labels.
  4. +
  5. A filtering condition.
  6. +
+

To demonstrate this, let’s define the Series ser.

+
+
ser = pd.Series([4, -2, 0, 6], index = ["a", "b", "c", "d"])
+ser
+
+
a    4
+b   -2
+c    0
+d    6
+dtype: int64
+
+
+
+
2.2.1.1.1 A Single Label
+
+
# We return the value stored at the index label "a"
+ser["a"] 
+
+
np.int64(4)
+
+
+
+
+
2.2.1.1.2 A List of Labels
+
+
# We return a Series of the values stored at the index labels "a" and "c"
+ser[["a", "c"]] 
+
+
a    4
+c    0
+dtype: int64
+
+
+
+
+
2.2.1.1.3 A Filtering Condition
+

Perhaps the most interesting (and useful) method of selecting data from a Series is by using a filtering condition.

+

First, we apply a boolean operation to the Series. This creates a new Series of boolean values.

+
+
# Filter condition: select all elements greater than 0
+ser > 0 
+
+
a     True
+b    False
+c    False
+d     True
+dtype: bool
+
+
+

We then use this boolean condition to index into our original Series. pandas will select only the entries in the original Series that satisfy the condition.

+
+
ser[ser > 0] 
+
+
a    4
+d    6
+dtype: int64
+
+
+
+
+
+
+

2.2.2 DataFrames

+

Typically, we will work with Series using the perspective that they are columns in a DataFrame. We can think of a DataFrame as a collection of Series that all share the same Index.

+

In Data 8, you encountered the Table class of the datascience library, which represented tabular data. In Data 100, we’ll be using the DataFrame class of the pandas library.

+
+

2.2.2.1 Creating a DataFrame

+

There are many ways to create a DataFrame. Here, we will cover the most popular approaches:

+
    +
  1. From a CSV file.
  2. +
  3. Using a list and column name(s).
  4. +
  5. From a dictionary.
  6. +
  7. From a Series.
  8. +
+

More generally, the syntax for creating a DataFrame is:

+
 pandas.DataFrame(data, index, columns)
+
+
2.2.2.1.1 From a CSV file
+

In Data 100, our data are typically stored in a CSV (comma-separated values) file format. We can import a CSV file into a DataFrame by passing the data path as an argument to the following pandas function.
pd.read_csv("filename.csv")

+

With our new understanding of pandas in hand, let’s return to the elections dataset from before. Now, we can recognize that it is represented as a pandas DataFrame.

+
+
elections = pd.read_csv("data/elections.csv")
+elections
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
41832Andrew JacksonDemocratic702735win54.574789
.....................
1772016Jill SteinGreen1457226loss1.073699
1782020Joseph BidenDemocratic81268924win51.311515
1792020Donald TrumpRepublican74216154loss46.858542
1802020Jo JorgensenLibertarian1865724loss1.177979
1812020Howard HawkinsGreen405035loss0.255731
+ +

182 rows × 6 columns

+
+
+
+

This code stores our DataFrame object in the elections variable. Upon inspection, our elections DataFrame has 182 rows and 6 columns (Year, Candidate, Party, Popular Vote, Result, %). Each row represents a single record — in our example, a presidential candidate from some particular year. Each column represents a single attribute or feature of the record.

+
+
+
2.2.2.1.2 Using a List and Column Name(s)
+

We’ll now explore creating a DataFrame with data of our own.

+

Consider the following examples. The first code cell creates a DataFrame with a single column Numbers.

+
+
df_list = pd.DataFrame([1, 2, 3], columns=["Numbers"])
+df_list
+
+
+ + + + + + + + + + + + + + + + + + + + + + + +
Numbers
01
12
23
+ +
+
+
+

The second creates a DataFrame with the columns Number and Description. Notice how a 2D list of values is required to initialize the second DataFrame — each nested list represents a single row of data.

+
+
df_list = pd.DataFrame([[1, "one"], [2, "two"]], columns = ["Number", "Description"])
+df_list
+
+
+ + + + + + + + + + + + + + + + + + + + + + +
NumberDescription
01one
12two
+ +
+
+
+
+
+
2.2.2.1.3 From a Dictionary
+

A third (and more common) way to create a DataFrame is with a dictionary. The dictionary keys represent the column names, and the dictionary values represent the column values.

+

Below are two ways of implementing this approach. The first is based on specifying the columns of the DataFrame, whereas the second is based on specifying the rows of the DataFrame.

+
+
df_dict = pd.DataFrame({
+    "Fruit": ["Strawberry", "Orange"], 
+    "Price": [5.49, 3.99]
+})
+df_dict
+
+
+ + + + + + + + + + + + + + + + + + + + + + +
FruitPrice
0Strawberry5.49
1Orange3.99
+ +
+
+
+
+
df_dict = pd.DataFrame(
+    [
+        {"Fruit":"Strawberry", "Price":5.49}, 
+        {"Fruit": "Orange", "Price":3.99}
+    ]
+)
+df_dict
+
+
+ + + + + + + + + + + + + + + + + + + + + + +
FruitPrice
0Strawberry5.49
1Orange3.99
+ +
+
+
+
+
+
2.2.2.1.4 From a Series
+

Earlier, we explained how a Series is synonymous with a column in a DataFrame. It follows, then, that a DataFrame is equivalent to a collection of Series, which all share the same Index.

+

In fact, we can initialize a DataFrame by merging two or more Series. Consider the Series s_a and s_b.

+
+
# Notice how our indices, or row labels, are the same
+
+s_a = pd.Series(["a1", "a2", "a3"], index = ["r1", "r2", "r3"])
+s_b = pd.Series(["b1", "b2", "b3"], index = ["r1", "r2", "r3"])
+
+

We can turn individual Series into a DataFrame using two common methods (shown below):

+
+
pd.DataFrame(s_a)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + +
0
r1a1
r2a2
r3a3
+ +
+
+
+
+
s_b.to_frame()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + +
0
r1b1
r2b2
r3b3
+ +
+
+
+

To merge the two Series and specify their column names, we use the following syntax:

+
+
pd.DataFrame({
+    "A-column": s_a, 
+    "B-column": s_b
+})
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
A-columnB-column
r1a1b1
r2a2b2
r3a3b3
+ +
+
+
+
+
+
+
+

2.2.3 Indices

+

On a more technical note, an index doesn’t have to be an integer, nor does it have to be unique. For example, we can set the index of the elections DataFrame to be the name of presidential candidates.

+
+
# Creating a DataFrame from a CSV file and specifying the index column
+elections = pd.read_csv("data/elections.csv", index_col = "Candidate")
+elections
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearPartyPopular voteResult%
Candidate
Andrew Jackson1824Democratic-Republican151271loss57.210122
John Quincy Adams1824Democratic-Republican113142win42.789878
Andrew Jackson1828Democratic642806win56.203927
John Quincy Adams1828National Republican500897loss43.796073
Andrew Jackson1832Democratic702735win54.574789
..................
Jill Stein2016Green1457226loss1.073699
Joseph Biden2020Democratic81268924win51.311515
Donald Trump2020Republican74216154loss46.858542
Jo Jorgensen2020Libertarian1865724loss1.177979
Howard Hawkins2020Green405035loss0.255731
+ +

182 rows × 5 columns

+
+
+
+

We can also select a new column and set it as the index of the DataFrame. For example, we can set the index of the elections DataFrame to represent the candidate’s party.

+
+
elections.reset_index(inplace = True) # Resetting the index so we can set it again
+# This sets the index to the "Party" column
+elections.set_index("Party")
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CandidateYearPopular voteResult%
Party
Democratic-RepublicanAndrew Jackson1824151271loss57.210122
Democratic-RepublicanJohn Quincy Adams1824113142win42.789878
DemocraticAndrew Jackson1828642806win56.203927
National RepublicanJohn Quincy Adams1828500897loss43.796073
DemocraticAndrew Jackson1832702735win54.574789
..................
GreenJill Stein20161457226loss1.073699
DemocraticJoseph Biden202081268924win51.311515
RepublicanDonald Trump202074216154loss46.858542
LibertarianJo Jorgensen20201865724loss1.177979
GreenHoward Hawkins2020405035loss0.255731
+ +

182 rows × 5 columns

+
+
+
+

And, if we’d like, we can revert the index back to the default list of integers.

+
+
# This resets the index to be the default list of integers
+elections.reset_index(inplace=True) 
+elections.index
+
+
RangeIndex(start=0, stop=182, step=1)
+
+
+

It is also important to note that the row labels that constitute an index don’t have to be unique. While index values can be unique and numeric, acting as a row number, they can also be named and non-unique.

+

Here we see unique and numeric index values.

+
+ +
+

However, here the index values are not unique.

+
+ +
+
+
+
+

2.3 DataFrame Attributes: Index, Columns, and Shape

+

On the other hand, column names in a DataFrame are almost always unique. Looking back to the elections dataset, it wouldn’t make sense to have two columns named "Candidate". Sometimes, you’ll want to extract these different values, in particular, the list of row and column labels.

+

For index/row labels, use DataFrame.index:

+
+
elections.set_index("Party", inplace = True)
+elections.index
+
+
Index(['Democratic-Republican', 'Democratic-Republican', 'Democratic',
+       'National Republican', 'Democratic', 'National Republican',
+       'Anti-Masonic', 'Whig', 'Democratic', 'Whig',
+       ...
+       'Constitution', 'Republican', 'Independent', 'Libertarian',
+       'Democratic', 'Green', 'Democratic', 'Republican', 'Libertarian',
+       'Green'],
+      dtype='object', name='Party', length=182)
+
+
+

For column labels, use DataFrame.columns:

+
+
elections.columns
+
+
Index(['index', 'Candidate', 'Year', 'Popular vote', 'Result', '%'], dtype='object')
+
+
+

And for the shape of the DataFrame, we can use DataFrame.shape to get the number of rows followed by the number of columns:

+
+
elections.shape
+
+
(182, 6)
+
+
+
+
+

2.4 Slicing in DataFrames

+

Now that we’ve learned more about DataFrames, let’s dive deeper into their capabilities.

+

The API (Application Programming Interface) for the DataFrame class is enormous. In this section, we’ll discuss several methods of the DataFrame API that allow us to extract subsets of data.

+

The simplest way to manipulate a DataFrame is to extract a subset of rows and columns, known as slicing.

+

Common ways we may want to extract data are grabbing:

+
    +
  • The first or last n rows in the DataFrame.
  • +
  • Data with a certain label.
  • +
  • Data at a certain position.
  • +
+

We will do so with four primary methods of the DataFrame class:

+
    +
  1. .head and .tail
  2. +
  3. .loc
  4. +
  5. .iloc
  6. +
  7. []
  8. +
+
+

2.4.1 Extracting data with .head and .tail

+

The simplest scenario in which we want to extract data is when we simply want to select the first or last few rows of the DataFrame.

+

To extract the first n rows of a DataFrame df, we use the syntax df.head(n).

+
+
+Code +
elections = pd.read_csv("data/elections.csv")
+
+
+
+
# Extract the first 5 rows of the DataFrame
+elections.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
41832Andrew JacksonDemocratic702735win54.574789
+ +
+
+
+

Similarly, calling df.tail(n) allows us to extract the last n rows of the DataFrame.

+
+
# Extract the last 5 rows of the DataFrame
+elections.tail(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
1772016Jill SteinGreen1457226loss1.073699
1782020Joseph BidenDemocratic81268924win51.311515
1792020Donald TrumpRepublican74216154loss46.858542
1802020Jo JorgensenLibertarian1865724loss1.177979
1812020Howard HawkinsGreen405035loss0.255731
+ +
+
+
+
+
+

2.4.2 Label-based Extraction: Indexing with .loc

+

For the more complex task of extracting data with specific column or index labels, we can use .loc. The .loc accessor allows us to specify the labels of rows and columns we wish to extract. The labels (commonly referred to as the indices) are the bold text on the far left of a DataFrame, while the column labels are the column names found at the top of a DataFrame.

+
+ +
+

To grab data with .loc, we must specify the row and column label(s) where the data exists. The row labels are the first argument to the .loc function; the column labels are the second.

+

Arguments to .loc can be:

+
    +
  • A single value.
  • +
  • A slice.
  • +
  • A list.
  • +
+

For example, to select a single value, we can select the row labeled 0 and the column labeled Candidate from the elections DataFrame.

+
+
elections.loc[0, 'Candidate']
+
+
'Andrew Jackson'
+
+
+

Keep in mind that when we pass a single label (rather than a list or slice) as the column argument, the result is a Series. Below, we've extracted a subset of the "Popular vote" column as a Series.

+
+
elections.loc[[87, 25, 179], "Popular vote"]
+
+
87     15761254
+25       848019
+179    74216154
+Name: Popular vote, dtype: int64
+
+
+

To select multiple rows and columns, we can use Python slice notation. Here, we select the rows from labels 0 to 3 and the columns from labels "Year" to "Popular vote". Notice that unlike Python slicing, .loc is inclusive of the right upper bound.

+
+
elections.loc[0:3, 'Year':'Popular vote']
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular vote
01824Andrew JacksonDemocratic-Republican151271
11824John Quincy AdamsDemocratic-Republican113142
21828Andrew JacksonDemocratic642806
31828John Quincy AdamsNational Republican500897
+ +
+
+
+

Suppose that instead, we want to extract all column values for the first four rows in the elections DataFrame. The shorthand : is useful for this.

+
+
elections.loc[0:3, :]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
+ +
+
+
+

We can use the same shorthand to extract all rows.

+
+
elections.loc[:, ["Year", "Candidate", "Result"]]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidateResult
01824Andrew Jacksonloss
11824John Quincy Adamswin
21828Andrew Jacksonwin
31828John Quincy Adamsloss
41832Andrew Jacksonwin
............
1772016Jill Steinloss
1782020Joseph Bidenwin
1792020Donald Trumploss
1802020Jo Jorgensenloss
1812020Howard Hawkinsloss
+ +

182 rows × 3 columns

+
+
+
+

There are a couple of things we should note. Firstly, unlike conventional Python, pandas allows us to slice string values (in our example, the column labels). Secondly, slicing with .loc is inclusive. Notice how our resulting DataFrame includes every row and column between and including the slice labels we specified.

+

Equivalently, we can use a list to obtain multiple rows and columns in our elections DataFrame.

+
+
elections.loc[[0, 1, 2, 3], ['Year', 'Candidate', 'Party', 'Popular vote']]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular vote
01824Andrew JacksonDemocratic-Republican151271
11824John Quincy AdamsDemocratic-Republican113142
21828Andrew JacksonDemocratic642806
31828John Quincy AdamsNational Republican500897
+ +
+
+
+

Lastly, we can interchange list and slicing notation.

+
+
elections.loc[[0, 1, 2, 3], :]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
+ +
+
+
+
+
+

2.4.3 Integer-based Extraction: Indexing with .iloc

+

Slicing with .iloc works similarly to .loc. However, .iloc uses the integer positions of rows and columns rather than the labels (think to yourself: .loc uses labels; .iloc uses integer positions). The arguments to the .iloc function also behave similarly — single values, lists, slices, and any combination of these are permitted.

+

Let’s begin reproducing our results from above. We’ll begin by selecting the first presidential candidate in our elections DataFrame:

+
+
# elections.loc[0, "Candidate"] - Previous approach
+elections.iloc[0, 1]
+
+
'Andrew Jackson'
+
+
+

Notice how the first argument to both .loc and .iloc are the same. This is because the row with a label of 0 is conveniently in the \(0^{\text{th}}\) (equivalently, the first position) of the elections DataFrame. Generally, this is true of any DataFrame where the row labels are incremented in ascending order from 0.

+

And, as before, if we pass a single integer as the column argument, our result is a Series.

+
+
elections.iloc[[1,2,3],1]
+
+
1    John Quincy Adams
+2       Andrew Jackson
+3    John Quincy Adams
+Name: Candidate, dtype: object
+
+
+

However, when we select the first four rows and columns using .iloc, we notice something.

+
+
# elections.loc[0:3, 'Year':'Popular vote'] - Previous approach
+elections.iloc[0:4, 0:4]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular vote
01824Andrew JacksonDemocratic-Republican151271
11824John Quincy AdamsDemocratic-Republican113142
21828Andrew JacksonDemocratic642806
31828John Quincy AdamsNational Republican500897
+ +
+
+
+

Slicing is no longer inclusive in .iloc — it’s exclusive. In other words, the right end of a slice is not included when using .iloc. This is one of the subtleties of pandas syntax; you will get used to it with practice.

+

List behavior works just as expected.

+
+
#elections.loc[[0, 1, 2, 3], ['Year', 'Candidate', 'Party', 'Popular vote']] - Previous Approach
+elections.iloc[[0, 1, 2, 3], [0, 1, 2, 3]]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular vote
01824Andrew JacksonDemocratic-Republican151271
11824John Quincy AdamsDemocratic-Republican113142
21828Andrew JacksonDemocratic642806
31828John Quincy AdamsNational Republican500897
+ +
+
+
+

And just like with .loc, we can use a colon with .iloc to extract all rows or columns.

+
+
elections.iloc[:, 0:3]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidateParty
01824Andrew JacksonDemocratic-Republican
11824John Quincy AdamsDemocratic-Republican
21828Andrew JacksonDemocratic
31828John Quincy AdamsNational Republican
41832Andrew JacksonDemocratic
............
1772016Jill SteinGreen
1782020Joseph BidenDemocratic
1792020Donald TrumpRepublican
1802020Jo JorgensenLibertarian
1812020Howard HawkinsGreen
+ +

182 rows × 3 columns

+
+
+
+

This discussion begs the question: when should we use .loc vs. .iloc? In most cases, .loc is generally safer to use. You can imagine .iloc may return incorrect values when applied to a dataset where the ordering of data can change. However, .iloc can still be useful — for example, if you are looking at a DataFrame of sorted movie earnings and want to get the median earnings for a given year, you can use .iloc to index into the middle.
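For example, here is a hypothetical sketch of that idea; the DataFrame, its column names, and its values are invented for illustration:

# A tiny DataFrame already sorted by earnings
movie_earnings = pd.DataFrame({
    "Movie": ["C", "A", "B"],
    "Earnings": [1.2, 3.4, 7.8]
})

# Grab the middle row by position -- its label doesn't matter
movie_earnings.iloc[len(movie_earnings) // 2]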

+

Overall, it is important to remember that:

+
    +
  • .loc performs label-based extraction.
  • +
  • .iloc performs integer-based extraction.
  • +
+
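The distinction matters most when row labels and row positions don't line up. Here is a short sketch using the elections DataFrame from earlier:

# Use candidate names as the row labels
elections_by_name = elections.set_index("Candidate")

elections_by_name.loc["Andrew Jackson"]  # label-based: every row labeled "Andrew Jackson"
elections_by_name.iloc[0]                # position-based: the first row, whatever its label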
+
+

2.4.4 Context-dependent Extraction: Indexing with []

+

The [] selection operator is the most baffling of all, yet the most commonly used. It only takes a single argument, which may be one of the following:

+
    +
  1. A slice of row numbers.
  2. +
  3. A list of column labels.
  4. +
  5. A single-column label.
  6. +
+

That is, [] is context-dependent. Let’s see some examples.

+
+

2.4.4.1 A slice of row numbers

+

Say we wanted the first four rows of our elections DataFrame.

+
+
elections[0:4]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
+ +
+
+
+
+
+

2.4.4.2 A list of column labels

+

Suppose we now want the first four columns.

+
+
elections[["Year", "Candidate", "Party", "Popular vote"]]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular vote
01824Andrew JacksonDemocratic-Republican151271
11824John Quincy AdamsDemocratic-Republican113142
21828Andrew JacksonDemocratic642806
31828John Quincy AdamsNational Republican500897
41832Andrew JacksonDemocratic702735
...............
1772016Jill SteinGreen1457226
1782020Joseph BidenDemocratic81268924
1792020Donald TrumpRepublican74216154
1802020Jo JorgensenLibertarian1865724
1812020Howard HawkinsGreen405035
+ +

182 rows × 4 columns

+
+
+
+
+
+

2.4.4.3 A single-column label

+

Lastly, [] allows us to extract only the "Candidate" column.

+
+
elections["Candidate"]
+
+
0         Andrew Jackson
+1      John Quincy Adams
+2         Andrew Jackson
+3      John Quincy Adams
+4         Andrew Jackson
+             ...        
+177           Jill Stein
+178         Joseph Biden
+179         Donald Trump
+180         Jo Jorgensen
+181       Howard Hawkins
+Name: Candidate, Length: 182, dtype: object
+
+
+

The output is a Series! In this course, we’ll become very comfortable with [], especially for selecting columns. In practice, [] is much more common than .loc, especially since it is far more concise.

+
+
+
+
+

2.5 Parting Note

+

The pandas library is enormous and contains many useful functions. Here is a link to its documentation. We certainly don’t expect you to memorize each and every method of the library, and we will give you a reference sheet for exams.

+

The introductory Data 100 pandas lectures will provide a high-level view of the key data structures and methods that will form the foundation of your pandas knowledge. A goal of this course is to help you build your familiarity with the real-world programming practice of … Googling! Answers to your questions can be found in documentation, Stack Overflow, etc. Being able to search for, read, and implement documentation is an important life skill for any data scientist.

+

With that, we will move on to Pandas II!

+ + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/pandas_2/pandas_2.html b/docs/pandas_2/pandas_2.html new file mode 100644 index 000000000..4c70c8486 --- /dev/null +++ b/docs/pandas_2/pandas_2.html @@ -0,0 +1,2369 @@ + + + + + + + + + +3  Pandas II – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

3  Pandas II

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Continue building familiarity with pandas syntax.
  • +
  • Extract data from a DataFrame using conditional selection.
  • +
  • Recognize situations where aggregation is useful and identify the correct technique for performing an aggregation.
  • +
+
+
+
+

Last time, we introduced the pandas library as a toolkit for processing data. We learned the DataFrame and Series data structures, familiarized ourselves with the basic syntax for manipulating tabular data, and began writing our first lines of pandas code.

+

In this lecture, we’ll start to dive into some advanced pandas syntax. You may find it helpful to follow along with a notebook of your own as we walk through these new pieces of code.

+

We’ll start by loading the babynames dataset.

+
+
+Code +
# This code pulls census data and loads it into a DataFrame
+# We won't cover it explicitly in this class, but you are welcome to explore it on your own
+import pandas as pd
+import numpy as np
+import urllib.request
+import os.path
+import zipfile
+
+data_url = "https://www.ssa.gov/oact/babynames/state/namesbystate.zip"
+local_filename = "data/babynamesbystate.zip"
+if not os.path.exists(local_filename): # If the data exists don't download again
+    with urllib.request.urlopen(data_url) as resp, open(local_filename, 'wb') as f:
+        f.write(resp.read())
+
+zf = zipfile.ZipFile(local_filename, 'r')
+
+ca_name = 'STATE.CA.TXT'
+field_names = ['State', 'Sex', 'Year', 'Name', 'Count']
+with zf.open(ca_name) as fh:
+    babynames = pd.read_csv(fh, header=None, names=field_names)
+
+babynames.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+
+

3.1 Conditional Selection

+

Conditional selection allows us to select a subset of rows in a DataFrame that satisfy some specified condition.

+

To understand how to use conditional selection, we must look at another possible input of the .loc and [] methods – a boolean array, which is simply an array or Series where each element is either True or False. This boolean array must have a length equal to the number of rows in the DataFrame. It will return all rows that correspond to a value of True in the array. We used a very similar technique when performing conditional extraction from a Series in the last lecture.

+

To see this in action, let’s select all even-indexed rows in the first 10 rows of our DataFrame.

+
+
# Ask yourself: why is :9 the correct slice to select the first 10 rows?
+babynames_first_10_rows = babynames.loc[:9, :]
+
+# Notice how we have exactly 10 elements in our boolean array argument
+babynames_first_10_rows[[True, False, True, False, True, False, True, False, True, False]]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
2CAF1910Dorothy220
4CAF1910Frances134
6CAF1910Evelyn126
8CAF1910Virginia101
+ +
+
+
+

We can perform a similar operation using .loc.

+
+
babynames_first_10_rows.loc[[True, False, True, False, True, False, True, False, True, False], :]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
2CAF1910Dorothy220
4CAF1910Frances134
6CAF1910Evelyn126
8CAF1910Virginia101
+ +
+
+
+

These techniques worked well in this example, but you can imagine how tedious it might be to list out True and False for every row in a larger DataFrame. To make things easier, we can instead provide a logical condition as an input to .loc or [] that returns a boolean array with the necessary length.

+

For example, to return all names associated with F sex:

+
+
# First, use a logical condition to generate a boolean array
+logical_operator = (babynames["Sex"] == "F")
+
+# Then, use this boolean array to filter the DataFrame
+babynames[logical_operator].head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+

Recall from the previous lecture that .head() will return only the first few rows in the DataFrame. In reality, babynames[logical_operator] contains as many rows as there are entries in the original babynames DataFrame with sex "F".

+

Here, logical_operator evaluates to a Series of boolean values with length 407428.

+
+
+Code +
print("There are a total of {} values in 'logical_operator'".format(len(logical_operator)))
+
+
+
There are a total of 407428 values in 'logical_operator'
+
+
+

Rows starting at row 0 and ending at row 239536 evaluate to True and are thus returned in the DataFrame. Rows from 239537 onwards evaluate to False and are omitted from the output.

+
+
+Code +
print("The 0th item in this 'logical_operator' is: {}".format(logical_operator.iloc[0]))
+print("The 239536th item in this 'logical_operator' is: {}".format(logical_operator.iloc[239536]))
+print("The 239537th item in this 'logical_operator' is: {}".format(logical_operator.iloc[239537]))
+
+
+
The 0th item in this 'logical_operator' is: True
+The 239536th item in this 'logical_operator' is: True
+The 239537th item in this 'logical_operator' is: False
+
+
+

Passing a Series as an argument to babynames[] has the same effect as using a boolean array. In fact, the [] selection operator can take a boolean Series, array, or list as an argument. These three are used interchangeably throughout the course.
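To see this interchangeability concretely, here is a small sketch (reusing babynames_first_10_rows from above) showing that a boolean list, a NumPy array, and a pandas Series all produce the same selection:

# Three equivalent boolean masks of length 10
mask_list = [True, False] * 5
mask_array = np.array(mask_list)
mask_series = pd.Series(mask_list)  # default index 0 through 9, matching the rows

out_list = babynames_first_10_rows[mask_list]
out_array = babynames_first_10_rows[mask_array]
out_series = babynames_first_10_rows[mask_series]
out_list.equals(out_array) and out_list.equals(out_series)  # True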

+

We can also use .loc to achieve similar results.

+
+
babynames.loc[babynames["Sex"] == "F"].head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+

Boolean conditions can be combined using various bitwise operators, allowing us to filter results by multiple conditions. In the table below, p and q are boolean arrays or Series.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Symbol   Usage   Meaning
~        ~p      Returns negation of p
|        p | q   p OR q
&        p & q   p AND q
^        p ^ q   p XOR q (exclusive or)
+

When combining multiple conditions with logical operators, we surround each individual condition with a set of parentheses (). This makes the order of operations explicit when pandas evaluates your logic and helps avoid errors.

+

For example, if we want to return data on all names with sex "F" born before the year 2000, we can write:

+
+
babynames[(babynames["Sex"] == "F") & (babynames["Year"] < 2000)].head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+

Note that we’re working with Series, so using and in place of &, or or in place of |, will raise an error.

+
+
# This line of code will raise a ValueError
+# babynames[(babynames["Sex"] == "F") and (babynames["Year"] < 2000)].head()
+
+
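If you would like to see the error for yourself without interrupting your notebook, a small sketch like the following wraps the expression in a try/except block:

try:
    babynames[(babynames["Sex"] == "F") and (babynames["Year"] < 2000)]
except ValueError as e:
    # `and` asks Python to reduce a boolean Series to a single True/False,
    # which pandas refuses to do
    print("ValueError:", e)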

If we want to return data on all names with sex "F" or all born before the year 2000, we can write:

+
+
babynames[(babynames["Sex"] == "F") | (babynames["Year"] < 2000)].head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+

Boolean array selection is a useful tool, but can lead to overly verbose code for complex conditions. In the example below, our boolean condition is long enough to extend for several lines of code.

+
+
# Note: The parentheses surrounding the code make it possible to break the code on to multiple lines for readability
+(
+    babynames[(babynames["Name"] == "Bella") | 
+              (babynames["Name"] == "Alex") |
+              (babynames["Name"] == "Ani") |
+              (babynames["Name"] == "Lisa")]
+).head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
6289CAF1923Bella5
7512CAF1925Bella8
12368CAF1932Lisa5
14741CAF1936Lisa8
17084CAF1939Lisa5
+ +
+
+
+

Fortunately, pandas provides many alternative methods for constructing boolean filters.

+

The .isin function is one such example. This method evaluates if the values in a Series are contained in a different sequence (list, array, or Series) of values. In the cell below, we achieve equivalent results to the DataFrame above with far more concise code.

+
+
names = ["Bella", "Alex", "Narges", "Lisa"]
+babynames["Name"].isin(names).head()
+
+
0    False
+1    False
+2    False
+3    False
+4    False
+Name: Name, dtype: bool
+
+
+
+
babynames[babynames["Name"].isin(names)].head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
6289CAF1923Bella5
7512CAF1925Bella8
12368CAF1932Lisa5
14741CAF1936Lisa8
17084CAF1939Lisa5
+ +
+
+
+

The function str.startswith can be used to define a filter based on string values in a Series object. It checks to see if string values in a Series start with a particular character.

+
+
# Identify whether names begin with the letter "N"
+babynames["Name"].str.startswith("N").head()
+
+
0    False
+1    False
+2    False
+3    False
+4    False
+Name: Name, dtype: bool
+
+
+
+
# Extracting names that begin with the letter "N"
+babynames[babynames["Name"].str.startswith("N")].head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
76CAF1910Norma23
83CAF1910Nellie20
127CAF1910Nina11
198CAF1910Nora6
310CAF1911Nellie23
+ +
+
+
+
+
+

3.2 Adding, Removing, and Modifying Columns

+

In many data science tasks, we may need to change the columns contained in our DataFrame in some way. Fortunately, the syntax to do so is fairly straightforward.

+

To add a new column to a DataFrame, we use a syntax similar to that used when accessing an existing column. Specify the name of the new column by writing df["column"], then assign this to a Series or array containing the values that will populate this column.

+
+
# Create a Series of the length of each name. 
+babyname_lengths = babynames["Name"].str.len()
+
+# Add a column named "name_lengths" that includes the length of each name
+babynames["name_lengths"] = babyname_lengths
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountname_lengths
0CAF1910Mary2954
1CAF1910Helen2395
2CAF1910Dorothy2207
3CAF1910Margaret1638
4CAF1910Frances1347
+ +
+
+
+

If we need to later modify an existing column, we can do so by referencing this column again with the syntax df["column"], then re-assigning it to a new Series or array of the appropriate length.

+
+
# Modify the “name_lengths” column to be one less than its original value
+babynames["name_lengths"] = babynames["name_lengths"] - 1
+babynames.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountname_lengths
0CAF1910Mary2953
1CAF1910Helen2394
2CAF1910Dorothy2206
3CAF1910Margaret1637
4CAF1910Frances1346
+ +
+
+
+

We can rename a column using the .rename() method. It takes in a dictionary that maps old column names to their new ones.

+
+
# Rename “name_lengths” to “Length”
+babynames = babynames.rename(columns={"name_lengths":"Length"})
+babynames.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountLength
0CAF1910Mary2953
1CAF1910Helen2394
2CAF1910Dorothy2206
3CAF1910Margaret1637
4CAF1910Frances1346
+ +
+
+
+

If we want to remove a column or row of a DataFrame, we can call the .drop (documentation) method. Use the axis parameter to specify whether a column or row should be dropped. Unless otherwise specified, pandas will assume that we are dropping a row by default.

+
+
# Drop our new "Length" column from the DataFrame
+babynames = babynames.drop("Length", axis="columns")
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+

Notice that we re-assigned babynames to the result of babynames.drop(...). This is a subtle but important point: pandas table operations do not occur in-place. Calling df.drop(...) will output a copy of df with the row/column of interest removed without modifying the original df table.

+

In other words, if we simply call:

+
+
# This creates a copy of `babynames` and removes the column "Name"...
+babynames.drop("Name", axis="columns")
+
+# ...but the original `babynames` is unchanged! 
+# Notice that the "Name" column is still present
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
0CAF1910Mary295
1CAF1910Helen239
2CAF1910Dorothy220
3CAF1910Margaret163
4CAF1910Frances134
+ +
+
+
+
+
+

3.3 Useful Utility Functions

+

pandas contains an extensive library of functions that can help shorten the process of setting and getting information from its data structures. In the following section, we will give overviews of each of the main utility functions that will help us in Data 100.

+

Discussing all functionality offered by pandas could take an entire semester! We will walk you through the most commonly-used functions and encourage you to explore and experiment on your own.

+
    +
  • NumPy and built-in function support
  • +
  • .shape
  • +
  • .size
  • +
  • .describe()
  • +
  • .sample()
  • +
  • .value_counts()
  • +
  • .unique()
  • +
  • .sort_values()
  • +
+

The pandas documentation will be a valuable resource in Data 100 and beyond.

+
+

3.3.1 NumPy

+

pandas is designed to work well with NumPy, the framework for array computations you encountered in Data 8. Just about any NumPy function can be applied to pandas DataFrames and Series.

+
+
# Pull out the number of babies named Yash each year
+yash_count = babynames[babynames["Name"] == "Yash"]["Count"]
+yash_count.head()
+
+
331824     8
+334114     9
+336390    11
+338773    12
+341387    10
+Name: Count, dtype: int64
+
+
+
+
# Average number of babies named Yash each year
+np.mean(yash_count)
+
+
np.float64(17.142857142857142)
+
+
+
+
# Max number of babies named Yash born in any one year
+np.max(yash_count)
+
+
np.int64(29)
+
+
+
+
+

3.3.2 .shape and .size

+

.shape and .size are attributes of Series and DataFrames that measure the “amount” of data stored in the structure. Calling .shape returns a tuple containing the number of rows and columns present in the DataFrame or Series. .size is used to find the total number of elements in a structure, equivalent to the number of rows times the number of columns.

+

Many functions strictly require the dimensions of the arguments along certain axes to match. Calling these dimension-finding functions is much faster than counting all of the items by hand.

+
+
# Return the shape of the DataFrame, in the format (num_rows, num_columns)
+babynames.shape
+
+
(407428, 5)
+
+
+
+
# Return the size of the DataFrame, equal to num_rows * num_columns
+babynames.size
+
+
2037140
+
+
+
+
+

3.3.3 .describe()

+

If many statistics are required from a DataFrame (minimum value, maximum value, mean value, etc.), then .describe() (documentation) can be used to compute all of them at once.

+
+
babynames.describe()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCount
count407428.000000407428.000000
mean1985.73360979.543456
std27.007660293.698654
min1910.0000005.000000
25%1969.0000007.000000
50%1992.00000013.000000
75%2008.00000038.000000
max2022.0000008260.000000
+ +
+
+
+

A different set of statistics will be reported if .describe() is called on a Series.

+
+
babynames["Sex"].describe()
+
+
count     407428
+unique         2
+top            F
+freq      239537
+Name: Sex, dtype: object
+
+
+
+
+

3.3.4 .sample()

+

As we will see later in the semester, random processes are at the heart of many data science techniques (for example, train-test splits, bootstrapping, and cross-validation). .sample() (documentation) lets us quickly select random entries (a row if called from a DataFrame, or a value if called from a Series).

+

By default, .sample() selects entries without replacement. Pass in the argument replace=True to sample with replacement.

+
+
# Sample a single row
+babynames.sample()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
28158CAF1950Vikki14
+ +
+
+
+

Naturally, this can be chained with other methods and operators (iloc, etc.).

+
+
# Sample 5 random rows, and select all columns after column 2
+babynames.sample(5).iloc[:, 2:]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearNameCount
820581979Lakesha11
3876872016Zayn101
1059771988Cecilia213
752571976Clarice7
76851925Elia5
+ +
+
+
+
+
# Randomly sample 4 names from the year 2000, with replacement, and select all columns after column 2
+babynames[babynames["Year"] == 2000].sample(4, replace = True).iloc[:, 2:]
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearNameCount
3429732000Grayson46
1516082000Roshni8
3431722000Dwayne27
3430392000Jair38
+ +
+
+
+
+
+

3.3.5 .value_counts()

+

The Series.value_counts() (documentation) method counts the number of occurrences of each unique value in a Series. In other words, it counts the number of times each unique value appears. This is often useful for determining the most or least common entries in a Series.

+

In the example below, we can determine the name with the most years in which at least one person has taken that name by counting the number of times each name appears in the "Name" column of babynames. Note that the return value is also a Series.

+
+
babynames["Name"].value_counts().head()
+
+
Name
+Jean         223
+Francis      221
+Guadalupe    218
+Jessie       217
+Marion       214
+Name: count, dtype: int64
+
+
+
+
+

3.3.6 .unique()

+

If we have a Series with many repeated values, then .unique() (documentation) can be used to identify only the unique values. Here we return an array of all the names in babynames.

+
+
babynames["Name"].unique()
+
+
array(['Mary', 'Helen', 'Dorothy', ..., 'Zae', 'Zai', 'Zayvier'],
+      dtype=object)
+
+
+
+
+

3.3.7 .sort_values()

+

Ordering a DataFrame can be useful for isolating extreme values. For example, the first 5 entries of a column sorted in descending order (that is, from highest to lowest) are its 5 largest values. .sort_values (documentation) allows us to order a DataFrame or Series by a specified column. We can choose to either receive the rows in ascending order (default) or descending order.

+
+
# Sort the "Count" column from highest to lowest
+babynames.sort_values(by="Count", ascending=False).head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
268041CAM1957Michael8260
267017CAM1956Michael8258
317387CAM1990Michael8246
281850CAM1969Michael8245
283146CAM1970Michael8196
+ +
+
+
+

Unlike when calling .sort_values() on a DataFrame, we do not need to explicitly specify the column to sort by when calling .sort_values() on a Series. We can still specify the ordering paradigm – that is, whether values are sorted in ascending or descending order.

+
+
# Sort the "Name" Series alphabetically
+babynames["Name"].sort_values(ascending=True).head()
+
+
366001      Aadan
+384005      Aadan
+369120      Aadan
+398211    Aadarsh
+370306      Aaden
+Name: Name, dtype: object
+
+
+
+
+
+

3.4 Parting Note

+

Manipulating DataFrames is not a skill that is mastered in just one day. Due to the flexibility of pandas, there are many different ways to get from point A to point B. We recommend trying multiple different ways to solve the same problem to gain even more practice and reach that point of mastery sooner.

+

Next, we will start digging deeper into the mechanics behind grouping data.

+ + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/pandas_3/images/agg.png b/docs/pandas_3/images/agg.png new file mode 100644 index 000000000..ec5e8e430 Binary files /dev/null and b/docs/pandas_3/images/agg.png differ diff --git a/docs/pandas_3/images/aggregation.png b/docs/pandas_3/images/aggregation.png new file mode 100644 index 000000000..7eb718c81 Binary files /dev/null and b/docs/pandas_3/images/aggregation.png differ diff --git a/docs/pandas_3/images/error.png b/docs/pandas_3/images/error.png new file mode 100644 index 000000000..fcf7f141f Binary files /dev/null and b/docs/pandas_3/images/error.png differ diff --git a/docs/pandas_3/images/filter_demo.png b/docs/pandas_3/images/filter_demo.png new file mode 100644 index 000000000..669da3257 Binary files /dev/null and b/docs/pandas_3/images/filter_demo.png differ diff --git a/docs/pandas_3/images/first.png b/docs/pandas_3/images/first.png new file mode 100644 index 000000000..f44b90d00 Binary files /dev/null and b/docs/pandas_3/images/first.png differ diff --git a/docs/pandas_3/images/gb.png b/docs/pandas_3/images/gb.png new file mode 100644 index 000000000..4c8abae60 Binary files /dev/null and b/docs/pandas_3/images/gb.png differ diff --git a/docs/pandas_3/images/groupby_demo.png b/docs/pandas_3/images/groupby_demo.png new file mode 100644 index 000000000..f87b62e82 Binary files /dev/null and b/docs/pandas_3/images/groupby_demo.png differ diff --git a/docs/pandas_3/images/pivot.png b/docs/pandas_3/images/pivot.png new file mode 100644 index 000000000..667ae45be Binary files /dev/null and b/docs/pandas_3/images/pivot.png differ diff --git a/docs/pandas_3/images/puzzle_demo.png b/docs/pandas_3/images/puzzle_demo.png new file mode 100644 index 000000000..bc21fd910 Binary files /dev/null and b/docs/pandas_3/images/puzzle_demo.png differ diff --git a/docs/pandas_3/pandas_3.html b/docs/pandas_3/pandas_3.html new file mode 100644 index 000000000..dad3756da --- /dev/null +++ b/docs/pandas_3/pandas_3.html @@ -0,0 +1,4841 @@ + + + + + + + + + +4  Pandas III – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+ + + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Perform advanced aggregation using .groupby()
  • +
  • Use the pd.pivot_table method to construct a pivot table
  • +
  • Perform simple merges between DataFrames using pd.merge()
  • +
+
+
+
+

We will introduce the concept of aggregating data – we will familiarize ourselves with GroupBy objects and use them as tools to consolidate and summarize a DataFrame. In this lecture, we will explore working with the different aggregation functions and dive into some advanced .groupby methods to show just how powerful of a resource they can be for understanding our data. We will also introduce other techniques for data aggregation to provide flexibility in how we manipulate our tables.

+
+

4.1 Custom Sorts

+

First, let’s finish our discussion about sorting. Let’s try to solve a sorting problem using different approaches. Assume we want to find the longest baby names and sort our data accordingly.

+

We’ll start by loading the babynames dataset. Note that this dataset is filtered to only contain data from California.

+
+
+Code +
# This code pulls census data and loads it into a DataFrame
+# We won't cover it explicitly in this class, but you are welcome to explore it on your own
+import pandas as pd
+import numpy as np
+import urllib.request
+import os.path
+import zipfile
+
+data_url = "https://www.ssa.gov/oact/babynames/state/namesbystate.zip"
+local_filename = "data/babynamesbystate.zip"
+if not os.path.exists(local_filename): # If the data exists don't download again
+    with urllib.request.urlopen(data_url) as resp, open(local_filename, 'wb') as f:
+        f.write(resp.read())
+
+zf = zipfile.ZipFile(local_filename, 'r')
+
+ca_name = 'STATE.CA.TXT'
+field_names = ['State', 'Sex', 'Year', 'Name', 'Count']
+with zf.open(ca_name) as fh:
+    babynames = pd.read_csv(fh, header=None, names=field_names)
+
+babynames.tail(10)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
407418CAM2022Zach5
407419CAM2022Zadkiel5
407420CAM2022Zae5
407421CAM2022Zai5
407422CAM2022Zay5
407423CAM2022Zayvier5
407424CAM2022Zia5
407425CAM2022Zora5
407426CAM2022Zuriel5
407427CAM2022Zylo5
+ +
+
+
+
+

4.1.1 Approach 1: Create a Temporary Column

+

One method to do this is to first start by creating a column that contains the lengths of the names.

+
+
# Create a Series of the length of each name
+babyname_lengths = babynames["Name"].str.len()
+
+# Add a column named "name_lengths" that includes the length of each name
+babynames["name_lengths"] = babyname_lengths
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountname_lengths
0CAF1910Mary2954
1CAF1910Helen2395
2CAF1910Dorothy2207
3CAF1910Margaret1638
4CAF1910Frances1347
+ +
+
+
+

We can then sort the DataFrame by that column using .sort_values():

+
+
# Sort by the temporary column
+babynames = babynames.sort_values(by="name_lengths", ascending=False)
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountname_lengths
334166CAM1996Franciscojavier815
337301CAM1997Franciscojavier515
339472CAM1998Franciscojavier615
321792CAM1991Ryanchristopher715
327358CAM1993Johnchristopher515
+ +
+
+
+

Finally, we can drop the name_lengths column from babynames to prevent our table from getting cluttered.

+
+
# Drop the 'name_lengths' column
+babynames = babynames.drop("name_lengths", axis='columns')
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
334166CAM1996Franciscojavier8
337301CAM1997Franciscojavier5
339472CAM1998Franciscojavier6
321792CAM1991Ryanchristopher7
327358CAM1993Johnchristopher5
+ +
+
+
+
+
+

4.1.2 Approach 2: Sorting using the key Argument

+

Another way to approach this is to use the key argument of .sort_values(). Here we can specify that we want to sort "Name" values by their length.

+
+
babynames.sort_values("Name", key=lambda x: x.str.len(), ascending=False).head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
334166CAM1996Franciscojavier8
327472CAM1993Ryanchristopher5
337301CAM1997Franciscojavier5
337477CAM1997Ryanchristopher5
312543CAM1987Franciscojavier5
+ +
+
+
+
+
+

4.1.3 Approach 3: Sorting using the map Function

+

We can also use the map function on a Series to solve this. Say we want to sort the babynames table by the number of "dr"’s and "ea"’s in each "Name". We’ll define the function dr_ea_count to help us out.

+
+
# First, define a function to count the number of times "dr" or "ea" appear in each name
+def dr_ea_count(string):
+    return string.count('dr') + string.count('ea')
+
+# Then, use `map` to apply `dr_ea_count` to each name in the "Name" column
+babynames["dr_ea_count"] = babynames["Name"].map(dr_ea_count)
+
+# Sort the DataFrame by the new "dr_ea_count" column so we can see our handiwork
+babynames = babynames.sort_values(by="dr_ea_count", ascending=False)
+babynames.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountdr_ea_count
115957CAF1990Deandrea53
101976CAF1986Deandrea63
131029CAF1994Leandrea53
108731CAF1988Deandrea53
308131CAM1985Deandrea63
+ +
+
+
+

We can drop the dr_ea_count once we’re done using it to maintain a neat table.

+
+
# Drop the `dr_ea_count` column
+babynames = babynames.drop("dr_ea_count", axis = 'columns')
+babynames.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCount
115957CAF1990Deandrea5
101976CAF1986Deandrea6
131029CAF1994Leandrea5
108731CAF1988Deandrea5
308131CAM1985Deandrea6
+ +
+
+
+
+
+
+

4.2 Aggregating Data with .groupby

+

Up until this point, we have been working with individual rows of DataFrames. As data scientists, we often wish to investigate trends across a larger subset of our data. For example, we may want to compute some summary statistic (the mean, median, sum, etc.) for a group of rows in our DataFrame. To do this, we’ll use pandas GroupBy objects. Our goal is to group together rows that fall under the same category and perform an operation that aggregates across all rows in the category.

+

Let’s say we wanted to aggregate all rows in babynames for a given year.

+
+
babynames.groupby("Year")
+
+
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x1037c3d90>
+
+
+

What does this strange output mean? Calling .groupby (documentation) has generated a GroupBy object. You can imagine this as a set of “mini” sub-DataFrames, where each subframe contains all of the rows from babynames that correspond to a particular year.

+

The diagram below shows a simplified view of babynames to help illustrate this idea.

+
+ +
+

We can’t work with a GroupBy object directly – that is why you saw that strange output earlier rather than a standard view of a DataFrame. To actually manipulate values within these “mini” DataFrames, we’ll need to call an aggregation method. This is a method that tells pandas how to aggregate the values within the GroupBy object. Once the aggregation is applied, pandas will return a normal (now grouped) DataFrame.

+

The first aggregation method we’ll consider is .agg. The .agg method takes in a function as its argument; this function is then applied to each column of a “mini” grouped DataFrame. We end up with a new DataFrame with one aggregated row per subframe. Let’s see this in action by finding the sum of all counts for each year in babynames – this is equivalent to finding the number of babies born in each year.

+
+
babynames[["Year", "Count"]].groupby("Year").agg(sum).head(5)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/2718070104.py:1: FutureWarning:
+
+The provided callable <built-in function sum> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Year
19109163
19119983
191217946
191322094
191426926
+ +
+
+
+
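As the FutureWarning above suggests, passing the string name of the aggregation function instead of the built-in function keeps the same behavior without the warning:

# Equivalent aggregation using the string name of the function
babynames[["Year", "Count"]].groupby("Year").agg("sum").head(5)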

We can relate this back to the diagram we used above. Remember that the diagram uses a simplified version of babynames, which is why we see smaller values for the summed counts.

+
+
+

+
Performing an aggregation
+
+
+

Calling .agg has condensed each subframe back into a single row. This gives us our final output: a DataFrame that is now indexed by "Year", with a single row for each unique year in the original babynames DataFrame.

+

There are many different aggregation functions we can use, all of which are useful in different applications.

+
+
babynames[["Year", "Count"]].groupby("Year").agg(min).head(5)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/86785752.py:1: FutureWarning:
+
+The provided callable <built-in function min> is currently using DataFrameGroupBy.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Year
19105
19115
19125
19135
19145
+ +
+
+
+
+
babynames[["Year", "Count"]].groupby("Year").agg(max).head(5)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/3032256904.py:1: FutureWarning:
+
+The provided callable <built-in function max> is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Year
1910295
1911390
1912534
1913614
1914773
+ +
+
+
+
+
# Same result, but now we explicitly tell pandas to only consider the "Count" column when summing
+babynames.groupby("Year")[["Count"]].agg(sum).head(5)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/1958904241.py:2: FutureWarning:
+
+The provided callable <built-in function sum> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Year
19109163
19119983
191217946
191322094
191426926
+ +
+
+
+

There are many different aggregations that can be applied to the grouped data. The primary requirement is that an aggregation function must:

+
    +
  • Take in a Series of data (a single column of the grouped subframe).
  • +
  • Return a single value that aggregates this Series.
  • +
+
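For example, here is a minimal sketch of a custom aggregation function that satisfies both requirements (count_range is a hypothetical helper used only for illustration; it is not part of the lecture code):

def count_range(series):
    # Takes in a Series of counts and returns a single value: its spread
    return series.max() - series.min()

# The spread between the most and least common name count in each year
babynames.groupby("Year")[["Count"]].agg(count_range).head()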
+

4.2.1 Aggregation Functions

+

Because of this fairly broad requirement, pandas offers many ways of computing an aggregation.

+

In-built Python operations – such as sum, max, and min – are automatically recognized by pandas.

+
+
# What is the minimum count for each name in any year?
+babynames.groupby("Name")[["Count"]].agg(min).head()
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/3244314896.py:2: FutureWarning:
+
+The provided callable <built-in function min> is currently using DataFrameGroupBy.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Name
Aadan5
Aadarsh6
Aaden10
Aadhav6
Aadhini6
+ +
+
+
+
+
# What is the largest single-year count of each name?
+babynames.groupby("Name")[["Count"]].agg(max).head()
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/3805876622.py:2: FutureWarning:
+
+The provided callable <built-in function max> is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Name
Aadan7
Aadarsh6
Aaden158
Aadhav8
Aadhini6
+ +
+
+
+

As mentioned previously, functions from the NumPy library, such as np.mean, np.max, np.min, and np.sum, are also fair game in pandas.

+
+
# What is the average count for each name across all years?
+babynames.groupby("Name")[["Count"]].agg(np.mean).head()
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/308986604.py:2: FutureWarning:
+
+The provided callable <function mean at 0x103985360> is currently using DataFrameGroupBy.mean. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "mean" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Name
Aadan6.000000
Aadarsh6.000000
Aaden46.214286
Aadhav6.750000
Aadhini6.000000
+ +
+
+
+

pandas also offers a number of in-built functions. Functions that are native to pandas can be referenced using their string name within a call to .agg. Some examples include:

+
    +
  • .agg("sum")
  • +
  • .agg("max")
  • +
  • .agg("min")
  • +
  • .agg("mean")
  • +
  • .agg("first")
  • +
  • .agg("last")
  • +
+

The latter two entries in this list – "first" and "last" – are unique to pandas. They return the first or last entry in a subframe column. Why might this be useful? Consider a case where every row in a group shares identical information in some column. To represent this information in the grouped output, we can simply grab the first or last entry, which we know will be identical to all other entries.

+

Let’s illustrate this with an example. Say we add a new column to babynames that contains the first letter of each name.

+
+
# Imagine we had an additional column, "First Letter". We'll explain this code next week
+babynames["First Letter"] = babynames["Name"].str[0]
+
+# We construct a simplified DataFrame containing just a subset of columns
+babynames_new = babynames[["Name", "First Letter", "Year"]]
+babynames_new.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
NameFirst LetterYear
115957DeandreaD1990
101976DeandreaD1986
131029LeandreaL1994
108731DeandreaD1988
308131DeandreaD1985
+ +
+
+
+

If we form groups for each name in the dataset, "First Letter" will be the same for all members of the group. This means that if we simply select the first entry for "First Letter" in the group, we’ll represent all data in that group.

+

We can use a dictionary to apply different aggregation functions to each column during grouping.

+
+
+

+
Aggregating using “first”
+
+
+
+
babynames_new.groupby("Name").agg({"First Letter":"first", "Year":"max"}).head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
First LetterYear
Name
AadanA2014
AadarshA2019
AadenA2020
AadhavA2019
AadhiniA2022
+ +
+
+
+
+
+

4.2.2 Plotting Birth Counts

+

Let’s use .agg to find the total number of babies born in each year. Recall that using .agg with .groupby() follows the format: df.groupby(column_name).agg(aggregation_function). The line of code below gives us the total number of babies born in each year.

+
+
+Code +
babynames.groupby("Year")[["Count"]].agg(sum).head(5)
+# Alternative 1
+# babynames.groupby("Year")[["Count"]].sum()
+# Alternative 2
+# babynames.groupby("Year").sum(numeric_only=True)
+
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/390646742.py:1: FutureWarning:
+
+The provided callable <built-in function sum> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Year
19109163
19119983
191217946
191322094
191426926
+ +
+
+
+

Here’s an illustration of the process:

+

aggregation

+

Plotting the DataFrame we obtain tells an interesting story.

+
+
+Code +
import plotly.express as px
+puzzle2 = babynames.groupby("Year")[["Count"]].agg(sum)
+px.line(puzzle2, y = "Count")
+
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/4066413905.py:2: FutureWarning:
+
+The provided callable <built-in function sum> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+
+
+

A word of warning: we made an enormous assumption when we decided to use this dataset to estimate birth rate. According to this article from the Legislative Analyst's Office, the true number of babies born in California in 2020 was 421,275. However, our plot shows 362,882 babies. What happened?

+
+
+

4.2.3 Summary of the .groupby() Function

+

A groupby operation involves some combination of splitting a DataFrame into grouped subframes, applying a function, and combining the results.

+

For some arbitrary DataFrame df below, the code df.groupby("year").agg(sum) does the following:

+
    +
  • Splits the DataFrame into sub-DataFrames with rows belonging to the same year.
  • +
  • Applies the sum function to each column of each sub-DataFrame.
  • +
  • Combines the results of sum into a single DataFrame, indexed by year.
  • +
+

groupby_demo

+
+
+

4.2.4 Revisiting the .agg() Function

+

.agg() can take in any function that aggregates several values into one summary value. Some commonly-used aggregation functions can even be called directly, without explicit use of .agg(). For example, we can call .mean() on .groupby():

+
babynames.groupby("Year").mean().head()
+

We can now put this all into practice. Say we want to find the baby name with sex “F” that has fallen in popularity the most in California. To calculate this, we can first create a metric: “Ratio to Peak” (RTP). The RTP is the ratio of babies born with a given name in 2022 to the maximum number of babies born with the name in any year.

+

Let’s start with calculating this for one baby, “Jennifer”.

+
+
# We filter by babies with sex "F" and sort by "Year"
+f_babynames = babynames[babynames["Sex"] == "F"]
+f_babynames = f_babynames.sort_values(["Year"])
+
+# Determine how many Jennifers were born in CA per year
+jenn_counts_series = f_babynames[f_babynames["Name"] == "Jennifer"]["Count"]
+
+# Determine the max number of Jennifers born in a year and the number born in 2022 
+# to calculate RTP
+max_jenn = max(f_babynames[f_babynames["Name"] == "Jennifer"]["Count"])
+curr_jenn = f_babynames[f_babynames["Name"] == "Jennifer"]["Count"].iloc[-1]
+rtp = curr_jenn / max_jenn
+rtp
+
+
np.float64(0.018796372629843364)
+
+
+

By creating a function to calculate RTP and applying it to our DataFrame by using .groupby(), we can easily compute the RTP for all names at once!

+
+
def ratio_to_peak(series):
+    return series.iloc[-1] / max(series)
+
+#Using .groupby() to apply the function
+rtp_table = f_babynames.groupby("Name")[["Year", "Count"]].agg(ratio_to_peak)
+rtp_table.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCount
Name
Aadhini1.01.000000
Aadhira1.00.500000
Aadhya1.00.660000
Aadya1.00.586207
Aahana1.00.269231
+ +
+
+
+

In the rows shown above, we can see that every row has a Year value of 1.0. This is because we sorted f_babynames by "Year" earlier, so within each group the last entry (series.iloc[-1]) is also the maximum year, making the ratio exactly 1.

+

This is the “pandas-ification” of logic you saw in Data 8. Much of the logic you’ve learned in Data 8 will serve you well in Data 100.

+
+
+

4.2.5 Nuisance Columns

+

Note that you must be careful with which columns you apply the .agg() function to. If we were to apply our function to the table as a whole by doing f_babynames.groupby("Name").agg(ratio_to_peak), executing our .agg() call would result in a TypeError.

+

error

+

We can avoid this issue (and prevent unintentional loss of data) by explicitly selecting the column(s) we want to apply our aggregation function to BEFORE calling .agg().
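For instance, a minimal sketch of the safe pattern (reusing f_babynames and ratio_to_peak from above) selects the relevant column before aggregating:

# Selecting only the "Count" column avoids applying ratio_to_peak to string columns
f_babynames.groupby("Name")[["Count"]].agg(ratio_to_peak).head()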

+
+
+

4.2.6 Renaming Columns After Grouping

+

By default, .groupby will not rename any aggregated columns. As we can see in the table above, the aggregated column is still named Count even though it now represents the RTP. For better readability, we can rename Count to Count RTP.

+
+
rtp_table = rtp_table.rename(columns = {"Count": "Count RTP"})
+rtp_table
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCount RTP
Name
Aadhini1.01.000000
Aadhira1.00.500000
Aadhya1.00.660000
Aadya1.00.586207
Aahana1.00.269231
.........
Zyanya1.00.466667
Zyla1.01.000000
Zylah1.01.000000
Zyra1.01.000000
Zyrah1.00.833333
+ +

13782 rows × 2 columns

+
+
+
+
+
+

4.2.7 Some Data Science Payoff

+

By sorting rtp_table, we can see the names whose popularity has decreased the most.

+
+
rtp_table = rtp_table.rename(columns = {"Count": "Count RTP"})
+rtp_table.sort_values("Count RTP").head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCount RTP
Name
Debra1.00.001260
Debbie1.00.002815
Carol1.00.003180
Tammy1.00.003249
Susan1.00.003305
+ +
+
+
+

To visualize the above DataFrame, let’s look at the line plot below:

+
+
+Code +
import plotly.express as px
+px.line(f_babynames[f_babynames["Name"] == "Debra"], x = "Year", y = "Count")
+
+
+
+
+
+

We can get the list of the top 10 names and then plot popularity with the following code:

+
+
top10 = rtp_table.sort_values("Count RTP").head(10).index
+px.line(
+    f_babynames[f_babynames["Name"].isin(top10)], 
+    x = "Year", 
+    y = "Count", 
+    color = "Name"
+)
+
+
+
+
+

As a quick exercise, consider what code would compute the total number of babies with each name.

+
+
+Code +
babynames.groupby("Name")[["Count"]].agg(sum).head()
+# alternative solution: 
+# babynames.groupby("Name")[["Count"]].sum()
+
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/1912269730.py:1: FutureWarning:
+
+The provided callable <built-in function sum> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
Name
Aadan18
Aadarsh6
Aaden647
Aadhav27
Aadhini6
+ +
+
+
+
+
+
+

4.3 .groupby(), Continued

+

We’ll work with the elections DataFrame again.

+
+
+Code +
import pandas as pd
+import numpy as np
+
+elections = pd.read_csv("data/elections.csv")
+elections.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
41832Andrew JacksonDemocratic702735win54.574789
+ +
+
+
+
+

4.3.1 Raw GroupBy Objects

+

The result of groupby applied to a DataFrame is a DataFrameGroupBy object, not a DataFrame.

+
+
grouped_by_year = elections.groupby("Year")
+type(grouped_by_year)
+
+
pandas.core.groupby.generic.DataFrameGroupBy
+
+
+

There are several ways to look into DataFrameGroupBy objects:

+
+
grouped_by_party = elections.groupby("Party")
+grouped_by_party.groups
+
+
{'American': [22, 126], 'American Independent': [115, 119, 124], 'Anti-Masonic': [6], 'Anti-Monopoly': [38], 'Citizens': [127], 'Communist': [89], 'Constitution': [160, 164, 172], 'Constitutional Union': [24], 'Democratic': [2, 4, 8, 10, 13, 14, 17, 20, 28, 29, 34, 37, 39, 45, 47, 52, 55, 57, 64, 70, 74, 77, 81, 83, 86, 91, 94, 97, 100, 105, 108, 111, 114, 116, 118, 123, 129, 134, 137, 140, 144, 151, 158, 162, 168, 176, 178], 'Democratic-Republican': [0, 1], 'Dixiecrat': [103], 'Farmer–Labor': [78], 'Free Soil': [15, 18], 'Green': [149, 155, 156, 165, 170, 177, 181], 'Greenback': [35], 'Independent': [121, 130, 143, 161, 167, 174], 'Liberal Republican': [31], 'Libertarian': [125, 128, 132, 138, 139, 146, 153, 159, 163, 169, 175, 180], 'National Democratic': [50], 'National Republican': [3, 5], 'National Union': [27], 'Natural Law': [148], 'New Alliance': [136], 'Northern Democratic': [26], 'Populist': [48, 61, 141], 'Progressive': [68, 82, 101, 107], 'Prohibition': [41, 44, 49, 51, 54, 59, 63, 67, 73, 75, 99], 'Reform': [150, 154], 'Republican': [21, 23, 30, 32, 33, 36, 40, 43, 46, 53, 56, 60, 65, 69, 72, 79, 80, 84, 87, 90, 96, 98, 104, 106, 109, 112, 113, 117, 120, 122, 131, 133, 135, 142, 145, 152, 157, 166, 171, 173, 179], 'Socialist': [58, 62, 66, 71, 76, 85, 88, 92, 95, 102], 'Southern Democratic': [25], 'States' Rights': [110], 'Taxpayers': [147], 'Union': [93], 'Union Labor': [42], 'Whig': [7, 9, 11, 12, 16, 19]}
+
+
+
+
grouped_by_party.get_group("Socialist")
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
581904Eugene V. DebsSocialist402810loss2.985897
621908Eugene V. DebsSocialist420852loss2.850866
661912Eugene V. DebsSocialist901551loss6.004354
711916Allan L. BensonSocialist590524loss3.194193
761920Eugene V. DebsSocialist913693loss3.428282
851928Norman ThomasSocialist267478loss0.728623
881932Norman ThomasSocialist884885loss2.236211
921936Norman ThomasSocialist187910loss0.412876
951940Norman ThomasSocialist116599loss0.234237
1021948Norman ThomasSocialist139569loss0.286312
+ +
+
+
+
+
+

4.3.2 Other GroupBy Methods

+

There are many aggregation methods we can use in addition to .agg. Some useful options are:

+
    +
  • .mean: creates a new DataFrame with the mean value of each group
  • +
  • .sum: creates a new DataFrame with the sum of each group
  • +
  • .max and .min: creates a new DataFrame with the maximum/minimum value of each group
  • +
  • .first and .last: creates a new DataFrame with the first/last row in each group
  • +
  • .size: creates a new Series with the number of entries in each group
  • +
  • .count: creates a new DataFrame with the number of entries, excluding missing values.
  • +
+

Let’s illustrate some examples by creating a DataFrame called df.

+
+
df = pd.DataFrame({'letter':['A','A','B','C','C','C'], 
+                   'num':[1,2,3,4,np.nan,4], 
+                   'state':[np.nan, 'tx', 'fl', 'hi', np.nan, 'ak']})
+df
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
letternumstate
0A1.0NaN
1A2.0tx
2B3.0fl
3C4.0hi
4CNaNNaN
5C4.0ak
+ +
+
+
+

Note the slight difference between .size() and .count(): while .size() returns a Series and counts the number of entries including the missing values, .count() returns a DataFrame and counts the number of entries in each column excluding missing values.

+
+
df.groupby("letter").size()
+
+
letter
+A    2
+B    1
+C    3
+dtype: int64
+
+
+
+
df.groupby("letter").count()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
numstate
letter
A21
B11
C22
+ +
+
+
+

You might recall that the value_counts() function in the previous note does something similar. It turns out value_counts() and groupby.size() are the same, except value_counts() sorts the resulting Series in descending order automatically.

+
+
df["letter"].value_counts()
+
+
letter
+C    3
+A    2
+B    1
+Name: count, dtype: int64
+
+
+
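As a quick check (a sketch using the same df), sorting the groupby.size() result in descending order mirrors the counts returned by value_counts():

df.groupby("letter").size().sort_values(ascending=False)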

These (and other) aggregation functions are so common that pandas allows for writing shorthand. Instead of explicitly stating the use of .agg, we can call the function directly on the GroupBy object.

+

For example, the following are equivalent:

+
    +
  • elections.groupby("Candidate").agg(mean)
  • +
  • elections.groupby("Candidate").mean()
  • +
+
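As a small sketch of this equivalence (grouping the elections DataFrame on a numeric column to keep the comparison simple), the shorthand and the explicit .agg call return identical results:

via_agg = elections.groupby("Party")["%"].agg("mean")
via_shorthand = elections.groupby("Party")["%"].mean()
via_agg.equals(via_shorthand)  # True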

There are many other methods that pandas supports. You can check them out on the pandas documentation.

+
+
+

4.3.3 Filtering by Group

+

Another common use for GroupBy objects is to filter data by group.

+

groupby.filter takes an argument func, where func is a function that:

+
    +
  • Takes a DataFrame object as input
  • +
  • Returns a single True or False.
  • +
+

groupby.filter applies func to each group/sub-DataFrame:

+
    +
  • If func returns True for a group, then all rows belonging to the group are preserved.
  • +
  • If func returns False for a group, then all rows belonging to that group are filtered out.
  • +
+

In other words, sub-DataFrames that correspond to True are returned in the final result, whereas those with a False value are not. Importantly, groupby.filter is different from groupby.agg in that an entire sub-DataFrame is returned in the final DataFrame, not just a single row. As a result, groupby.filter preserves the original indices and the column we grouped on does NOT become the index!

+

groupby_demo

+

To illustrate how this happens, let’s go back to the elections dataset. Say we want to identify “tight” election years – that is, we want to find all rows that correspond to election years where all candidates in that year won a similar portion of the total vote. Specifically, let’s find all rows corresponding to a year where no candidate won more than 45% of the total vote.

+

In other words, we want to:

+
    +
  • Find the years where the maximum % in that year is less than 45%
  • +
  • Return all DataFrame rows that correspond to these years
  • +
+

For each year, we need to find the maximum % among all rows for that year. If this maximum % is lower than 45%, we will tell pandas to keep all rows corresponding to that year.

+
+
elections.groupby("Year").filter(lambda sf: sf["%"].max() < 45).head(9)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
231860Abraham LincolnRepublican1855993win39.699408
241860John BellConstitutional Union590901loss12.639283
251860John C. BreckinridgeSouthern Democratic848019loss18.138998
261860Stephen A. DouglasNorthern Democratic1380202loss29.522311
661912Eugene V. DebsSocialist901551loss6.004354
671912Eugene W. ChafinProhibition208156loss1.386325
681912Theodore RooseveltProgressive4122721loss27.457433
691912William TaftRepublican3486242loss23.218466
701912Woodrow WilsonDemocratic6296284win41.933422
+ +
+
+
+

What’s going on here? In this example, we’ve defined our filtering function, func, to be lambda sf: sf["%"].max() < 45. This filtering function will find the maximum "%" value among all entries in the grouped sub-DataFrame, which we call sf. If the maximum value is less than 45, then the filter function will return True and all rows in that grouped sub-DataFrame will appear in the final output DataFrame.

+

Examine the DataFrame above. Notice how, in this preview of the first 9 rows, all entries from the years 1860 and 1912 appear. This means that in 1860 and 1912, no candidate in that year won more than 45% of the total vote.

+

You may ask: how is the groupby.filter procedure different from the boolean filtering we’ve seen previously? Boolean filtering considers individual rows when applying a boolean condition. For example, the code elections[elections["%"] < 45] will check the "%" value of every single row in elections; if it is less than 45, then that row will be kept in the output. groupby.filter, in contrast, applies a boolean condition to each group as a whole. If a group does not satisfy the condition specified by the filter function, every row in that group is discarded from the output.
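A short sketch makes the contrast concrete (using the elections DataFrame from above):

# Row-level boolean filtering: keeps every individual row with % < 45
row_filtered = elections[elections["%"] < 45]

# Group-level filtering: keeps all rows from years whose *maximum* % is < 45
group_filtered = elections.groupby("Year").filter(lambda sf: sf["%"].max() < 45)

# The row-level filter keeps far more rows than the group-level filter
print(row_filtered.shape, group_filtered.shape)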

+
+
+

4.3.4 Aggregation with lambda Functions

+

What if we wish to aggregate our DataFrame using a non-standard function – for example, a function of our own design? We can do so by combining .agg with lambda expressions.

+

Let’s first consider a puzzle to jog our memory. We will attempt to find the Candidate from each Party with the highest % of votes.

+

A naive approach may be to group by the Party column and aggregate by the maximum.

+
+
elections.groupby("Party").agg(max).head(10)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/4278286395.py:1: FutureWarning:
+
+The provided callable <built-in function max> is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePopular voteResult%
Party
American1976Thomas J. Anderson873053loss21.554001
American Independent1976Lester Maddox9901118loss13.571218
Anti-Masonic1832William Wirt100715loss7.821583
Anti-Monopoly1884Benjamin Butler134294loss1.335838
Citizens1980Barry Commoner233052loss0.270182
Communist1932William Z. Foster103307loss0.261069
Constitution2016Michael Peroutka203091loss0.152398
Constitutional Union1860John Bell590901loss12.639283
Democratic2020Woodrow Wilson81268924win61.344703
Democratic-Republican1824John Quincy Adams151271win57.210122
+ +
+
+
+

This approach is clearly wrong – the DataFrame claims that Woodrow Wilson won the presidency in 2020.

+

Why is this happening? Here, the max aggregation function is taken over every column independently. Among Democrats, max is computing:

+
    +
  • The most recent Year a Democratic candidate ran for president (2020)
  • +
  • The Candidate with the alphabetically “largest” name (“Woodrow Wilson”)
  • +
  • The Result with the alphabetically “largest” outcome (“win”)
  • +
+

Instead, let’s try a different approach. We will:

+
    +
  1. Sort the DataFrame so that rows are in descending order of %
  2. +
  3. Group by Party and select the first row of each sub-DataFrame
  4. +
+

While it may seem unintuitive, sorting elections by descending order of % is extremely helpful. If we then group by Party, the first row of each GroupBy object will contain information about the Candidate with the highest voter %.

+
+
elections_sorted_by_percent = elections.sort_values("%", ascending=False)
+elections_sorted_by_percent.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
1141964Lyndon JohnsonDemocratic43127041win61.344703
911936Franklin RooseveltDemocratic27752648win60.978107
1201972Richard NixonRepublican47168710win60.907806
791920Warren HardingRepublican16144093win60.574501
1331984Ronald ReaganRepublican54455472win59.023326
+ +
+
+
+
+
elections_sorted_by_percent.groupby("Party").agg(lambda x : x.iloc[0]).head(10)
+
+# Equivalent to the below code
+# elections_sorted_by_percent.groupby("Party").agg('first').head(10)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePopular voteResult%
Party
American1856Millard Fillmore873053loss21.554001
American Independent1968George Wallace9901118loss13.571218
Anti-Masonic1832William Wirt100715loss7.821583
Anti-Monopoly1884Benjamin Butler134294loss1.335838
Citizens1980Barry Commoner233052loss0.270182
Communist1932William Z. Foster103307loss0.261069
Constitution2008Chuck Baldwin199750loss0.152398
Constitutional Union1860John Bell590901loss12.639283
Democratic1964Lyndon Johnson43127041win61.344703
Democratic-Republican1824Andrew Jackson151271loss57.210122
+ +
+
+
+

Here’s an illustration of the process:

+

groupby_demo

+

Notice how our code correctly determines that Lyndon Johnson from the Democratic Party has the highest voter %.

+

More generally, lambda functions are used to design custom aggregation functions that aren’t pre-defined by Python. The input parameter x to the lambda function is a Series containing one column’s values for a single group. Therefore, lambda x : x.iloc[0] selects the first entry of each column within each group – which, because we sorted by descending %, comes from the row of that party’s top candidate.
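As another quick sketch (not from the lecture), the same pattern supports arbitrary computations – here, the spread of vote share within each party:

# A sketch of another custom aggregation: the spread (max minus min) of
# vote share within each Party. Here, `x` is the Series of "%" values
# for one party's group.
elections.groupby("Party")["%"].agg(lambda x: x.max() - x.min()).head(5)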

+

In fact, there are a few different ways to approach this problem, each with different tradeoffs in terms of readability, performance, memory usage, and complexity. We’ve given a few examples below.

+

Note: Understanding these alternative solutions is not required. They are given to demonstrate the vast number of problem-solving approaches in pandas.

+
+
# Using the idxmax function
+best_per_party = elections.loc[elections.groupby('Party')['%'].idxmax()]
+best_per_party.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
221856Millard FillmoreAmerican873053loss21.554001
1151968George WallaceAmerican Independent9901118loss13.571218
61832William WirtAnti-Masonic100715loss7.821583
381884Benjamin ButlerAnti-Monopoly134294loss1.335838
1271980Barry CommonerCitizens233052loss0.270182
+ +
+
+
+
+
# Using the .drop_duplicates function
+best_per_party2 = elections.sort_values('%').drop_duplicates(['Party'], keep='last')
+best_per_party2.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
1481996John HagelinNatural Law113670loss0.118219
1642008Chuck BaldwinConstitution199750loss0.152398
1101956T. Coleman AndrewsStates' Rights107929loss0.174883
1471996Howard PhillipsTaxpayers184656loss0.192045
1361988Lenora FulaniNew Alliance217221loss0.237804
+ +
+
+
+
+
+
+

4.4 Aggregating Data with Pivot Tables

+

We know now that .groupby gives us the ability to group and aggregate data across our DataFrame. The examples above formed groups using just one column in the DataFrame. It’s possible to group by multiple columns at once by passing in a list of column names to .groupby.

+

Let’s consider the babynames dataset again. In this problem, we will find the total number of baby names associated with each sex for each year. To do this, we’ll group by both the "Year" and "Sex" columns.

+
+
babynames.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountFirst Letter
115957CAF1990Deandrea5D
101976CAF1986Deandrea6D
131029CAF1994Leandrea5L
108731CAF1988Deandrea5D
308131CAM1985Deandrea6D
+ +
+
+
+
+
# Find the total number of baby names associated with each sex for each 
+# year in the data
+babynames.groupby(["Year", "Sex"])[["Count"]].agg(sum).head(6)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/3186035650.py:3: FutureWarning:
+
+The provided callable <built-in function sum> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Count
YearSex
1910F5950
M3213
1911F6602
M3381
1912F9804
M8142
+ +
+
+
+

Notice that both "Year" and "Sex" serve as the index of the DataFrame (they are both rendered in bold). We’ve created a multi-index DataFrame where two different index values, the year and sex, are used to uniquely identify each row.

+

This isn’t the most intuitive way of representing this data – and, because multi-indexed DataFrames have multiple dimensions in their index, they can often be difficult to use.

+

Another strategy to aggregate across two columns is to create a pivot table. You saw these back in Data 8. One set of values is used to create the index of the pivot table; another set is used to define the column names. The values contained in each cell of the table correspond to the aggregated data for each index-column pair.

+

Here’s an illustration of the process:

+

groupby_demo

+

The best way to understand pivot tables is to see one in action. Let’s return to our original goal of summing the total number of names associated with each combination of year and sex. We’ll call the pandas .pivot_table method to create a new table.

+
+
# The `pivot_table` method is used to generate a Pandas pivot table
+import numpy as np
+babynames.pivot_table(
+    index = "Year",
+    columns = "Sex",    
+    values = "Count", 
+    aggfunc = np.sum, 
+).head(5)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/2548053048.py:3: FutureWarning:
+
+The provided callable <function sum at 0x103984160> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
SexFM
Year
191059503213
191166023381
191298048142
19131186010234
19141381513111
+ +
+
+
+

Looks a lot better! Now, our DataFrame is structured with clear index-column combinations. Each entry in the pivot table represents the summed count of names for a given combination of "Year" and "Sex".

+

Let’s take a closer look at the code implemented above.

+
    +
  • index = "Year" specifies the column name in the original DataFrame that should be used as the index of the pivot table
  • +
  • columns = "Sex" specifies the column name in the original DataFrame that should be used to generate the columns of the pivot table
  • +
  • values = "Count" indicates what values from the original DataFrame should be used to populate the entry for each index-column combination
  • +
  • aggfunc = np.sum tells pandas what function to use when aggregating the data specified by values. Here, we are summing the name counts for each pair of "Year" and "Sex"
  • +
+
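As an aside (not required), the same table can be sketched without .pivot_table by grouping on both columns and then unstacking the "Sex" level of the resulting multi-index; passing the string "sum" also avoids the FutureWarning shown above:

# Sketch: an equivalent pivot built from groupby + unstack
babynames.groupby(["Year", "Sex"])["Count"].agg("sum").unstack("Sex").head(5)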

We can even include multiple values in the index or columns of our pivot tables.

+
+
babynames_pivot = babynames.pivot_table(
+    index="Year",     # the rows (turned into index)
+    columns="Sex",    # the column values
+    values=["Count", "Name"], 
+    aggfunc=max,      # group operation
+)
+babynames_pivot.head(6)
+
+
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57856/970182367.py:1: FutureWarning:
+
+The provided callable <built-in function max> is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead.
+
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountName
SexFMFM
Year
1910295237YvonneWilliam
1911390214ZelmaWillis
1912534501YvonneWoodrow
1913584614ZelmaYoshio
1914773769ZelmaYoshio
19159981033ZitaYukio
+ +
+
+
+

Note that each row provides the number of girls and number of boys having that year’s most common name, and also lists the alphabetically largest girl name and boy name. The counts for number of girls/boys in the resulting DataFrame do not correspond to the names listed. For example, in 1910, the most popular girl name is given to 295 girls, but that name was likely not Yvonne.

+
+
+

4.5 Joining Tables

+

When working on data science projects, we’re unlikely to have absolutely all the data we want contained in a single DataFrame – a real-world data scientist needs to grapple with data coming from multiple sources. If we have access to multiple datasets with related information, we can join two or more tables into a single DataFrame.

+

To put this into practice, we’ll revisit the elections dataset.

+
+
elections.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%
01824Andrew JacksonDemocratic-Republican151271loss57.210122
11824John Quincy AdamsDemocratic-Republican113142win42.789878
21828Andrew JacksonDemocratic642806win56.203927
31828John Quincy AdamsNational Republican500897loss43.796073
41832Andrew JacksonDemocratic702735win54.574789
+ +
+
+
+

Say we want to understand the popularity of the names of each presidential candidate in 2022. To do this, we’ll need the combined data of babynames and elections.

+

We’ll start by creating a new column containing the first name of each presidential candidate. This will help us join each name in elections to the corresponding name data in babynames.

+
+
# This `str` operation splits each candidate's full name at each 
+# blank space, then takes just the candidate's first name
+elections["First Name"] = elections["Candidate"].str.split().str[0]
+elections.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
YearCandidatePartyPopular voteResult%First Name
01824Andrew JacksonDemocratic-Republican151271loss57.210122Andrew
11824John Quincy AdamsDemocratic-Republican113142win42.789878John
21828Andrew JacksonDemocratic642806win56.203927Andrew
31828John Quincy AdamsNational Republican500897loss43.796073John
41832Andrew JacksonDemocratic702735win54.574789Andrew
+ +
+
+
+
+
# Here, we'll only consider `babynames` data from 2022
+babynames_2022 = babynames[babynames["Year"]==2022]
+babynames_2022.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
StateSexYearNameCountFirst Letter
237964CAF2022Leandra10L
404916CAM2022Leandro99L
405892CAM2022Andreas14A
235927CAF2022Andrea322A
405695CAM2022Deandre18D
+ +
+
+
+

Now, we’re ready to join the two tables. pd.merge is the pandas method used to join DataFrames together.

+
+
merged = pd.merge(left = elections, right = babynames_2022, \
+                  left_on = "First Name", right_on = "Name")
+merged.head()
+# Notice that pandas automatically specifies `Year_x` and `Year_y` 
+# when both merged DataFrames have the same column name to avoid confusion
+
+# Second option
+# merged = elections.merge(right = babynames_2022, \
+    # left_on = "First Name", right_on = "Name")
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Year_xCandidatePartyPopular voteResult%First NameStateSexYear_yNameCountFirst Letter
01824Andrew JacksonDemocratic-Republican151271loss57.210122AndrewCAM2022Andrew741A
11824John Quincy AdamsDemocratic-Republican113142win42.789878JohnCAM2022John490J
21828Andrew JacksonDemocratic642806win56.203927AndrewCAM2022Andrew741A
31828John Quincy AdamsNational Republican500897loss43.796073JohnCAM2022John490J
41832Andrew JacksonDemocratic702735win54.574789AndrewCAM2022Andrew741A
+ +
+
+
+

Let’s take a closer look at the parameters:

+
    +
  • left and right parameters are used to specify the DataFrames to be joined.
  • +
  • left_on and right_on parameters are assigned to the string names of the columns to be used when performing the join. These two on parameters tell pandas what values should act as pairing keys to determine which rows to merge across the DataFrames. We’ll talk more about this idea of a pairing key next lecture.
  • +
+
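As a rough sketch of where this leaves our original question, we could rank candidates by the largest 2022 count matching their first name. (A candidate’s first name can match both an “F” and an “M” row, and candidates who ran multiple times appear in several rows, so taking the maximum is one reasonable choice, not the only one.)

# Sketch: for each candidate, the largest matching 2022 count of their first name
merged.groupby("Candidate")["Count"].max().sort_values(ascending=False).head(5)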
+
+

4.6 Parting Note

+

Congratulations! We finally tackled pandas. Don’t worry if you are still not feeling very comfortable with it—you will have plenty of chances to practice over the next few weeks.

+

Next, we will get our hands dirty with some real-world datasets and use our pandas knowledge to conduct some exploratory data analysis.

+ + + + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/pca_1/images/PCA_1.png b/docs/pca_1/images/PCA_1.png new file mode 100644 index 000000000..d2e3b1310 Binary files /dev/null and b/docs/pca_1/images/PCA_1.png differ diff --git a/docs/pca_1/images/dataset3.png b/docs/pca_1/images/dataset3.png new file mode 100644 index 000000000..a9ab94fb4 Binary files /dev/null and b/docs/pca_1/images/dataset3.png differ diff --git a/docs/pca_1/images/dataset3_outlier.png b/docs/pca_1/images/dataset3_outlier.png new file mode 100644 index 000000000..a5d8fd045 Binary files /dev/null and b/docs/pca_1/images/dataset3_outlier.png differ diff --git a/docs/pca_1/images/dataset4.png b/docs/pca_1/images/dataset4.png new file mode 100644 index 000000000..f8da69964 Binary files /dev/null and b/docs/pca_1/images/dataset4.png differ diff --git a/docs/pca_1/images/dataset_dims.png b/docs/pca_1/images/dataset_dims.png new file mode 100644 index 000000000..b7b3cb5da Binary files /dev/null and b/docs/pca_1/images/dataset_dims.png differ diff --git a/docs/pca_1/images/diff_reductions.png b/docs/pca_1/images/diff_reductions.png new file mode 100644 index 000000000..782f93fc1 Binary files /dev/null and b/docs/pca_1/images/diff_reductions.png differ diff --git a/docs/pca_1/images/factorization.png b/docs/pca_1/images/factorization.png new file mode 100644 index 000000000..000913a12 Binary files /dev/null and b/docs/pca_1/images/factorization.png differ diff --git a/docs/pca_1/images/factorization_constraints.png b/docs/pca_1/images/factorization_constraints.png new file mode 100644 index 000000000..0b80ff2df Binary files /dev/null and b/docs/pca_1/images/factorization_constraints.png differ diff --git a/docs/pca_1/images/matmul.png b/docs/pca_1/images/matmul.png new file mode 100644 index 000000000..6c56dba85 Binary files /dev/null and b/docs/pca_1/images/matmul.png differ diff --git a/docs/pca_1/images/matmul2.png b/docs/pca_1/images/matmul2.png new file mode 100644 index 000000000..6b7c6edb3 Binary files /dev/null and b/docs/pca_1/images/matmul2.png differ diff --git a/docs/pca_1/images/matmul3.png b/docs/pca_1/images/matmul3.png new file mode 100644 index 000000000..35bc575d8 Binary files /dev/null and b/docs/pca_1/images/matmul3.png differ diff --git a/docs/pca_1/images/matrix_decomp.png b/docs/pca_1/images/matrix_decomp.png new file mode 100644 index 000000000..7e5a90ad7 Binary files /dev/null and b/docs/pca_1/images/matrix_decomp.png differ diff --git a/docs/pca_1/images/optimization_takeaways.png b/docs/pca_1/images/optimization_takeaways.png new file mode 100644 index 000000000..3b3da0ee9 Binary files /dev/null and b/docs/pca_1/images/optimization_takeaways.png differ diff --git a/docs/pca_1/images/pca_example.png b/docs/pca_1/images/pca_example.png new file mode 100644 index 000000000..825ca3b66 Binary files /dev/null and b/docs/pca_1/images/pca_example.png differ diff --git a/docs/pca_1/images/reconstruction_loss.png b/docs/pca_1/images/reconstruction_loss.png new file mode 100644 index 000000000..984246187 Binary files /dev/null and b/docs/pca_1/images/reconstruction_loss.png differ diff --git a/docs/pca_1/pca_1.html b/docs/pca_1/pca_1.html new file mode 100644 index 000000000..14ae6a28f --- /dev/null +++ b/docs/pca_1/pca_1.html @@ -0,0 +1,1337 @@ + + + + + + + + + +24  PCA I – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

24  PCA I

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
    +
  • Discuss the dimensionality of a dataset and strategies for dimensionality reduction
  • +
  • Derive and carry out the procedure of PCA
  • +
+
+
+

So far in this course, we’ve focused on supervised learning techniques that create a function to map inputs (features) to labelled outputs. Regression and classification are two main examples, where the output value of regression is quantitative while the output value of classification is categorical.

+

Today, we’ll introduce an unsupervised learning technique called PCA. Unlike supervised learning, unsupervised learning is applied to unlabeled data. Because we have features but no labels, we aim to identify patterns in those features.

+
+

24.1 Visualization (Revisited)

+

Visualization can help us identify clusters or patterns in our dataset, and it can give us an intuition about our data and how to clean it for the model. For this demo, we’ll return to the MPG dataset from Lecture 19 and see how far we can push visualization for multiple features.

+
+
+Code +
import pandas as pd
+import numpy as np
+import scipy as sp
+import plotly.express as px
+import seaborn as sns
+
+
+
+
+Code +
mpg = sns.load_dataset("mpg").dropna()
+mpg.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
mpgcylindersdisplacementhorsepowerweightaccelerationmodel_yearoriginname
018.08307.0130.0350412.070usachevrolet chevelle malibu
115.08350.0165.0369311.570usabuick skylark 320
218.08318.0150.0343611.070usaplymouth satellite
316.08304.0150.0343312.070usaamc rebel sst
417.08302.0140.0344910.570usaford torino
+ +
+
+
+

We can plot one feature as a histogram to see its distribution. Since we only plot one feature, we consider this a 1-dimensional plot.

+
+
+Code +
px.histogram(mpg, x="displacement")
+
+
+
+
+
+

We can also visualize two features (2-dimensional scatter plot):

+
+
+Code +
px.scatter(mpg, x="displacement", y="horsepower")
+
+
+
+
+
+

Three features (3-dimensional scatter plot):

+
+
+Code +
fig = px.scatter_3d(mpg, x="displacement", y="horsepower", z="weight",
+                    width=800, height=800)
+fig.update_traces(marker=dict(size=3))
+
+
+
+
+
+

We can even push to 4 features using a 3D scatter plot and a colorbar:

+
+
+Code +
fig = px.scatter_3d(mpg, x="displacement", 
+                    y="horsepower", 
+                    z="weight", 
+                    color="model_year",
+                    width=800, height=800, 
+                    opacity=.7)
+fig.update_traces(marker=dict(size=5))
+
+
+
+
+
+

Visualizing 5 features is also possible if we make the scatter dots unique to the datapoint’s origin.

+
+
+Code +
fig = px.scatter_3d(mpg, x="displacement", 
+                    y="horsepower", 
+                    z="weight", 
+                    color="model_year",
+                    size="mpg",
+                    symbol="origin",
+                    width=900, height=800, 
+                    opacity=.7)
+# hide color scale legend on the plotly fig
+fig.update_layout(coloraxis_showscale=False)
+
+
+
+
+
+

However, adding more features to our visualization can make our plot look messy and uninformative, and it can also be near impossible if we have a large number of features. The problem is that many datasets come with more than 5 features – hundreds, even. Is it still possible to visualize all those features?

+
+
+

24.2 Dimensionality

+

Suppose we have a dataset of:

+
    +
  • \(N\) observations (datapoints/rows)
  • +
  • \(d\) attributes (features/columns)
  • +
+

Let’s “rename” this in terms of linear algebra so that we can be more clear with our wording. Using linear algebra, we can view our matrix as:

+
    +
  • \(N\) row vectors in a \(d\)-dimensional space, OR
  • +
  • \(d\) column vectors in an \(N\)-dimensional space
  • +
+

The intrinsic dimension of a dataset is the minimal set of dimensions needed to approximately represent the data. In linear algebra terms, it is the dimension of the column space of a matrix, or the number of linearly independent columns in a matrix; this is equivalently called the rank of a matrix.
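As a quick check of this idea (using a small toy array, not one of the datasets below), NumPy can report the rank directly:

import numpy as np

# The third column is the sum of the first two, so only two columns are
# linearly independent and the rank is 2
A = np.array([[1, 2, 3],
              [4, 5, 9],
              [7, 8, 15]])
np.linalg.matrix_rank(A)    # 2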

+

In the examples below, Dataset 1 has 2 dimensions because it has 2 linearly independent columns. Similarly, Dataset 2 has 3 dimensions because it has 3 linearly independent columns.

+
+ +
+


+

What about Dataset 4 below?

+
+ +
+

It may be tempting to say that it has 4 dimensions, but the Weight (lbs) column is actually just a linear transformation of the Weight (kg) column. Thus, no new information is captured, and the matrix of our dataset has a (column) rank of 3! Therefore, despite having 4 columns, we still say that this data is 3-dimensional.

+

Plotting the weight columns against each other reveals the key visual intuition. Although the two columns could in principle fill a 2D plane, the plotted data never deviates from a single line. This means that one of the weight columns is redundant! Even given the option to cover the whole 2D space, the data below does not. It might as well not have this extra dimension, which is why we still do not consider the data below to span more than 1 dimension.

+
+ +
+

What happens when there are outliers? Below, we’ve added one outlier point to the dataset above, and just that one point is enough to change the rank of the matrix from 1 to 2 dimensions. However, the data is still approximately 1-dimensional.

+
+ +
+

Dimensionality reduction is generally an approximation of the original data that’s achieved by projecting the data onto a desired dimension. In the example below, our original datapoints (blue dots) are 2-dimensional. We have a few choices if we want to project them down to 1-dimension: project them onto the \(x\)-axis (left), project them onto the \(y\)-axis (middle), or project them onto a line \(mx + b\) (right). The resulting datapoints after the projection are shown in red. Which projection do you think is better? How can we calculate that?

+
+ +
+


+

In general, we want the projection which is the best approximation for the original data (the graph on the right). In other words, we want the projection that captures the most variance of the original data. In the next section, we’ll see how this is calculated.

+
+
+

24.3 Matrix Decomposition (Factorization)

+

One linear technique for dimensionality reduction is matrix decomposition, which is closely tied to matrix multiplication. In this section, we will decompose our data matrix \(X\) into a lower-dimensional matrix \(Z\) that approximately recovers the original data when multiplied by \(W\).

+
+ +
+

First, consider the matrix multiplication example below:

+
+ +
+
    +
  • For table 1, each row of the fruits matrix represents one bowl of fruit; for example, the first bowl/row has 2 apples, 2 lemons, and 2 melons.
  • +
  • For table 2, each column of the dollars matrix represents the cost of fruit at a store; for example, the first store/column charges 2 dollars for an apple, 1 dollar for a lemon, and 4 dollars for a melon.
  • +
  • The output is the cost of each bowl at each store.
  • +
+
+
+
+ +
+
+Linear Algebra Review: Matrix Multiplication +
+
+
+

In general, there are two ways to interpret matrix multiplication:

+
    +
  1. Each datapoint in our output is a dot product between a row in the data matrix and a column in the operations matrix. In this view, we perform multiple linear operations on the data +
    + +
  2. +
  3. Each column in our output is a linear transformation of the original columns based on a column in the transformation matrix +
    + +
  4. +
+

We will use the second interpretation to link matrix multiplication with matrix decomposition, where we receive a lower dimensional representation of data along with a transformation matrix.

+
+
+

Matrix decomposition (a.k.a matrix factorization) is the opposite of matrix multiplication. Instead of multiplying two matrices, we want to decompose a single matrix into 2 separate matrices. Just like with real numbers, there are infinite ways to decompose a matrix into a product of two matrices. For example, \(9.9\) can be decomposed as \(1.1 * 9\), \(3.3 * 3.3\), \(1 * 9.9\), etc. Additionally, the sizes of the 2 decomposed matrices can vary drastically. In the example below, the first factorization (top) multiplies a \(3x2\) matrix by a \(2x3\) matrix while the second factorization (bottom) multiplies a \(3x3\) matrix by a \(3x3\) matrix; both result in the original matrix on the right.

+
+ +
+


+

We can even expand the \(3x3\) matrices to \(3x4\) and \(4x3\) (shown below as the factorization on top), but this defeats the point of dimensionality reduction since we’re adding more “useless” dimensions. On the flip side, we also can’t reduce the dimension to \(3x1\) and \(1x3\) (shown below as the factorization on the bottom); since the rank of the original matrix is greater than 1, this decomposition will not result in the original matrix.

+
+ +
+


In practice, we often work with datasets containing many features, so we usually want to construct decompositions where the dimensionality is below the rank of the original matrix. While this does not recover the data exactly, we can still provide approximate reconstructions of the matrix.

+

In the next section, we will discuss a method to automatically and approximately factorize data. This avoids redundant features and makes computation easier because we can train on less data. Since some approximations are better than others, we will also discuss how the method helps us capture a lot of information in a low number of dimensions.

+
+
+

24.4 Principal Component Analysis (PCA)

+

In PCA, our goal is to transform observations from high-dimensional data down to low dimensions (often 2, as most visualizations are 2D) through linear transformations. In other words, we want to find a linear transformation that creates a low-dimension representation that captures as much of the original data’s total variance as possible.

+
+ +
+

We often perform PCA during the Exploratory Data Analysis (EDA) stage of our data science lifecycle when we don’t know what model to use. It helps us with:

+
    +
  • Visually identifying clusters of similar observations in high dimensions.
  • +
  • Removing irrelevant dimensions if we suspect that the dataset is inherently low rank. For example, if the columns are collinear, there are many attributes, but only a few mostly determine the rest through linear associations.
  • +
  • Creating a transformed dataset of decorrelated features.
  • +
+
+ +
+

There are two equivalent ways of framing PCA:

+
    +
  1. Finding directions of maximum variability in the data.
  2. +
  3. Finding the low dimensional (rank) matrix factorization that best approximates the data.
  4. +
+

To execute the first framing, variance maximization (the more common one), we can compute the variance of each attribute with np.var and then keep the \(k\) attributes with the highest variance, as sketched below. However, this approach limits us to working with attributes individually; it cannot resolve collinearity, and we cannot combine features.
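Here is a minimal sketch of that first approach, using a few numeric columns of the mpg table from the demo above:

# Naive dimensionality reduction: keep the k = 2 attributes with the
# highest variance (no combining of features)
numeric = mpg[["displacement", "horsepower", "weight", "acceleration", "mpg"]]
top_2 = numeric.var().nlargest(2).index
numeric[top_2].head()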

+

The second approach uses PCA to construct principal components with the most variance in the data (even higher than the first approach) using linear combinations of features. We’ll describe the procedure in the next section.

+
+

24.4.1 PCA Procedure (Overview)

+

To perform PCA on a matrix:

+
    +
  1. Center the data matrix by subtracting the mean of each attribute column.
  2. +
  3. To find the \(i\)-th principal component, \(v_i\): +
      +
    1. \(v\) is a unit vector that linearly combines the attributes.
    2. +
    3. \(v\) gives a one-dimensional projection of the data.
    4. +
    5. \(v\) is chosen to maximize the variance along the projection onto \(v\). This is equivalent to minimizing the sum of squared distances between each point and its projection onto \(v\).
    6. +
    7. Choose \(v\) such that it is orthogonal to all previous principal components.
    8. +
  4. +
+

The \(k\) principal components capture the most variance of any \(k\)-dimensional reduction of the data matrix.

+

In practice, however, we don’t carry out the procedures in step 2 because they take too long to compute. Instead, we use singular value decomposition (SVD) to find all principal components efficiently.
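As a preview of what’s ahead (a sketch, assuming the mpg table and NumPy import from the demo above), the full procedure is only a few lines once we use SVD:

# Sketch of PCA via SVD: center the data, decompose, and project onto
# the first k principal components
X = mpg[["displacement", "horsepower", "weight", "acceleration"]].to_numpy()
X_centered = X - X.mean(axis=0)

U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 2
principal_components = Vt[:k]    # first k rows of V^T
Z = X_centered @ Vt[:k].T        # latent representation; equivalently U[:, :k] * S[:k]
Z[:5]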

+
+
+

24.4.2 Deriving PCA as Error Minimization

+

In this section, we will derive PCA keeping the following goal in mind: minimize the reconstruction loss for our matrix factorization model. You are not expected to be able to redo this derivation, but understanding it may help with future assignments.

+

Given a matrix \(X\) with \(n\) rows and \(d\) columns, our goal is to find its best decomposition such that \[X \approx Z W\] Z has \(n\) rows and \(k\) columns; W has \(k\) rows and \(d\) columns.

+
+ +
+

To measure the accuracy of our reconstruction, we define the reconstruction loss below, where \(X_i\) is the \(i\)-th row vector of \(X\), and \(Z_i\) is the \(i\)-th row vector of \(Z\):

+
+ +
+ +

There are many solutions to the above, so let’s constrain our model such that \(W\) is a row-orthonormal matrix (i.e. \(WW^T=I\)) where the rows of \(W\) are our principal components.

+

In our derivation, let’s first work with the case where \(k=1\). Here Z will be an \(n \times 1\) vector and W will be a \(1 \times d\) vector.

+

\[\begin{aligned} +L(z,w) &= \frac{1}{n}\sum_{i=1}^{n}(X_i - z_{i}w)(X_i - z_{i}w)^T \\ +&= \frac{1}{n}\sum_{i=1}^{n}(X_{i}X_{i}^T - 2z_{i}X_{i}w^T + z_{i}^{2}ww^T) & \text{(expand the loss)} \\ +&= \frac{1}{n}\sum_{i=1}^{n}(-2z_{i}X_{i}w^T + z_{i}^{2}) & \text{(First term is constant and }ww^T=1\text{ by orthonormality)} \\ +\end{aligned}\]

+

Now, we can take the derivative with respect to \(Z_i\). \[\begin{aligned} +\frac{\partial{L(Z,W)}}{\partial{z_i}} &= \frac{1}{n}(-2X_{i}w^T + 2z_{i}) \\ +z_i &= X_iw^T & \text{(Setting derivative equal to 0 and solving for }z_i\text{)}\end{aligned}\]

+

We can now substitute our solution for \(z_i\) in our loss function:

+

\[\begin{aligned} +L(z,w) &= \frac{1}{n}\sum_{i=1}^{n}(-2z_{i}X_{i}w^T + z_{i}^{2}) \\ +L(z=X_iw^T,w) &= \frac{1}{n}\sum_{i=1}^{n}(-2X_iw^TX_{i}w^T + (X_iw^T)^{2}) \\ +&= \frac{1}{n}\sum_{i=1}^{n}(-X_iw^TX_{i}w^T) \\ +&= \frac{1}{n}\sum_{i=1}^{n}(-wX_{i}^TX_{i}w^T) \\ +&= -w\frac{1}{n}\sum_{i=1}^{n}(X_i^TX_{i})w^T \\ +&= -w\Sigma w^T +\end{aligned}\]

+

Now, we need to minimize our loss with respect to \(w\). Since we have a negative sign, one way we can do this is by making \(w\) really big. However, we also have the orthonormality constraint \(ww^T=1\). To incorporate this constraint into the equation, we can add a Lagrange multiplier, \(\lambda\). Note that Lagrange multipliers are out of scope for Data 100.

+

\[ +L(w,\lambda) = -w\Sigma w^T + \lambda(ww^T-1) +\]

+

Taking the derivative with respect to \(w\), \[\begin{aligned} +\frac{\partial{L(w,\lambda)}}{\partial{w}} &= -2\Sigma w^T + 2\lambda w^T \\ +2\Sigma w^T - 2\lambda w^T &= 0 & \text{(Setting derivative equal to 0)} \\ +\Sigma w^T &= \lambda w^T \\ +\end{aligned}\]

+

This result implies that:

+
    +
  • \(w\) is a unitary eigenvector of the covariance matrix. This means that \(||w||^2 = ww^T = 1\)
  • +
  • The error is minimized when \(w\) is the eigenvector with the largest eigenvalue \(\lambda\).
  • +
+

This derivation can inductively be used for the next (second) principal component (not shown).

+

The final takeaway from this derivation is that the principal components are the eigenvectors with the largest eigenvalues of the covariance matrix. These are the directions of the maximum variance of the data. We can construct the latent factors (the Z matrix) by projecting the centered data X onto the principal component vectors:

+
+ +
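To connect the derivation back to code, here is a minimal sketch (assuming the same mpg columns as in the earlier demo) that finds the principal components as the top eigenvectors of the covariance matrix; the resulting components may differ from SVD-based ones by a sign.

# Sketch: principal components as eigenvectors of the covariance matrix
X = mpg[["displacement", "horsepower", "weight", "acceleration"]].to_numpy()
X_centered = X - X.mean(axis=0)

Sigma = X_centered.T @ X_centered / len(X_centered)   # covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(Sigma)     # eigh returns ascending order

W = eigenvectors[:, ::-1].T      # rows of W are the principal components, largest eigenvalue first
Z = X_centered @ W.T             # latent factors: project the centered data onto the components
Z[:5]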
+ + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/pca_2/images/Z.png b/docs/pca_2/images/Z.png new file mode 100644 index 000000000..63b012730 Binary files /dev/null and b/docs/pca_2/images/Z.png differ diff --git a/docs/pca_2/images/diag_matrix.png b/docs/pca_2/images/diag_matrix.png new file mode 100644 index 000000000..b8e5c8154 Binary files /dev/null and b/docs/pca_2/images/diag_matrix.png differ diff --git a/docs/pca_2/images/lin_reg.png b/docs/pca_2/images/lin_reg.png new file mode 100644 index 000000000..f1806cc49 Binary files /dev/null and b/docs/pca_2/images/lin_reg.png differ diff --git a/docs/pca_2/images/lin_reg_reverse.png b/docs/pca_2/images/lin_reg_reverse.png new file mode 100644 index 000000000..0238aa061 Binary files /dev/null and b/docs/pca_2/images/lin_reg_reverse.png differ diff --git a/docs/pca_2/images/mnist.png b/docs/pca_2/images/mnist.png new file mode 100644 index 000000000..d40283d42 Binary files /dev/null and b/docs/pca_2/images/mnist.png differ diff --git a/docs/pca_2/images/orthonormal.png b/docs/pca_2/images/orthonormal.png new file mode 100644 index 000000000..9256b5c21 Binary files /dev/null and b/docs/pca_2/images/orthonormal.png differ diff --git a/docs/pca_2/images/pca_plot.png b/docs/pca_2/images/pca_plot.png new file mode 100644 index 000000000..3aad5a3bf Binary files /dev/null and b/docs/pca_2/images/pca_plot.png differ diff --git a/docs/pca_2/images/rank1.png b/docs/pca_2/images/rank1.png new file mode 100644 index 000000000..f2df603c3 Binary files /dev/null and b/docs/pca_2/images/rank1.png differ diff --git a/docs/pca_2/images/rotate_center_plot.png b/docs/pca_2/images/rotate_center_plot.png new file mode 100644 index 000000000..878afe51e Binary files /dev/null and b/docs/pca_2/images/rotate_center_plot.png differ diff --git a/docs/pca_2/images/s.png b/docs/pca_2/images/s.png new file mode 100644 index 000000000..e4c769995 Binary files /dev/null and b/docs/pca_2/images/s.png differ diff --git a/docs/pca_2/images/scree_plot.png b/docs/pca_2/images/scree_plot.png new file mode 100644 index 000000000..c4400a020 Binary files /dev/null and b/docs/pca_2/images/scree_plot.png differ diff --git a/docs/pca_2/images/slide10.png b/docs/pca_2/images/slide10.png new file mode 100644 index 000000000..e7578f8c0 Binary files /dev/null and b/docs/pca_2/images/slide10.png differ diff --git a/docs/pca_2/images/slide16.png b/docs/pca_2/images/slide16.png new file mode 100644 index 000000000..9b46197c1 Binary files /dev/null and b/docs/pca_2/images/slide16.png differ diff --git a/docs/pca_2/images/slide17_2.png b/docs/pca_2/images/slide17_2.png new file mode 100644 index 000000000..5261c9c3c Binary files /dev/null and b/docs/pca_2/images/slide17_2.png differ diff --git a/docs/pca_2/images/slide21.png b/docs/pca_2/images/slide21.png new file mode 100644 index 000000000..b008c0969 Binary files /dev/null and b/docs/pca_2/images/slide21.png differ diff --git a/docs/pca_2/images/u.png b/docs/pca_2/images/u.png new file mode 100644 index 000000000..e18743ae6 Binary files /dev/null and b/docs/pca_2/images/u.png differ diff --git a/docs/pca_2/images/v.png b/docs/pca_2/images/v.png new file mode 100644 index 000000000..7a4ec99a6 Binary files /dev/null and b/docs/pca_2/images/v.png differ diff --git a/docs/pca_2/pca_2.html b/docs/pca_2/pca_2.html new file mode 100644 index 000000000..f757ae5df --- /dev/null +++ b/docs/pca_2/pca_2.html @@ -0,0 +1,3169 @@ + + + + + + + + + +25  PCA II – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + 
+ + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

25  PCA II

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
    +
  • Dissect Singular Value Decomposition (SVD) and use it to calculate principal components
  • +
  • Develop a deeper understanding of how to interpret Principal Component Analysis (PCA)
  • +
  • See applications of PCA in some real-world contexts
  • +
+
+
+
+

25.1 Dimensionality Reduction

+

We often work with high-dimensional data that contain many columns/features. Given all these dimensions, this data can be difficult to visualize and model. However, not all the data in this high-dimensional space is useful – there could be repeated features or outliers that make the data seem more complex than it really is. The most concise representation of high-dimensional data is its intrinsic dimension. Our goal with this lecture is to use dimensionality reduction to find the intrinsic dimension of a high-dimensional dataset. In other words, we want to find a smaller set of new features/columns that approximates the original data well without losing that much information. This is especially useful because this smaller set of features allows us to better visualize the data and do EDA to understand which modeling techniques would fit the data well.

+
+

25.1.1 Loss Minimization

+

In order to find the intrinsic dimension of a high-dimensional dataset, we’ll use techniques from linear algebra. Suppose we have a high-dimensional dataset, \(X\), that has \(n\) rows and \(d\) columns. We want to factor (split) \(X\) into two matrices, \(Z\) and \(W\). \(Z\) has \(n\) rows and \(k\) columns; \(W\) has \(k\) rows and \(d\) columns.

+

\[ X \approx ZW\]

+

We can reframe this problem as a loss function: in other words, if we want \(X\) to roughly equal \(ZW\), their difference should be as small as possible, ideally 0. This difference becomes our loss function, \(L(Z, W)\):

+

\[L(Z, W) = \frac{1}{n}\sum_{i=1}^{n}||X_i - Z_iW||^2\]

+

Breaking down the variables in this formula:

+
    +
  • \(X_i\): A row vector from the original data matrix \(X\), which we can assume is centered to a mean of 0.
  • +
  • \(Z_i\): A row vector from the lower-dimension matrix \(Z\). The rows of \(Z\) are also known as latent vectors and are used for EDA.
  • +
  • \(W\): The rows of \(W\) are the principal components. We constrain our model so that \(W\) is a row-orthonormal matrix (e.g., \(WW^T = I\)).
  • +
+

Using calculus and optimization techniques (take EECS 127 if you’re interested!), we find that this loss is minimized when \[Z = XW^T\] The proof for this is out of scope for Data 100, but for those who are interested, we:

+
    +
  • Use Lagrange multipliers to introduce the orthonormality constraint on \(W\).
  • +
  • Take the derivative with respect to \(W\) (which requires vector calculus) and set it equal to 0.
  • +
+

This gives us a very cool result of

+

\[\Sigma w^T = \lambda w^T\]

+

\(\Sigma\) is the covariance matrix of \(X\). The equation above implies that:

+
    +
  1. \(w\) is a unitary eigenvector of the covariance matrix \(\Sigma\). In other words, its norm is equal to 1: \(||w||^2 = ww^T = 1\).
  2. +
  3. The loss is minimized when \(w\) is the eigenvector with the largest eigenvalue \(\lambda\).
  4. +
+

This tells us that the principal components (rows of \(W\)) are the eigenvectors with the largest eigenvalues of the covariance matrix \(\Sigma\). They represent the directions of maximum variance in the data. We can construct the latent factors, or the \(Z\) matrix, by projecting the centered data \(X\) onto the principal component vectors, \(W^T\).

+
+ +
+

But how do we compute the eigenvectors of \(\Sigma\)? Let’s dive into SVD to answer this question.

+
+
+
+

25.2 Singular Value Decomposition (SVD)

+

Singular value decomposition (SVD) is an important concept in linear algebra. Since this class requires a linear algebra course (MATH 54, MATH 56, or EECS 16A) as a pre/co-requisite, we assume you have taken or are taking a linear algebra course, so we won’t explain SVD in its entirety. In particular, we will go over:

+
    +
  • Why SVD is a valid decomposition of rectangular matrices
  • +
  • Why PCA is an application of SVD
  • +
+

We will not dive deep into the theory and details of SVD. Instead, we will only cover what is needed for a data science interpretation. If you’d like more information, check out EECS 16B Note 14 or EECS 16B Note 15.

+
+
+
+ +
+
+[Linear Algebra Review] Orthonormality +
+
+
+

Orthonormal is a combination of two words: orthogonal and normal.

+

When we say the columns of a matrix are orthonormal, we know that:

+
    +
  1. The columns are all orthogonal to each other (all pairs of columns have a dot product of zero)
  2. +
  3. All columns are unit vectors (the length of each column vector is 1) +
    + +
  4. +
+

Orthonormal matrices have a few important properties:

+
    +
  • Orthonormal inverse: If an \(m \times n\) matrix \(Q\) has orthonormal columns, then \(Q^TQ=Iₙ\); when \(Q\) is square (\(m = n\)), \(QQ^T= Iₘ\) as well (see the check after this list).
  • +
  • Rotation of coordinates: The linear transformation represented by an orthonormal matrix is often a rotation (and less often a reflection). We can imagine columns of the matrix as where the unit vectors of the original space will land.
  • +
+
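A small numerical check of these properties (a sketch that builds a tall matrix with orthonormal columns from a QR factorization):

import numpy as np

Q, _ = np.linalg.qr(np.random.randn(5, 3))   # Q is 5 x 3 with orthonormal columns

np.allclose(Q.T @ Q, np.eye(3))   # True:  Q^T Q = I_3
np.allclose(Q @ Q.T, np.eye(5))   # False: Q Q^T is not the identity unless Q is square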
+
+
+
+
+ +
+
+[Linear Algebra Review] Diagonal Matrices +
+
+
+

Diagonal matrices are square matrices with non-zero values on the diagonal axis and zeros everywhere else.

+

Right-multiplied diagonal matrices scale each column up or down by a constant factor. Geometrically, this transformation can be viewed as scaling the coordinate system.

+
+ +
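For instance (a tiny sketch):

# Right-multiplying by a diagonal matrix scales each column by the
# corresponding diagonal entry
A = np.array([[1., 2.],
              [3., 4.]])
D = np.diag([10., 0.5])
A @ D    # first column scaled by 10, second column scaled by 0.5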
+
+
+

Singular value decomposition (SVD) describes a matrix \(X\)’s decomposition into three matrices: \[ X = U S V^T \]

+

Let’s break down each of these terms one by one.

+
+

25.2.1 \(U\)

+
    +
  • \(U\) is an \(n \times d\) matrix: \(U \in \mathbb{R}^{n \times d}\).
  • +
  • Its columns are orthonormal. +
      +
    • \(\vec{u_i}^T\vec{u_j} = 0\) for all pairs \(i, j\).
    • +
    • All vectors \(\vec{u_i}\) are unit vectors where \(|| \vec{u_i} || = 1\) .
    • +
  • +
  • Columns of U are called the left singular vectors and are eigenvectors of \(XX^T\).
  • +
  • \(U^TU = I_d\). Because \(U\) is generally not square, \(UU^T\) need not equal \(I_n\).
  • +
  • We can think of \(U\) as a rotation.
  • +
+
+ +
+
+
+

25.2.2 \(S\)

+
    +
  • \(S\) is a \(d \times d\) matrix: \(S \in \mathbb{R}^{d \times d}\).
  • +
  • The majority of the matrix is zero.
  • +
  • It has \(r\) non-zero singular values, and \(r\) is the rank of \(X\). Note that rank \(r \leq d\).
  • +
  • Diagonal values (singular values \(s_1, s_2, ... s_r\)), are non-negative ordered from largest to smallest: \(s_1 \ge s_2 \ge ... \ge s_r > 0\).
  • +
  • We can think of \(S\) as a scaling operation.
  • +
+
+ +
+
+
+

25.2.3 \(V^T\)

+
    +
  • \(V^T\) is an \(d \times d\) matrix: \(V \in \mathbb{R}^{d \times d}\).
  • +
  • Columns of \(V\) are orthonormal, so the rows of \(V^T\) are orthonormal.
  • +
  • Columns of \(V\) are called the right singular vectors, and similarly to \(U\), are eigenvectors of \(X^TX\).
  • +
  • \(VV^T = V^TV = I_d\)
  • +
  • We can think of \(V\) as a rotation.
  • +
+
+ +
+ +
+
+

25.2.4 SVD in NumPy

+

For this demo, we’ll work with a rectangular dataset containing \(n=100\) rows and \(d=4\) columns.

+
+
+Code +
import pandas as pd
+import seaborn as sns
+import matplotlib.pyplot as plt
+import numpy as np
+
+np.random.seed(23)  # kallisti
+
+plt.rcParams["figure.figsize"] = (4, 4)
+plt.rcParams["figure.dpi"] = 150
+sns.set()
+
+rectangle = pd.read_csv("data/rectangle_data.csv")
+rectangle.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
widthheightareaperimeter
0864828
124812
21338
3932724
4987234
+ +
+
+
+

In NumPy, the SVD decomposition function can be called with np.linalg.svd (documentation). There are multiple versions of SVD; to get the version that we will follow, we need to set the full_matrices parameter to False.

+
+
U, S, Vt = np.linalg.svd(rectangle, full_matrices=False)
+
+

First, let’s examine U. As we can see, its dimensions are \(n \times d\).

+
+
U.shape
+
+
(100, 4)
+
+
+

The first 5 rows of U are shown below.

+
+
pd.DataFrame(U).head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
0123
0-0.1551510.064830-0.0299350.934418
1-0.038370-0.0891550.062019-0.299462
2-0.020357-0.0811380.0589970.006852
3-0.101519-0.076203-0.148160-0.011848
4-0.2189730.2064230.007274-0.056580
+ +
+
+
+

\(S\) is a little different in NumPy. Since the only useful values in the diagonal matrix \(S\) are the singular values on the diagonal axis, only those values are returned and they are stored in an array.

+

Our rectangle_data has a rank of \(3\), so we should have 3 non-zero singular values, sorted from largest to smallest.

+
+
S
+
+
array([3.62932568e+02, 6.29904732e+01, 2.56544651e+01, 2.56364534e-14])
+
+
+

It seems like we have 4 non-zero values instead of 3, but notice that the last value is so small (on the order of \(10^{-14}\)) that it’s practically \(0\). Hence, we can round the values to get 3 singular values.

+
+
np.round(S)
+
+
array([363.,  63.,  26.,   0.])
+
+
+

To get S in matrix format, we use np.diag.

+
+
Sm = np.diag(S)
+Sm
+
+
array([[3.62932568e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
+       [0.00000000e+00, 6.29904732e+01, 0.00000000e+00, 0.00000000e+00],
+       [0.00000000e+00, 0.00000000e+00, 2.56544651e+01, 0.00000000e+00],
+       [0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.56364534e-14]])
+
+
+

Finally, we can see that Vt is indeed a \(d \times d\) matrix.

+
+
Vt.shape
+
+
(4, 4)
+
+
+
+
pd.DataFrame(Vt)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
0123
0-0.146436-0.129942-8.100201e-01-0.552756
1-0.192736-0.1891285.863482e-01-0.763727
2-0.7049570.7091557.951614e-030.008396
3-0.666667-0.6666679.775109e-170.333333
+ +
+
+
+

To check that this SVD is a valid decomposition, we can reverse it and see if it matches our original table (it does, yay!).

+
+
pd.DataFrame(U @ Sm @ Vt).head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
0123
08.06.048.028.0
12.04.08.012.0
21.03.03.08.0
39.03.027.024.0
49.08.072.034.0
+ +
+
+
+
+
+
+

25.3 PCA with SVD

+

Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) can be easily mixed up, especially when you have to keep track of so many acronyms. Here is a quick summary:

+
    +
  • SVD: a linear algebra algorithm that splits a matrix into 3 component parts.
  • +
  • PCA: a data science procedure used for dimensionality reduction that uses SVD as one of the steps.
  • +
+ +
+

25.3.1 Deriving Principal Components From SVD

+

After centering the original data matrix \(X\) so that each column has a mean of 0, we find its SVD: \[ X = U S V^T \]

+

Because \(X\) is centered, the covariance matrix of \(X\), \(\Sigma\), is equal to \(X^T X\). Rearranging this equation, we get

+

\[ +\begin{align} +\Sigma &= X^T X \\ +&= (U S V^T)^T U S V^T \\ +&= V S^T U^T U S V^T & \text{U is orthonormal, so $U^T U = I$} \\ +&= V S^2 V^T +\end{align} +\]

+

Multiplying both sides by \(V\), we get

+

\[ +\begin{align} +\Sigma V &= VS^2 V^T V \\ +&= V S^2 +\end{align} +\]

+

This shows that the columns of \(V\) are the eigenvectors of the covariance matrix \(\Sigma\) and, therefore, the principal components. Additionally, the squared singular values \(S^2\) are the eigenvalues of \(\Sigma\).
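We can verify this numerically (a quick sketch that re-centers the rectangle data so that \(\Sigma = X^TX\)):

# The squared singular values of the centered data equal the eigenvalues of X^T X
Xc = rectangle - np.mean(rectangle, axis=0)
_, S_c, _ = np.linalg.svd(Xc, full_matrices=False)

eigenvalues = np.linalg.eigvalsh(Xc.T @ Xc)[::-1]   # largest first
np.allclose(eigenvalues, S_c**2)                    # True, up to numerical precision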

+ +

We’ve now shown that the first \(k\) columns of \(V\) (equivalently, the first \(k\) rows of \(V^{T}\)) are the first k principal components of \(X\). We can use them to construct the latent vector representation of \(X\), \(Z\), by projecting \(X\) onto the principal components.

+ +
+slide16 +
+

Equivalently, we can compute \(Z\) directly from the SVD:

+

\[ +\begin{align} +Z &= X V \\ +&= USV^T V \\ +&= U S +\end{align} +\]

+

\[Z = XV = US\]

+

In other words, we can construct \(X\)’s latent vector representation \(Z\) through:

+
    +
  1. Projecting \(X\) onto the first \(k\) columns of \(V\), \(V[:, :k]\)
  2. +
  3. Multiplying the first \(k\) columns of U and the first \(k\) rows of S
  4. +
+

Using \(Z\), we can approximately recover the centered \(X\) matrix by multiplying \(Z\) by \(V^T\): \[ Z V^T = XV V^T = USV^T = X\]

+

Note that to recover the original (uncentered) \(X\) matrix, we would also need to add back the mean.

+
+
+
+ +
+
+[Summary] Terminology +
+
+
+

Note: The notation used for PCA this semester differs a bit from previous semesters. Please pay careful attention to the terminology presented in this note.

+

To summarize the terminology and concepts we’ve covered so far:

+
    +
  1. Principal Component: The columns of \(V\) . These vectors specify the principal coordinate system and represent the directions along which the most variance in the data is captured.
  2. +
  3. Latent Vector Representation of \(X\): The projection of our data matrix \(X\) onto the principal components, \(Z = XV = US\). In previous semesters, the terminology was different and this was termed the principal components of \(X\). In other classes, the term principal coordinate is also used. The \(i\)-th latent vector refers to the \(i\)-th column of \(V\), corresponding to the \(i\)-th largest singular value of \(X\).
  4. +
  5. \(S\) (as in SVD): The diagonal matrix containing all the singular values of \(X\).
  6. +
  7. \(\Sigma\): The covariance matrix of \(X\). Assuming \(X\) is centered, \(\Sigma = X^T X\). In previous semesters, the singular value decomposition of \(X\) was written out as \(X = U{\Sigma}V^T\). Note the difference between \(\Sigma\) in that context compared to this semester.
  8. +
+
+
+
+
+

25.3.2 PCA Visualization

+
+slide17 +
+

As we discussed above, when conducting PCA, we first center the data matrix \(X\) and then rotate it such that the direction with the most variation (e.g., the direction that is most spread out) aligns with the x-axis.

+
+slide16 +
+

In particular, the elements of each column of \(V\) (row of \(V^{T}\)) rotate the original feature vectors, projecting \(X\) onto the principal components.

+

The first column of \(V\) indicates how each feature contributes (e.g. positive, negative, etc.) to principal component 1; it essentially assigns “weights” to each feature.

+

Coupled together, this interpretation also allows us to understand that:

+
    +
  • The principal components are all orthogonal to each other because the columns of \(V\) are orthonormal.
  • +
  • Principal components are axis-aligned. That is, if you plot two PCs on a 2D plane, one will lie on the x-axis and the other on the y-axis.
  • +
  • Principal components are linear combinations of columns in our data \(X\).
  • +
+
+
+

25.3.3 Using Principal Components

+

Let’s summarize the steps to obtain Principal Components via SVD:

+
    +
  1. Center the data matrix \(X\) by subtracting the mean of each attribute column.

  2. +
  3. To find the \(k\) principal components:

    +
      +
    1. Compute the SVD of the data matrix (\(X = U{S}V^{T}\)).
    2. +
    3. The first \(k\) columns of \(V\) contain the \(k\) principal components of \(X\). The \(k\)-th column of \(V\) is also known as the \(k\)-th latent vector and corresponds to the \(k\)-th largest singular value of \(X\).
    4. +
  4. +
+
+
+

25.3.4 Code Demo

+

Let’s now walk through an example where we compute PCA using SVD. In order to get the first \(k\) principal components from an \(n \times d\) matrix \(X\), we:

+
    +
  1. Center \(X\) by subtracting the mean from each column. Notice how we specify axis=0 so that the mean is computed per column.
  2. +
+
+
centered_df = rectangle - np.mean(rectangle, axis=0)
+centered_df.head(5)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
widthheightareaperimeter
02.971.3524.788.64
1-3.03-0.65-15.22-7.36
2-4.03-1.65-20.22-11.36
33.97-1.653.784.64
43.973.3548.7814.64
+ +
+
+
+
    +
  2. Get the Singular Value Decomposition of the centered \(X\): \(U\), \(S\), and \(V^T\)
  2. +
+
+
U, S, Vt = np.linalg.svd(centered_df, full_matrices=False)
+Sm = pd.DataFrame(np.diag(np.round(S, 1)))
+
+
    +
  3. Take the first \(k\) columns of \(V\). These are the first \(k\) principal components of \(X\).
  2. +
+
+
two_PCs = Vt.T[:, :2]
+pd.DataFrame(two_PCs).head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
01
0-0.0986310.668460
1-0.072956-0.374186
2-0.931226-0.258375
3-0.3431730.588548
+ +
+
+
+
+
+
+

25.4 Data Variance and Centering

+

We define the total variance of a data matrix as the sum of variances of attributes. The principal components are a low-dimension representation that capture as much of the original data’s total variance as possible. Formally, the \(i\)-th singular value tells us the component score, or how much of the data variance is captured by the \(i\)-th principal component. Assuming the number of datapoints is \(n\):

+

\[\text{i-th component score} = \frac{(\text{i-th singular value})^2}{n}\]

+

Summing up the component scores is equivalent to computing the total variance if we center our data.
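A quick numerical check (a sketch reusing centered_df and S from the demo above):

# The component scores sum to the total variance of the centered data
n = len(centered_df)
total_variance = centered_df.var(axis=0, ddof=0).sum()   # sum of column variances
np.allclose((S**2).sum() / n, total_variance)            # True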

+

Data Centering: PCA includes a data-centering step that precedes the singular value decomposition; when the data has been centered, the component score is defined as above.

+

If you want to dive deeper into PCA, Steve Brunton’s SVD Video Series is a great resource.

+
+
+

25.5 Interpreting PCA

+
+

25.5.1 PCA Plot

+

We often plot the first two principal components using a scatter plot, with PC1 on the \(x\)-axis and PC2 on the \(y\)-axis. This is often called a PCA plot.

+

If the first two singular values are large and all others are small, then two dimensions are enough to describe most of what distinguishes one observation from another. If not, a PCA plot omits a lot of information.

+

PCA plots help us assess similarities between our data points and if there are any clusters in our dataset. In the case study before, for example, we could create the following PCA plot:

+
+pca_plot +
+
+
+

25.5.2 Scree Plots

+

A scree plot shows the variance ratio captured by each principal component, with the largest variance ratio first. They help us visually determine the number of dimensions needed to describe the data reasonably. The singular values that fall in the region of the plot after a large drop-off correspond to principal components that are not needed to describe the data since they explain a relatively low proportion of the total variance of the data. This point where adding more principal components results in diminishing returns is called the “elbow” and is the point just before the line flattens out. Using this “elbow method”, we can see that the elbow is at the second principal component.

+
+scree_plot +
+
+
+

25.5.3 Biplots

+

Biplots superimpose the directions onto the plot of PC1 vs. PC2, where vector \(j\) corresponds to the direction for feature \(j\) (e.g., \(v_{1j}, v_{2j}\)). There are several ways to scale biplot vectors – in this course, we plot the direction itself. For other scalings, which can lead to more interpretable directions/loadings, see SAS biplots.

+

Through biplots, we can interpret how features correlate with the principal components shown: positively, negatively, or not much at all.

+
+slide17_2 +
+

The directions of each arrow are (\(v_1\), \(v_2\)), where \(v_1\) and \(v_2\) are how that specific feature column contributes to PC1 and PC2, respectively. \(v_1\) and \(v_2\) are elements of the first and second columns of \(V\), respectively (i.e., the first two rows of \(V^T\)).

+

Say we were considering feature 3, and say that was the purple arrow labeled “520” here (pointing bottom right).

+
    +
  • \(v_1\) and \(v_2\) are the third elements of the respective columns in \(V\). They scale feature 3’s column vector in the linear transformation to PC1 and PC2, respectively.
  • +
  • Here, we would infer that \(v_1\) (in the \(x\)/PC1-direction) is positive, meaning that a linear increase in feature 3 corresponds to a linear increase in PC1, i.e., feature 3 and PC1 are positively correlated.
  • +
  • \(v_2\) (in the \(y\)/PC2-direction) is negative, meaning that a linear increase in feature 3 corresponds to a linear decrease in PC2, i.e., feature 3 and PC2 are negatively correlated.
  • +
+
+
+
+

25.6 Example 1: House of Representatives Voting

+

Let’s examine how the House of Representatives (of the 116th Congress, 1st session) voted in the month of September 2019.

+

Specifically, we’ll look at the records of Roll call votes. From the U.S. Senate (link): roll call votes occur when a representative or senator votes “yea” or “nay” so that the names of members voting on each side are recorded. A voice vote is a vote in which those in favor or against a measure say “yea” or “nay,” respectively, without the names or tallies of members voting on each side being recorded.

+
+
+Code +
import pandas as pd
+import seaborn as sns
+import matplotlib.pyplot as plt
+import numpy as np
+import yaml
+from datetime import datetime
+import plotly.express as px
+import plotly.graph_objects as go
+
+
+votes = pd.read_csv("data/votes.csv")
+votes = votes.astype({"roll call": str})
+votes.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
chambersessionroll callmembervote
0House1555A000374Not Voting
1House1555A000370Yes
2House1555A000055No
3House1555A000371Yes
4House1555A000372No
+ +
+
+
+

Suppose we pivot this table to group each legislator and their voting pattern across every (roll call) vote in this month. We mark 1 if the legislator voted Yes (“yea”), and 0 otherwise (“No”, “nay”, no vote, speaker, etc.).

+
+
+Code +
def was_yes(s):
+    return 1 if s.iloc[0] == "Yes" else 0
+
+
+vote_pivot = votes.pivot_table(
+    index="member", columns="roll call", values="vote", aggfunc=was_yes, fill_value=0
+)
+print(vote_pivot.shape)
+vote_pivot.head()
+
+
+
(441, 41)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
roll call515516517518519520521522523524...546547548549550551552553554555
member
A0000551000110111...0010010010
A0003670000000000...0111101101
A0003691100110111...0010010010
A0003701111101000...1111101111
A0003711111101000...1111101111
+ +

5 rows × 41 columns

+
+
+
+

Do legislators’ roll call votes show a relationship with their political party?

+
+

25.6.1 PCA with SVD

+

While we could consider loading information about the legislator, such as their party, and see how this relates to their voting pattern, it turns out that we can do a lot with PCA to cluster legislators by how they vote. Let’s calculate the principal components using the SVD method.

+
+
vote_pivot_centered = vote_pivot - np.mean(vote_pivot, axis=0)
+u, s, vt = np.linalg.svd(vote_pivot_centered, full_matrices=False) # SVD
+
+

We can use the singular values in s to construct a scree plot:

+
+
fig = px.line(y=s**2 / sum(s**2), title='Variance Explained', width=700, height=600, markers=True)
+fig.update_xaxes(title_text='Principal Component i')
+fig.update_yaxes(title_text='Proportion of Variance Explained')
+
+
+
+
+

It looks like this graph plateaus after the third principal component, so our “elbow” is at PC3, and most of the variance is captured by just the first three principal components. Let’s use these PCs to visualize the latent vector representation of \(X\)!

+
+
# Calculate the latent vector representation (US or XV)
+# using the first 3 principal components
+vote_2d = pd.DataFrame(index=vote_pivot_centered.index)
+vote_2d[["z1", "z2", "z3"]] = (u * s)[:, :3]
+
+# Plot the latent vector representation
+fig = px.scatter_3d(vote_2d, x='z1', y='z2', z='z3', title='Vote Data', width=800, height=600)
+fig.update_traces(marker=dict(size=5))
+
+
+
+
+

Based on the plot above, it looks like there are two clusters of data points. What do you think these correspond to?

+

By incorporating member information (source), we can augment our graph with biographic data like each member’s party and gender.

+
+
+Code +
legislators_data = yaml.safe_load(open("data/legislators-2019.yaml"))
+
+
+def to_date(s):
+    return datetime.strptime(s, "%Y-%m-%d")
+
+
+legs = pd.DataFrame(
+    columns=[
+        "leg_id",
+        "first",
+        "last",
+        "gender",
+        "state",
+        "chamber",
+        "party",
+        "birthday",
+    ],
+    data=[
+        [
+            x["id"]["bioguide"],
+            x["name"]["first"],
+            x["name"]["last"],
+            x["bio"]["gender"],
+            x["terms"][-1]["state"],
+            x["terms"][-1]["type"],
+            x["terms"][-1]["party"],
+            to_date(x["bio"]["birthday"]),
+        ]
+        for x in legislators_data
+    ],
+)
+legs["age"] = 2024 - legs["birthday"].dt.year
+legs.set_index("leg_id")
+legs.sort_index()
+
+vote_2d = vote_2d.join(legs.set_index("leg_id")).dropna()
+
+np.random.seed(42)
+vote_2d["z1_jittered"] = vote_2d["z1"] + np.random.normal(0, 0.1, len(vote_2d))
+vote_2d["z2_jittered"] = vote_2d["z2"] + np.random.normal(0, 0.1, len(vote_2d))
+vote_2d["z3_jittered"] = vote_2d["z3"] + np.random.normal(0, 0.1, len(vote_2d))
+
+px.scatter_3d(vote_2d, x='z1_jittered', y='z2_jittered', z='z3_jittered', color='party', symbol="gender", size='age',
+           title='Vote Data', width=800, height=600, size_max=10,
+           opacity = 0.7,
+           color_discrete_map={'Democrat':'blue', 'Republican':'red', "Independent": "green"},
+           hover_data=['first', 'last', 'state', 'party', 'gender', 'age'])
+
+
+
+
+
+

Using SVD and PCA, we can clearly see a separation between the red dots (Republican) and blue dots (Democrat).

+ +
+
+

25.6.2 Exploring the Principal Components

+

We can also look at \(V^T\) directly to try to gain insight into why each component is as it is.

+
+
+Code +
fig_eig = px.bar(x=vote_pivot_centered.columns, y=vt[0, :])
+# extract the trace from the figure
+fig_eig.show()
+
+
+
+
+
+

We have the party affiliation labels so we can see if this eigenvector aligns with one of the parties.

+
+
+Code +
party_line_votes = (
+    vote_pivot_centered.join(legs.set_index("leg_id")["party"])
+    .groupby("party")
+    .mean()
+    .T.reset_index()
+    .rename(columns={"index": "call"})
+    .melt("call")
+)
+fig = px.bar(
+    party_line_votes,
+    x="call", y="value", facet_row = "party", color="party",
+    color_discrete_map={'Democrat':'blue', 'Republican':'red', "Independent": "green"})
+fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
+
+
+
+
+
+
+
+

25.6.3 Biplot

+
+
+Code +
loadings = pd.DataFrame(
+    {"pc1": np.sqrt(s[0]) * vt[0, :], "pc2": np.sqrt(s[1]) * vt[1, :]},
+    index=vote_pivot_centered.columns,
+)
+
+vote_2d["num votes"] = votes[votes["vote"].isin(["Yes", "No"])].groupby("member").size()
+vote_2d.dropna(inplace=True)
+
+fig = px.scatter(
+    vote_2d, 
+    x='z1_jittered', 
+    y='z2_jittered', 
+    color='party', 
+    symbol="gender", 
+    size='num votes',
+    title='Biplot', 
+    width=800, 
+    height=600, 
+    size_max=10,
+    opacity = 0.7,
+    color_discrete_map={'Democrat':'blue', 'Republican':'red', "Independent": "green"},
+    hover_data=['first', 'last', 'state', 'party', 'gender', 'age'])
+
+for (call, pc1, pc2) in loadings.head(20).itertuples():
+    fig.add_scatter(x=[0,pc1], y=[0,pc2], name=call, 
+                    mode='lines+markers', textposition='top right',
+                    marker= dict(size=10,symbol= "arrow-bar-up", angleref="previous"))
+fig
+
+
+
+
+
+

Each roll call from the 116th Congress - 1st Session: https://clerk.house.gov/evs/2019/ROLL_500.asp

+ +

As shown in the demo, the primary goal of PCA is to transform observations from high-dimensional data down to low dimensions through linear transformations.

+
+
+
+

25.7 Example 2: Image Classification

+

In machine learning, PCA is often used as a preprocessing step prior to training a supervised model.

+

Let’s explore how PCA is useful for building an image classification model based on the Fashion-MNIST dataset, a dataset containing images of articles of clothing; these images are gray scale with a size of 28 by 28 pixels. The copyright for Fashion-MNIST is held by Zalando SE. Fashion-MNIST is licensed under the MIT license.

+
+slide21 +
+

First, we’ll load in the data.

+
+
+Code +
import requests
+from pathlib import Path
+import time
+import gzip
+import os
+import numpy as np
+import plotly.express as px
+
+def fetch_and_cache(data_url, file, data_dir="data", force=False):
+    """
+    Download and cache a url and return the file object.
+
+    data_url: the web address to download
+    file: the file in which to save the results.
+    data_dir: (default="data") the location to save the data
+    force: if true the file is always re-downloaded
+
+    return: The pathlib.Path object representing the file.
+    """
+
+    data_dir = Path(data_dir)
+    data_dir.mkdir(exist_ok=True)
+    file_path = data_dir / Path(file)
+    # If the file already exists and we want to force a download then
+    # delete the file first so that the creation date is correct.
+    if force and file_path.exists():
+        file_path.unlink()
+    if force or not file_path.exists():
+        print("Downloading...", end=" ")
+        resp = requests.get(data_url)
+        with file_path.open("wb") as f:
+            f.write(resp.content)
+        print("Done!")
+        last_modified_time = time.ctime(file_path.stat().st_mtime)
+    else:
+        last_modified_time = time.ctime(file_path.stat().st_mtime)
+        print("Using cached version that was downloaded (UTC):", last_modified_time)
+    return file_path
+
+
+def head(filename, lines=5):
+    """
+    Returns the first few lines of a file.
+
+    filename: the name of the file to open
+    lines: the number of lines to include
+
+    return: A list of the first few lines from the file.
+    """
+    from itertools import islice
+
+    with open(filename, "r") as f:
+        return list(islice(f, lines))
+
+
+def load_data():
+    """
+    Loads the Fashion-MNIST dataset.
+
+    This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories,
+    along with a test set of 10,000 images. This dataset can be used as
+    a drop-in replacement for MNIST.
+
+    The classes are:
+
+    | Label | Description |
+    |:-----:|-------------|
+    |   0   | T-shirt/top |
+    |   1   | Trouser     |
+    |   2   | Pullover    |
+    |   3   | Dress       |
+    |   4   | Coat        |
+    |   5   | Sandal      |
+    |   6   | Shirt       |
+    |   7   | Sneaker     |
+    |   8   | Bag         |
+    |   9   | Ankle boot  |
+
+    Returns:
+      Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.
+
+    **x_train**: uint8 NumPy array of grayscale image data with shapes
+      `(60000, 28, 28)`, containing the training data.
+
+    **y_train**: uint8 NumPy array of labels (integers in range 0-9)
+      with shape `(60000,)` for the training data.
+
+    **x_test**: uint8 NumPy array of grayscale image data with shapes
+      (10000, 28, 28), containing the test data.
+
+    **y_test**: uint8 NumPy array of labels (integers in range 0-9)
+      with shape `(10000,)` for the test data.
+
+    Example:
+
+    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
+    assert x_train.shape == (60000, 28, 28)
+    assert x_test.shape == (10000, 28, 28)
+    assert y_train.shape == (60000,)
+    assert y_test.shape == (10000,)
+
+    License:
+      The copyright for Fashion-MNIST is held by Zalando SE.
+      Fashion-MNIST is licensed under the [MIT license](
+      https://github.com/zalandoresearch/fashion-mnist/blob/master/LICENSE).
+
+    """
+    dirname = os.path.join("datasets", "fashion-mnist")
+    base = "https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
+    files = [
+        "train-labels-idx1-ubyte.gz",
+        "train-images-idx3-ubyte.gz",
+        "t10k-labels-idx1-ubyte.gz",
+        "t10k-images-idx3-ubyte.gz",
+    ]
+
+    paths = []
+    for fname in files:
+        paths.append(fetch_and_cache(base + fname, fname))
+        # paths.append(get_file(fname, origin=base + fname, cache_subdir=dirname))
+
+    with gzip.open(paths[0], "rb") as lbpath:
+        y_train = np.frombuffer(lbpath.read(), np.uint8, offset=8)
+
+    with gzip.open(paths[1], "rb") as imgpath:
+        x_train = np.frombuffer(imgpath.read(), np.uint8, offset=16).reshape(
+            len(y_train), 28, 28
+        )
+
+    with gzip.open(paths[2], "rb") as lbpath:
+        y_test = np.frombuffer(lbpath.read(), np.uint8, offset=8)
+
+    with gzip.open(paths[3], "rb") as imgpath:
+        x_test = np.frombuffer(imgpath.read(), np.uint8, offset=16).reshape(
+            len(y_test), 28, 28
+        )
+
+    return (x_train, y_train), (x_test, y_test)
+
+
+
+
+Code +
class_names = [
+    "T-shirt/top",
+    "Trouser",
+    "Pullover",
+    "Dress",
+    "Coat",
+    "Sandal",
+    "Shirt",
+    "Sneaker",
+    "Bag",
+    "Ankle boot",
+]
+class_dict = {i: class_name for i, class_name in enumerate(class_names)}
+
+(train_images, train_labels), (test_images, test_labels) = load_data()
+print("Training images", train_images.shape)
+print("Test images", test_images.shape)
+
+rng = np.random.default_rng(42)
+n = 5000
+sample_idx = rng.choice(np.arange(len(train_images)), size=n, replace=False)
+
+# Invert and normalize the images so they look better
+img_mat = -1 * train_images[sample_idx].astype(np.int16)
+img_mat = (img_mat - img_mat.min()) / (img_mat.max() - img_mat.min())
+
+images = pd.DataFrame(
+    {
+        "images": img_mat.tolist(),
+        "labels": train_labels[sample_idx],
+        "class": [class_dict[x] for x in train_labels[sample_idx]],
+    }
+)
+
+
+
Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024
+Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024
+Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024
+Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024
+Training images (60000, 28, 28)
+Test images (10000, 28, 28)
+
+
+

Let’s see what some of the images contained in this dataset look like.

+
+
+Code +
def show_images(images, ncols=5, max_images=30):
+    # convert the subset of images into an (n, 28, 28) matrix for facet visualization
+    img_mat = np.array(images.head(max_images)["images"].to_list())
+    fig = px.imshow(
+        img_mat,
+        color_continuous_scale="gray",
+        facet_col=0,
+        facet_col_wrap=ncols,
+        height=220 * int(np.ceil(len(images) / ncols)),
+    )
+    fig.update_layout(coloraxis_showscale=False)
+    # Extract the facet number and convert it back to the class label.
+    fig.for_each_annotation(
+        lambda a: a.update(text=images.iloc[int(a.text.split("=")[-1])]["class"])
+    )
+    return fig
+
+
+fig = show_images(images.groupby("class", as_index=False).sample(2), ncols=6)
+fig.show()
+
+
+
+
+
+

Let’s break this down further and look at it by class, or the category of clothing:

+
+
+Code +
print(class_dict)
+
+show_images(images.groupby('class',as_index=False).sample(2), ncols=6)
+
+
+
{0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat', 5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle boot'}
+
+
+
+
+
+
+

25.7.1 Raw Data

+

As we can see, each 28x28 pixel image is labelled by the category of clothing it belongs to. We humans can easily look at these images and identify the type of clothing being displayed, even if the image is a little blurry. However, this task is less intuitive for machine learning models. To illustrate this, let’s take a small sample of the training data to see how the images above are represented in their raw format:

+
+
+Code +
images.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
imageslabelsclass
0[[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,...3Dress
1[[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,...4Coat
2[[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,...0T-shirt/top
3[[1.0, 1.0, 1.0, 1.0, 1.0, 0.996078431372549, ...2Pullover
4[[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,...1Trouser
+ +
+
+
+

Each row represents one image. Every image belongs to a "class" of clothing with its enumerated "label". In place of a typically displayed image, the raw data contains a 28x28 2D array of pixel values; each pixel value is a float between 0 and 1. If we just focus on the images, we get a 3D matrix. You can think of this as a matrix containing 2D images.

+
+
X = np.array(images["images"].to_list())
+X.shape 
+
+
(5000, 28, 28)
+
+
+

However, we’re not used to working with 3D matrices for our training data X. Typical training data expects a vector of features for each datapoint, not a matrix per datapoint. We can reshape our 3D matrix so that it fits our typical training data by “unrolling” the 28x28 pixels into a single row vector containing 28*28 = 784 dimensions.

+
+
X = X.reshape(X.shape[0], -1)
+X.shape
+
+
(5000, 784)
+
+
+

What we have now is 5000 datapoints that each have 784 features. That’s a lot of features! Not only would training a model on this data take a very long time, it’s also very likely that many of those features are linearly dependent and therefore redundant. PCA is a very good strategy to use in situations like these, when there are lots of features but we want to remove redundant information.

+
+
+

25.7.2 PCA with sklearn

+

To perform PCA, let’s begin by centering our data.

+
+
X = X - X.mean(axis=0)
+
+

We can run PCA using sklearn’s PCA package.

+
+
from sklearn.decomposition import PCA
+
+n_comps = 50
+pca = PCA(n_components=n_comps)
+pca.fit(X)
+
+
PCA(n_components=50)
In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook.
On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.
+
+
+
+
+

25.7.3 Examining PCA Results

+

Now that sklearn helped us find the principal components, let’s visualize a scree plot.

+
+
# Make a line plot and show markers
+fig = px.line(y=pca.explained_variance_ratio_ * 100, markers=True)
+fig.show()
+
+
+
+
+

We can see that the line starts flattening out around 2 or 3, which suggests that most of the variance is explained by just the first two or three principal components. To illustrate this, let’s plot the first three principal components and the datapoints’ corresponding classes. Can you identify any patterns?

+
+
images[['z1', 'z2', 'z3']] = pca.transform(X)[:, :3]
+fig = px.scatter_3d(images, x='z1', y='z2', z='z3', color='class', hover_data=['labels'], 
+              width=1000, height=800)
+# set marker size to 5
+fig.update_traces(marker=dict(size=5))
+
+
+
+
+
+
+
+

25.8 Why Perform PCA

+

As we saw in the demos, we often perform PCA during the Exploratory Data Analysis (EDA) stage of our data science lifecycle (if we already know what to model, we probably don’t need PCA!). It helps us with:

+
    +
  • Visually identifying clusters of similar observations in high dimensions.
  • +
  • Removing irrelevant dimensions if we suspect that the dataset is inherently low rank. For example, if the columns are collinear: there are many attributes but only a few mostly determine the rest through linear associations.
  • +
  • Finding a small basis for representing variations in complex things, e.g., images, genes.
  • +
  • Reducing the number of dimensions to make some computations cheaper.
  • +
+
+

25.8.1 Why PCA, then Model?

+
    +
  1. Reduces dimensionality, allowing us to speed up training and reduce the number of features, etc.
  2. +
  3. Avoids multicollinearity in the new features created (i.e., the principal components); a short sketch of this pattern follows below.
  4. +
+
+slide21 +
+
+
+
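As referenced in the list above, one common pattern is to chain PCA with a downstream classifier. The following is a minimal sketch only: it assumes the flattened image matrix X and the images DataFrame from the demo above, and the choice of logistic regression here is purely illustrative rather than something prescribed by the course.

from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumes X is the (5000, 784) flattened image matrix and images["labels"]
# holds the class labels from the Fashion-MNIST demo above.
X_train, X_test, y_train, y_test = train_test_split(
    X, images["labels"], random_state=42
)

# Project onto 50 principal components, then fit an (illustrative) classifier.
model = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))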
+

25.9 (Bonus) Applications of PCA

+
+

25.9.1 PCA in Biology

+

PCA is commonly used in biomedical contexts, which have many named variables! It can be used to:

+
    +
  1. Cluster data (Paper 1, Paper 2).
  2. +
  3. Identify correlated variables (interpret rows of \(V^{T}\) as linear coefficients) (Paper 3). Uses biplots.
  4. +
+
+
+
+

25.10 (Bonus) PCA vs. Regression

+
+

25.10.1 Regression: Minimizing Horizontal/Vertical Error

+

Suppose we know the child mortality rate of a given country. Linear regression tries to predict the fertility rate from the mortality rate; for example, if the mortality is 6, we might guess the fertility is near 4. The regression line tells us the “best” prediction of fertility given all possible mortality values by minimizing the root mean squared error. See the vertical red lines (note that only some are shown).

+
+ +
+


We can also perform a regression in the reverse direction. That is, given fertility, we try to predict mortality. In this case, we get a different regression line that minimizes the root mean squared length of the horizontal lines.

+
+ +
+
+
+
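To make the contrast concrete, here is a minimal sketch on hypothetical mortality/fertility-style data: regressing \(y\) on \(x\) and regressing \(x\) on \(y\) generally produce two different lines, because each minimizes a different (vertical vs. horizontal) error.

import numpy as np

rng = np.random.default_rng(0)
mortality = rng.uniform(0, 14, size=100)                           # hypothetical data
fertility = 0.5 * mortality + 1 + rng.normal(scale=0.8, size=100)

# Predict fertility from mortality (minimizes vertical errors)...
slope_fm, intercept_fm = np.polyfit(mortality, fertility, deg=1)
# ...versus predict mortality from fertility (minimizes horizontal errors).
slope_mf, intercept_mf = np.polyfit(fertility, mortality, deg=1)

# Expressed in the same (mortality, fertility) axes, the two slopes differ.
print(slope_fm, 1 / slope_mf)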

25.10.2 SVD: Minimizing Perpendicular Error

+

The rank-1 approximation is close to but not the same as the mortality regression line. Instead of minimizing horizontal or vertical error, our rank-1 approximation minimizes the error perpendicular to the subspace onto which we’re projecting. That is, SVD finds the line such that if we project our data onto that line, the error between the projection and our original data is minimized. The similarity of the rank-1 approximation and the fertility regression line was just a coincidence. Looking at adiposity and bicep size from our body measurements dataset, we see that the 1D subspace onto which we are projecting lies between the two regression lines.

+
+ +
+
+
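Here is a minimal sketch on hypothetical 2D data of computing the rank-1 approximation with SVD; the first row of \(V^T\) gives the direction of the 1D subspace, and the residual measures the perpendicular error:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                            # hypothetical 2D dataset
y = 2 * x + rng.normal(scale=0.5, size=200)
D = np.column_stack([x, y])
D_centered = D - D.mean(axis=0)

u, s, vt = np.linalg.svd(D_centered, full_matrices=False)
rank1 = s[0] * np.outer(u[:, 0], vt[0])             # best rank-1 approximation
perp_error = np.linalg.norm(D_centered - rank1)     # total perpendicular error
print(vt[0], perp_error)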
+

25.10.3 Beyond 1D and 2D

+

Even in higher dimensions, the idea behind principal components is the same! Suppose we have 30-dimensional data and decide to use the first 5 principal components. Our procedure minimizes the error between the original 30-dimensional data and the projection of that 30-dimensional data onto the “best” 5-dimensional subspace. See CS 189 Note 10 for more details.
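As a minimal sketch of this higher-dimensional picture (using hypothetical 30-dimensional data), we can project onto the first 5 principal components and measure the reconstruction error of mapping back into the original space:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))             # hypothetical 30-dimensional data
X_centered = X - X.mean(axis=0)

pca = PCA(n_components=5)
Z = pca.fit_transform(X_centered)          # 5-dimensional representation
X_hat = pca.inverse_transform(Z)           # projection back into 30-d space
print("reconstruction error:", np.linalg.norm(X_centered - X_hat))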

+
+
+
+

25.11 (Bonus) Automatic Factorization

+

One key fact to remember is that the decomposition is not arbitrary. The rank of a matrix limits how small our inner dimensions can be if we want to perfectly recreate our matrix. The proof for this is out of scope.

+

Even if we know we have to factorize our matrix using an inner dimension of \(R\), that still leaves a large space of solutions to traverse. What if we have a procedure to automatically factorize a rank \(R\) matrix into an \(R\)-dimensional representation with some transformation matrix?

+
    +
  • Lower dimensional representation avoids redundant features.
  • +
  • Imagine a 1000-dimensional dataset: If the rank is only 5, it’s much easier to do EDA after this mystery procedure.
  • +
+

What if we wanted a 2D representation? It’s valuable to compress all of the data that is relevant into as few dimensions as possible in order to plot it efficiently. Some 2D matrices yield better approximations than others. How well can we do?

+
+
+

25.12 (Bonus) Proof of Component Score

+

The proof defining component score is out of scope for this class, but it is included below for your convenience.

+

Setup: Consider the design matrix \(X \in \mathbb{R}^{n \times d}\), where the \(j\)-th column (corresponding to the \(j\)-th feature) is \(x_j \in \mathbb{R}^n\) and the element in row \(i\), column \(j\) is \(x_{ij}\). Further, define \(\tilde{X}\) as the centered design matrix. The \(j\)-th column is \(\tilde{x}_j \in \mathbb{R}^n\) and the element in row \(i\), column \(j\) is \(\tilde{x}_{ij} = x_{ij} - \bar{x_j}\), where \(\bar{x_j}\) is the mean of the \(x_j\) column vector from the original \(X\).

+

Variance: Construct the covariance matrix: \(\frac{1}{n} \tilde{X}^T \tilde{X} \in \mathbb{R}^{d \times d}\). The \(j\)-th element along the diagonal is the variance of the \(j\)-th column of the original design matrix \(X\):

+

\[\left( \frac{1}{n} \tilde{X}^T \tilde{X} \right)_{jj} = \frac{1}{n} \tilde{x}_j ^T \tilde{x}_j = \frac{1}{n} \sum_{i=1}^n (\tilde{x}_{ij} )^2 = \frac{1}{n} \sum_{i=1}^n (x_{ij} - \bar{x_j})^2\]

+

SVD: Suppose singular value decomposition of the centered design matrix \(\tilde{X}\) yields \(\tilde{X} = U S V^T\), where \(U \in \mathbb{R}^{n \times d}\) and \(V \in \mathbb{R}^{d \times d}\) are matrices with orthonormal columns, and \(S \in \mathbb{R}^{d \times d}\) is a diagonal matrix with singular values of \(\tilde{X}\).

+

\[
\begin{aligned}
\tilde{X}^T \tilde{X} &= (U S V^T )^T (U S V^T) \\
&= V S U^T U S V^T & (S^T = S) \\
&= V S^2 V^T & (U^T U = I) \\
\frac{1}{n} \tilde{X}^T \tilde{X} &= \frac{1}{n} V S^2 V^T = V \left( \frac{1}{n} S^2 \right) V^T \\
\frac{1}{n} \tilde{X}^T \tilde{X} V &= V \left( \frac{1}{n} S^2 \right) V^T V = V \left( \frac{1}{n} S^2 \right) & \text{(right multiply by }V \rightarrow V^T V = I \text{)} \\
V^T \frac{1}{n} \tilde{X}^T \tilde{X} V &= V^T V \left( \frac{1}{n} S^2 \right) = \frac{1}{n} S^2 & \text{(left multiply by }V^T \rightarrow V^T V = I \text{)} \\
\left( V^T \frac{1}{n} \tilde{X}^T \tilde{X} V \right)_{jj} &= \frac{1}{n} S_j^2 & \text{(define } S_j \text{ as the } j\text{-th singular value)}
\end{aligned}
\]

+

The last line defines the \(j\)-th component score, \(\frac{1}{n} S_j^2\): the variance of the data projected onto the \(j\)-th principal component direction.
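A quick numerical check of this result, on hypothetical data: the component scores \(\frac{1}{n} S_j^2\) match the variances of the data projected onto each principal component, and they sum to the total variance of the original columns.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # hypothetical design matrix
X_tilde = X - X.mean(axis=0)                   # centered design matrix
n = X_tilde.shape[0]

u, s, vt = np.linalg.svd(X_tilde, full_matrices=False)
component_scores = s**2 / n                    # (1/n) S_j^2 for each j

Z = X_tilde @ vt.T                             # data projected onto the PCs
print(np.allclose(component_scores, Z.var(axis=0)))                    # True
print(np.allclose(component_scores.sum(), X_tilde.var(axis=0).sum()))  # True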

+ + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/probability_1/images/discrete_continuous.png b/docs/probability_1/images/discrete_continuous.png new file mode 100644 index 000000000..6c0100129 Binary files /dev/null and b/docs/probability_1/images/discrete_continuous.png differ diff --git a/docs/probability_1/images/distribution.png b/docs/probability_1/images/distribution.png new file mode 100644 index 000000000..ba92219a2 Binary files /dev/null and b/docs/probability_1/images/distribution.png differ diff --git a/docs/probability_1/images/exp_var.png b/docs/probability_1/images/exp_var.png new file mode 100644 index 000000000..ca2af61dd Binary files /dev/null and b/docs/probability_1/images/exp_var.png differ diff --git a/docs/probability_1/images/probability_areas.png b/docs/probability_1/images/probability_areas.png new file mode 100644 index 000000000..f9acbf7a3 Binary files /dev/null and b/docs/probability_1/images/probability_areas.png differ diff --git a/docs/probability_1/images/rv.png b/docs/probability_1/images/rv.png new file mode 100644 index 000000000..3a5f70bdf Binary files /dev/null and b/docs/probability_1/images/rv.png differ diff --git a/docs/probability_1/images/transformation.png b/docs/probability_1/images/transformation.png new file mode 100644 index 000000000..d6c454a56 Binary files /dev/null and b/docs/probability_1/images/transformation.png differ diff --git a/docs/probability_1/images/yz.png b/docs/probability_1/images/yz.png new file mode 100644 index 000000000..e0ca34e1f Binary files /dev/null and b/docs/probability_1/images/yz.png differ diff --git a/docs/probability_1/images/yz_distribution.png b/docs/probability_1/images/yz_distribution.png new file mode 100644 index 000000000..b208edafc Binary files /dev/null and b/docs/probability_1/images/yz_distribution.png differ diff --git a/docs/probability_1/probability_1.html b/docs/probability_1/probability_1.html new file mode 100644 index 000000000..dce1148b8 --- /dev/null +++ b/docs/probability_1/probability_1.html @@ -0,0 +1,1859 @@ + + + + + + + + + +17  Random Variables – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

17  Random Variables

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Define a random variable in terms of its distribution
  • +
  • Compute the expectation and variance of a random variable
  • +
  • Gain familiarity with the Bernoulli and binomial random variables
  • +
+
+
+
+

In the past few lectures, we’ve examined the role of complexity in influencing model performance. We’ve considered model complexity in the context of a tradeoff between two competing factors: model variance and training error.

+

So far, our analysis has been mostly qualitative. We’ve acknowledged that our choice of model complexity needs to strike a balance between model variance and training error, but we haven’t yet discussed why exactly this tradeoff exists.

+

To better understand the origin of this tradeoff, we will need to dive into random variables. The next two course notes on probability will be a brief digression from our work on modeling so we can build up the concepts needed to understand this so-called bias-variance tradeoff. Specifically, we will cover:

+
    +
  1. Random Variables: introduce random variables, considering the concepts of expectation, variance, and covariance
  2. +
  3. Estimators, Bias, and Variance: re-express the ideas of model variance and training error in terms of random variables and use this new perspective to investigate our choice of model complexity
  4. +
+

We’ll go over just enough probability to help you understand its implications for modeling, but if you want to go a step further, take Data 140, CS 70, and/or EECS 126.

+
+
+
+ +
+
+Data 8 Recap +
+
+
+
+
+

Recall the following concepts from Data 8:

+
    +
  1. Sample mean: The mean of the random sample

  2. +
  3. Central Limit Theorem: If you draw a large random sample with replacement, then, regardless of the population distribution, the probability distribution of the sample mean

    +
      +
    1. is roughly normal

    2. +
    3. is centered at the population mean

    4. +
    5. has an \(SD = \frac{\text{population SD}}{\sqrt{\text{sample size}}}\)

    6. +
  4. +
+
+
+
+

In Data 100, we want to understand the broader relationship between the following:

+
    +
  • Population parameter: a number that describes something about the population
  • +
  • Sample statistic: an estimate of the number computed on a sample
  • +
+
+

17.1 Random Variables and Distributions

+

Suppose we generate a set of random data, like a random sample from some population. A random variable is a function from the outcome of a random event to a number.

+

It is random since our sample was drawn at random; it is variable because its exact value depends on how this random sample came out. As such, the domain or input of our random variable is all possible outcomes for some random event in a sample space, and its range or output is the real number line. We typically denote random variables with uppercase letters, such as \(X\) or \(Y\). In contrast, note that regular variables tend to be denoted using lowercase letters. Sometimes we also use uppercase letters to refer to matrices (such as your design matrix \(\mathbb{X}\)), but we will do our best to be clear with the notation.

+

To motivate what this (rather abstract) definition means, let’s consider the following examples:

+
+

17.1.1 Example: Tossing a Coin

+

Let’s formally define a fair coin toss. A fair coin can land on heads (\(H\)) or tails (\(T\)), each with a probability of 0.5. With these possible outcomes, we can define a random variable \(X\) as: \[X = \begin{cases} + 1, \text{if the coin lands heads} \\ + 0, \text{if the coin lands tails} + \end{cases}\]

+

\(X\) is a function with a domain, or input, of \(\{H, T\}\) and a range, or output, of \(\{1, 0\}\). In practice, while we don’t use the following function notation, you could write the above as \[X = \begin{cases} X(H) = 1 \\ X(T) = 0 \end{cases}\]
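As a minimal sketch, we can simulate this random variable directly: draw coin outcomes at random, then apply the function \(X\) to each outcome. (The seed and sample size here are arbitrary.)

import numpy as np

rng = np.random.default_rng(42)
outcomes = rng.choice(["H", "T"], size=10, p=[0.5, 0.5])  # random coin flips
X = np.where(outcomes == "H", 1, 0)                       # apply X to each outcome
print(outcomes)
print(X)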

+
+
+

17.1.2 Example: Sampling Data 100 Students

+

Suppose we draw a random sample \(s\) of size 3 from all students enrolled in Data 100.

+

We can define \(Y\) as the number of data science students in our sample. Its domain is all possible samples of size 3, and its range is \(\{0, 1, 2, 3\}\).

+

+rv +

+

Note that we can use random variables in mathematical expressions to create new random variables.

+

For example, let’s say we sample 3 students at random from lecture and look at their midterm scores. Let \(X_1\), \(X_2\), and \(X_3\) represent each student’s midterm grade.

+

We can use these random variables to create a new random variable, \(Y\), which represents the average of the 3 scores: \(Y = (X_1 + X_2 + X_3)/3\).

+

As we’re creating this random variable, a few questions arise:

+
    +
  • What can we say about the distribution of \(Y\)?
  • +
  • How does it depend on the distribution of \(X_1\), \(X_2\), and \(X_3\)?
  • +
+

But, what exactly is a distribution? Let’s dive into this!

+
+
+

17.1.3 Distributions

+

To define any random variable \(X\), we need to be able to specify 2 things:

+
    +
  1. Possible values: the set of values the random variable can take on.
  2. +
  3. Probabilities: the set of probabilities describing how the total probability of 100% is split over the possible values.
  4. +
+

If \(X\) is discrete (has a finite number of possible values), the probability that a random variable \(X\) takes on the value \(x\) is given by \(P(X=x)\), and the probabilities must sum to 1: \(\sum_{\text{all } x} P(X=x) = 1\).

+

We can often display this using a probability distribution table. In the coin toss example, the probability distribution table of \(X\) is given by:

+ + + + + + + + + + + + + + + + + +
\(x\)\(P(X=x)\)
0\(\frac{1}{2}\)
1\(\frac{1}{2}\)
+

The distribution of a random variable \(X\) describes how the total probability of 100% is split across all the possible values of \(X\), and it fully defines a random variable. If you know the distribution of a random variable you can:

+
    +
  • compute properties of the random variables and derived variables
  • +
  • simulate the random variables by randomly picking values of \(X\) according to its distribution using np.random.choice, df.sample, or scipy.stats.<dist>.rvs(...)
  • +
+

The distribution of a discrete random variable can also be represented using a histogram. If a variable is continuous, meaning it can take on infinitely many values, we can illustrate its distribution using a density curve.

+

+discrete_continuous +

+

We often don’t know the (true) distribution and instead compute an empirical distribution. If you flip a coin 3 times and get {H, H, T}, you may ask: what is the probability that the coin will land heads? We can come up with an empirical estimate of \(\frac{2}{3}\), though the true probability might be \(\frac{1}{2}\).

+

Probabilities are areas. For discrete random variables, the area of the red bars represents the probability that a discrete random variable \(X\) falls within those values. For continuous random variables, the area under the curve represents the probability that a continuous random variable \(Y\) falls within those values.

+

+discrete_continuous +

+

If we sum up the total area of the bars/under the density curve, we should get 100%, or 1.

+

We can show the distribution of \(Y\) in the following tables. The table on the left lists all possible samples \(s\) and the value \(Y(s)\) that each one produces. We can use this to calculate the values for the table on the right, a probability distribution table.

+

+distribution +

+

Rather than fully write out a probability distribution or show a histogram, there are some common distributions that come up frequently when doing data science. These distributions are specified by some parameters, which are constants that specify the shape of the distribution. In terms of notation, the ‘~’ means “has the probability distribution of”.

+

These common distributions are listed below:

+
    +
  1. Bernoulli(\(p\)): If \(X\) ~ Bernoulli(\(p\)), then \(X\) takes on a value 1 with probability \(p\), and 0 with probability \(1 - p\). Bernoulli random variables are also termed the “indicator” random variables.
  2. +
  3. Binomial(\(n\), \(p\)): If \(X\) ~ Binomial(\(n\), \(p\)), then \(X\) counts the number of 1s in \(n\) independent Bernoulli(\(p\)) trials.
  4. +
  5. Categorical(\(p_1, ..., p_k\)): \(X\) takes on one of \(k\) possible values, where value \(i\) has probability \(p_i\) and the probabilities sum to 1. The uniform distribution on a finite set of values is the special case where each value has probability 1 / (number of possible values).
  6. +
  7. Uniform on the unit interval (0, 1): The density is flat at 1 on (0, 1) and 0 elsewhere. We won’t get into what density means as much here, but intuitively, this is saying that there’s an equally likely chance of getting any value on the interval (0, 1).
  8. +
  9. Normal(\(\mu\), \(\sigma^2\)): The probability density is specified by \(\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}\). This bell-shaped distribution comes up fairly often in data, in part due to the Central Limit Theorem you saw back in Data 8.
  10. +
+
+
+
+

17.2 Expectation and Variance

+

There are several ways to describe a random variable. The methods shown above (a table of all samples \(s, X(s)\), the distribution table \(P(X=x)\), and histograms) are all definitions that fully describe a random variable. Often, it is easier to describe a random variable using some numerical summary rather than fully defining its distribution. These numerical summaries are numbers that characterize some properties of the random variable. Because they give a “summary” of how the variable tends to behave, they are not random. Instead, think of them as static numbers that describe a certain property of the random variable. In Data 100, we will focus our attention on the expectation and variance of a random variable.

+
+

17.2.1 Expectation

+

The expectation of a random variable \(X\) is the weighted average of the values of \(X\), where the weights are the probabilities of each value occurring. There are two equivalent ways to compute the expectation:

+
    +
  1. Apply the weights one sample at a time: \[\mathbb{E}[X] = \sum_{\text{all possible } s} X(s) P(s)\].
  2. +
  3. Apply the weights one possible value at a time: \[\mathbb{E}[X] = \sum_{\text{all possible } x} x P(X=x)\]
  4. +
+

The latter is more commonly used as we are usually just given the distribution, not all possible samples.

+

We want to emphasize that the expectation is a number, not a random variable. Expectation is a generalization of the average, and it has the same units as the random variable. It is also the center of gravity of the probability distribution histogram, meaning if we simulate the variable many times, it is the long-run average of the simulated values.

+
+

17.2.1.1 Example 1: Coin Toss

+

Going back to our coin toss example, we define a random variable \(X\) as: \[X = \begin{cases} + 1, \text{if the coin lands heads} \\ + 0, \text{if the coin lands tails} + \end{cases}\]

+

We can calculate its expectation \(\mathbb{E}[X]\) using the second method of applying the weights one possible value at a time: \[\begin{align} +\mathbb{E}[X] &= \sum_{x} x P(X=x) \\ +&= 1 * 0.5 + 0 * 0.5 \\ +&= 0.5 +\end{align}\]

+

Note that \(\mathbb{E}[X] = 0.5\) is not a possible value of \(X\); it’s an average. The expectation of X does not need to be a possible value of X.

+
+
+

17.2.1.2 Example 2

+

Consider the random variable \(X\):

+ + + + + + + + + + + + + + + + + + + + + + + + + +
\(x\)\(P(X=x)\)
30.1
40.2
60.4
80.3
+

To calculate its expectation: \[\begin{align}
\mathbb{E}[X] &= \sum_{x} x P(X=x) \\
&= 3 * 0.1 + 4 * 0.2 + 6 * 0.4 + 8 * 0.3 \\
&= 0.3 + 0.8 + 2.4 + 2.4 \\
&= 5.9
\end{align}\]

+

Again, note that \(\mathbb{E}[X] = 5.9\) is not a possible value of \(X\); it’s an average. The expectation of X does not need to be a possible value of X.
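A short sketch of the same calculation in code, using the values and probabilities from the table above: the expectation is just the dot product of the values with their probabilities.

import numpy as np

x = np.array([3, 4, 6, 8])           # possible values of X
p = np.array([0.1, 0.2, 0.4, 0.3])   # P(X = x)
print(np.dot(x, p))                  # E[X] = 5.9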

+
+
+
+

17.2.2 Variance

+

The variance of a random variable is a measure of its chance error. It is defined as the expected squared deviation from the expectation of \(X\). Put more simply, variance asks: how far does \(X\) typically vary from its average value, just by chance? What is the spread of \(X\)’s distribution?

+

\[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]\]

+

The units of variance are the square of the units of \(X\). To get it back to the right scale, use the standard deviation of \(X\): \[\text{SD}(X) = \sqrt{\text{Var}(X)}\]

+

Like with expectation, variance and standard deviation are numbers, not random variables! Variance helps us describe the variability of a random variable. It is the expected squared error between the random variable and its expected value. As you will see shortly, we can use variance to help us quantify the chance error that arises when using a sample \(X\) to estimate the population mean.

+

By Chebyshev’s inequality, which you saw in Data 8, no matter what the shape of the distribution of \(X\) is, the vast majority of the probability lies in the interval “expectation plus or minus a few SDs.”

+

If we expand the square and use properties of expectation, we can re-express variance as the computational formula for variance.

+

\[\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\]

+

This form is often more convenient to use when computing the variance of a variable by hand, and it is also useful in Mean Squared Error calculations: \(\mathbb{E}[X^2] = \text{Var}(X)\) if \(X\) is centered, that is, if \(\mathbb{E}[X]=0\).

+
+ +
+
+

\[\begin{align} + \text{Var}(X) &= \mathbb{E}[(X-\mathbb{E}[X])^2] \\ + &= \mathbb{E}(X^2 - 2X\mathbb{E}(X) + (\mathbb{E}(X))^2) \\ + &= \mathbb{E}(X^2) - 2 \mathbb{E}(X)\mathbb{E}(X) +( \mathbb{E}(X))^2\\ + &= \mathbb{E}[X^2] - (\mathbb{E}[X])^2 +\end{align}\]

+
+
+
+

How do we compute \(\mathbb{E}[X^2]\)? Any function of a random variable is also a random variable. That means that by squaring \(X\), we’ve created a new random variable. To compute \(\mathbb{E}[X^2]\), we can simply apply our definition of expectation to the random variable \(X^2\).

+

\[\mathbb{E}[X^2] = \sum_{x} x^2 P(X = x)\]

+
+
+

17.2.3 Example: Die

+

Let \(X\) be the outcome of a single fair die roll. \(X\) is a random variable with distribution \[P(X = x) = \begin{cases}
    \frac{1}{6}, & \text{if } x \in \{1,2,3,4,5,6\} \\
    0, & \text{otherwise}
  \end{cases}\]

+
+ +
+
+

\[ \begin{align} + \mathbb{E}[X] &= 1\big(\frac{1}{6}\big) + 2\big(\frac{1}{6}\big) + 3\big(\frac{1}{6}\big) + 4\big(\frac{1}{6}\big) + 5\big(\frac{1}{6}\big) + 6\big(\frac{1}{6}\big) \\ + &= \big(\frac{1}{6}\big)( 1 + 2 + 3 + 4 + 5 + 6) \\ + &= \frac{7}{2} + \end{align}\]

+
+
+
+
+ +
+
+

Using Approach 1 (definition): \[\begin{align} + \text{Var}(X) &= \big(\frac{1}{6}\big)((1 - \frac{7}{2})^2 + (2 - \frac{7}{2})^2 + (3 - \frac{7}{2})^2 + (4 - \frac{7}{2})^2 + (5 - \frac{7}{2})^2 + (6 - \frac{7}{2})^2) \\ + &= \frac{35}{12} + \end{align}\]

+

Using Approach 2 (property): \[\mathbb{E}[X^2] = \sum_{x} x^2 P(X = x) = \frac{91}{6}\] \[\text{Var}(X) = \frac{91}{6} - (\frac{7}{2})^2 = \frac{35}{12}\]

+
+
+
+
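A short sketch verifying both approaches numerically for the die example:

import numpy as np

faces = np.arange(1, 7)                  # possible outcomes of a fair die
p = np.full(6, 1 / 6)                    # P(X = x) for each face

e_x = np.dot(faces, p)                   # E[X] = 7/2
e_x2 = np.dot(faces**2, p)               # E[X^2] = 91/6
var_def = np.dot((faces - e_x) ** 2, p)  # Approach 1: E[(X - E[X])^2]
var_comp = e_x2 - e_x**2                 # Approach 2: E[X^2] - (E[X])^2
print(e_x, var_def, var_comp)            # 3.5, 35/12, 35/12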

We can summarize our discussion so far in the following diagram:

+

+distribution +

+
+
+
+

17.3 Sums of Random Variables

+

Often, we will work with multiple random variables at the same time. A function of a random variable is also a random variable. If you create multiple random variables based on your sample, then functions of those random variables are also random variables.

+

For example, if \(X_1, X_2, ..., X_n\) are random variables, then so are all of these:

+
    +
  • \(X_n^2\)
  • +
  • \(\#\{i : X_i > 10\}\)
  • +
  • \(\text{max}(X_1, X_2, ..., X_n)\)
  • +
  • \(\frac{1}{n} \sum_{i=1}^n (X_i - c)^2\)
  • +
  • \(\frac{1}{n} \sum_{i=1}^n X_i\)
  • +
+

Many functions of random variables that we are interested in (e.g., counts, means) involve sums of random variables, so let’s dive deeper into the properties of sums of random variables.

+
+

17.3.1 Properties of Expectation

+

Instead of simulating full distributions, we often just compute expectation and variance directly. Recall the definition of expectation: \[\mathbb{E}[X] = \sum_{x} x P(X=x)\]

+

From it, we can derive some useful properties:

+
    +
  1. Linearity of expectation. The expectation of the linear transformation \(aX+b\), where \(a\) and \(b\) are constants, is:
  2. +
+

\[\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\]

+
+ +
+
+

\[\begin{align}
    \mathbb{E}[aX+b] &= \sum_{x} (ax + b) P(X=x) \\
    &= \sum_{x} (ax P(X=x) + bP(X=x)) \\
    &= a\sum_{x}xP(X=x) + b\sum_{x}P(X=x)\\
    &= a\mathbb{E}[X] + b * 1
  \end{align}\]

+
+
+
+
    +
  1. Expectation is also linear in sums of random variables.
  2. +
+

\[\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]\]

+
+ +
+
+

\[\begin{align} + \mathbb{E}[X+Y] &= \sum_{s} (X+Y)(s) P(s) \\ + &= \sum_{s} (X(s)P(s) + Y(s)P(s)) \\ + &= \sum_{s} X(s)P(s) + \sum_{s} Y(s)P(s)\\ + &= \mathbb{E}[X] + \mathbb{E}[Y] +\end{align}\]

+
+
+
+
    +
  1. If \(g\) is a non-linear function, then in general, \[\mathbb{E}[g(X)] \neq g(\mathbb{E}[X])\] For example, if \(X\) is -1 or 1 with equal probability, then \(\mathbb{E}[X] = 0\), but \(\mathbb{E}[X^2] = 1 \neq 0\).
  2. +
+
+
+

17.3.2 Properties of Variance

+

Let’s now get into the properties of variance. Recall the definition of variance: \[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]\]

+

Combining it with the properties of expectation, we can derive some useful properties:

+
    +
  1. Unlike expectation, variance is non-linear. The variance of the linear transformation \(aX+b\) is: \[\text{Var}(aX+b) = a^2 \text{Var}(X)\]
  2. +
+
    +
  • Subsequently, \[\text{SD}(aX+b) = |a| \text{SD}(X)\]
  • +
  • The full proof of this fact can be found using the definition of variance. As general intuition, consider that \(aX+b\) scales the variable \(X\) by a factor of \(a\), then shifts the distribution of \(X\) by \(b\) units.
  • +
+
+ +
+
+

We know that \[\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\]

+

In order to compute \(\text{Var}(aX+b)\), consider that a shift by \(b\) units does not affect spread, so \(\text{Var}(aX+b) = \text{Var}(aX)\).

+

Then, \[\begin{align} + \text{Var}(aX+b) &= \text{Var}(aX) \\ + &= E((aX)^2) - (E(aX))^2 \\ + &= E(a^2 X^2) - (aE(X))^2\\ + &= a^2 (E(X^2) - (E(X))^2) \\ + &= a^2 \text{Var}(X) +\end{align}\]

+
+
+
+
    +
  • Shifting the distribution by \(b\) does not impact the spread of the distribution. Thus, \(\text{Var}(aX+b) = \text{Var}(aX)\).
  • +
  • Scaling the distribution by \(a\) does impact the spread of the distribution.
  • +
+

+transformation +

+
    +
  1. Variance of sums of random variables is affected by the (in)dependence of the random variables. \[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2\text{cov}(X,Y)\] \[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) \qquad \text{if } X, Y \text{ independent}\]
  2. +
+
+ +
+
+

The variance of a sum is affected by the dependence between the two random variables that are being added. Let’s expand the definition of \(\text{Var}(X + Y)\) to see what’s going on.

+

To simplify the math, let \(\mu_x = \mathbb{E}[X]\) and \(\mu_y = \mathbb{E}[Y]\).

+

\[ \begin{align}
\text{Var}(X + Y) &= \mathbb{E}[(X+Y- \mathbb{E}(X+Y))^2] \\
&= \mathbb{E}[((X - \mu_x) + (Y - \mu_y))^2] \\
&= \mathbb{E}[(X - \mu_x)^2 + 2(X - \mu_x)(Y - \mu_y) + (Y - \mu_y)^2] \\
&= \mathbb{E}[(X - \mu_x)^2] + \mathbb{E}[(Y - \mu_y)^2] + 2\mathbb{E}[(X - \mu_x)(Y - \mu_y)] \\
&= \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X, Y)
\end{align}\]

+
+
+
+
+
+

17.3.3 Covariance and Correlation

+

We define the covariance of two random variables as the expected product of their deviations from their expectations. Put more simply, covariance generalizes variance to a pair of random variables; in particular, the covariance of a random variable with itself is its variance:

+

\[\text{Cov}(X, X) = \mathbb{E}[(X - \mathbb{E}[X])^2] = \text{Var}(X)\]

+

\[\text{Cov}(X, Y) = \mathbb{E}[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])]\]

+

We can treat the covariance as a measure of association. Remember the definition of correlation given when we first established SLR?

+

\[r(X, Y) = \mathbb{E}\left[\left(\frac{X-\mathbb{E}[X]}{\text{SD}(X)}\right)\left(\frac{Y-\mathbb{E}[Y]}{\text{SD}(Y)}\right)\right] = \frac{\text{Cov}(X, Y)}{\text{SD}(X)\text{SD}(Y)}\]

+

It turns out we’ve been quietly using covariance for some time now! If \(X\) and \(Y\) are independent, then \(\text{Cov}(X, Y) =0\) and \(r(X, Y) = 0\). Note, however, that the converse is not always true: \(X\) and \(Y\) could have \(\text{Cov}(X, Y) = r(X, Y) = 0\) but not be independent.
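As a minimal sketch on simulated data (the relationship between x and y below is made up for illustration), empirical covariance and correlation can be computed with NumPy and related exactly as in the formula above:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
y = 0.7 * x + rng.normal(scale=0.5, size=10_000)       # hypothetical dependent variable

cov_xy = np.cov(x, y, ddof=0)[0, 1]                    # empirical covariance
r_xy = np.corrcoef(x, y)[0, 1]                         # empirical correlation
print(cov_xy, r_xy)
print(np.isclose(r_xy, cov_xy / (x.std() * y.std())))  # r = Cov / (SD * SD)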

+
+
+

17.3.4 Equal vs. Identically Distributed vs. i.i.d

+

Suppose that we have two random variables \(X\) and \(Y\):

+
    +
  • \(X\) and \(Y\) are equal if \(X(s) = Y(s)\) for every sample \(s\). Regardless of the exact sample drawn, \(X\) is always equal to \(Y\).
  • +
  • \(X\) and \(Y\) are identically distributed if the distribution of \(X\) is equal to the distribution of \(Y\). We say “\(X\) and \(Y\) are equal in distribution.” That is, \(X\) and \(Y\) take on the same set of possible values, and each of these possible values is taken with the same probability. On any specific sample \(s\), identically distributed variables do not necessarily share the same value. If \(X = Y\), then \(X\) and \(Y\) are identically distributed; however, the converse is not true (ex: \(Y = 7 - X\), \(X\) is a die)
  • +
  • \(X\) and \(Y\) are independent and identically distributed (i.i.d) if +
      +
    1. The variables are identically distributed.
    2. +
    3. Knowing the outcome of one variable does not influence our belief of the outcome of the other.
    4. +
  • +
+

Note that in Data 100, you’ll never be expected to prove that random variables are i.i.d.

+

Now let’s walk through an example. Let \(X_1\) and \(X_2\) be the numbers on rolls of two fair dice. \(X_1\) and \(X_2\) are i.i.d., so \(X_1\) and \(X_2\) have the same distribution. However, the sums \(Y = X_1 + X_1 = 2X_1\) and \(Z = X_1 + X_2\) have different distributions but the same expectation.

+

+distribution +

+

However, \(Y = 2X_1\) has a larger variance.

+

+distribution +

+
+
+

17.3.5 Example: Bernoulli Random Variable

+

To get some practice with the formulas discussed so far, let’s derive the expectation and variance for a Bernoulli(\(p\)) random variable. If \(X\) ~ Bernoulli(\(p\)),

+

\(\mathbb{E}[X] = 1 \cdot p + 0 \cdot (1 - p) = p\)

+

To compute the variance, we will use the computational formula. We first find that: \(\mathbb{E}[X^2] = 1^2 \cdot p + 0^2 \cdot (1 - p) = p\)

+

From there, let’s calculate our variance: \(\text{Var}(X) = \mathbb{E}[X^2] - \mathbb{E}[X]^2 = p - p^2 = p(1-p)\)

+
+
+

17.3.6 Example: Binomial Random Variable

+

Let \(Y\) ~ Binomial(\(n\), \(p\)). We can think of \(Y\) as being the sum of \(n\) i.i.d. Bernoulli(\(p\)) random variables. Mathematically, this translates to

+

\[Y = \sum_{i=1}^n X_i\]

+

where \(X_i\) is the indicator of a success on trial \(i\).

+

Using linearity of expectation,

+

\[\mathbb{E}[Y] = \sum_{i=1}^n \mathbb{E}[X_i] = np\]

+

For the variance, since each \(X_i\) is independent of the others, \(\text{Cov}(X_i, X_j) = 0\) for all \(i \neq j\), so

+

\[\text{Var}(Y) = \sum_{i=1}^n \text{Var}[X_i] = np(1-p)\]
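A minimal simulation sketch of this result: build a Binomial(\(n\), \(p\)) variable as a sum of Bernoulli indicators and compare the empirical mean and variance to \(np\) and \(np(1-p)\). (The parameters and number of trials are arbitrary.)

import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 20, 0.3, 100_000

# Each row holds n i.i.d. Bernoulli(p) indicators; the row sum is Binomial(n, p).
bernoullis = rng.random((trials, n)) < p
Y = bernoullis.sum(axis=1)

print(Y.mean(), n * p)              # empirical vs. theoretical expectation
print(Y.var(), n * p * (1 - p))     # empirical vs. theoretical variance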

+
+
+

17.3.7 Summary

+
    +
  • Let \(X\) be a random variable with distribution \(P(X=x)\). +
      +
    • \(\mathbb{E}[X] = \sum_{x} x P(X=x)\)
    • +
    • \(\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\)
    • +
  • +
  • Let \(a\) and \(b\) be scalar values. +
      +
    • \(\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\)
    • +
    • \(\text{Var}(aX+b) = a^2 \text{Var}(X)\)
    • +
  • +
  • Let \(Y\) be another random variable. +
      +
    • \(\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]\)
    • +
    • \(\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X,Y)\)
    • +
  • +
+

Note that \(\text{Cov}(X,Y)\) would equal 0 if \(X\) and \(Y\) are independent.

+ + + + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/probability_2/images/CLTdiff.png b/docs/probability_2/images/CLTdiff.png new file mode 100644 index 000000000..0ce9a27cc Binary files /dev/null and b/docs/probability_2/images/CLTdiff.png differ diff --git a/docs/probability_2/images/bias_v_variance.png b/docs/probability_2/images/bias_v_variance.png new file mode 100644 index 000000000..598e833f3 Binary files /dev/null and b/docs/probability_2/images/bias_v_variance.png differ diff --git a/docs/probability_2/images/breakdown.png b/docs/probability_2/images/breakdown.png new file mode 100644 index 000000000..433b6796b Binary files /dev/null and b/docs/probability_2/images/breakdown.png differ diff --git a/docs/probability_2/images/bvt.png b/docs/probability_2/images/bvt.png new file mode 100644 index 000000000..0af708197 Binary files /dev/null and b/docs/probability_2/images/bvt.png differ diff --git a/docs/probability_2/images/bvt_old.png b/docs/probability_2/images/bvt_old.png new file mode 100644 index 000000000..9cf5c999c Binary files /dev/null and b/docs/probability_2/images/bvt_old.png differ diff --git a/docs/probability_2/images/clt.png b/docs/probability_2/images/clt.png new file mode 100644 index 000000000..0a93294b5 Binary files /dev/null and b/docs/probability_2/images/clt.png differ diff --git a/docs/probability_2/images/data.png b/docs/probability_2/images/data.png new file mode 100644 index 000000000..77808547d Binary files /dev/null and b/docs/probability_2/images/data.png differ diff --git a/docs/probability_2/images/decomposition.png b/docs/probability_2/images/decomposition.png new file mode 100644 index 000000000..21ad6054f Binary files /dev/null and b/docs/probability_2/images/decomposition.png differ diff --git a/docs/probability_2/images/error.png b/docs/probability_2/images/error.png new file mode 100644 index 000000000..7441a3179 Binary files /dev/null and b/docs/probability_2/images/error.png differ diff --git a/docs/probability_2/images/errors.png b/docs/probability_2/images/errors.png new file mode 100644 index 000000000..0929d47d9 Binary files /dev/null and b/docs/probability_2/images/errors.png differ diff --git a/docs/probability_2/images/y_hat.png b/docs/probability_2/images/y_hat.png new file mode 100644 index 000000000..fe953ddc7 Binary files /dev/null and b/docs/probability_2/images/y_hat.png differ diff --git a/docs/probability_2/images/y_hat2.png b/docs/probability_2/images/y_hat2.png new file mode 100644 index 000000000..3b9b8e263 Binary files /dev/null and b/docs/probability_2/images/y_hat2.png differ diff --git a/docs/probability_2/probability_2.html b/docs/probability_2/probability_2.html new file mode 100644 index 000000000..c7a284bd9 --- /dev/null +++ b/docs/probability_2/probability_2.html @@ -0,0 +1,1892 @@ + + + + + + + + + +18  Estimators, Bias, and Variance – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

18  Estimators, Bias, and Variance

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Explore commonly seen random variables like Bernoulli and Binomial distributions
  • +
  • Apply the Central Limit Theorem to approximate parameters of a population
  • +
  • Use sampled data to model an estimation of and infer the true underlying distribution
  • +
  • Estimate the true population distribution from a sample using the bootstrapping technique
  • +
+
+
+
+

Last time, we introduced the idea of random variables: numerical functions of a sample. Most of our work in the last lecture was done to build a background in probability and statistics. Now that we’ve established some key ideas, we’re in a good place to apply what we’ve learned to our original goal – understanding how the randomness of a sample impacts the model design process.

+

In this lecture, we will delve more deeply into the idea of fitting a model to a sample. We’ll explore how to re-express our modeling process in terms of random variables and use this new understanding to steer model complexity.

+
+

18.1 Common Random Variables

+

There are several cases of random variables that appear often and have useful properties. Below are the ones we will explore further in this course. The numbers in parentheses are the parameters of a random variable, which are constants. Parameters define a random variable’s shape (i.e., distribution) and its values. For this lecture, we’ll focus more heavily on the bolded random variables and their special properties, but you should familiarize yourself with all the ones listed below:

+
    +
  • Bernoulli(\(p\)) +
      +
    • Takes on value 1 with probability \(p\), and 0 with probability \((1 - p)\).
    • +
    • AKA the “indicator” random variable.
    • +
    • Let \(X\) be a Bernoulli(\(p\)) random variable. +
        +
      • \(\mathbb{E}[X] = 1 * p + 0 * (1-p) = p\) +
          +
        • \(\mathbb{E}[X^2] = 1^2 * p + 0 * (1-p) = p\)
        • +
      • +
      • \(\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = p - p^2 = p(1-p)\)
      • +
    • +
  • +
  • Binomial(\(n\), \(p\)) +
      +
    • Number of 1s in \(n\) independent Bernoulli(\(p\)) trials.
    • +
    • Let \(Y\) be a Binomial(\(n\), \(p\)) random variable. +
        +
      • The distribution of \(Y\) is given by the binomial formula, and we can write \(Y = \sum_{i=1}^n X_i\) where: +
          +
        • \(X_i\) is the indicator of success on trial \(i\): \(X_i = 1\) if trial \(i\) is a success, else 0.
        • +
        • All \(X_i\) are i.i.d. and Bernoulli(\(p\)).
        • +
      • +
      • \(\mathbb{E}[Y] = \sum_{i=1}^n \mathbb{E}[X_i] = np\)
      • +
      • \(\text{Var}(Y) = \sum_{i=1}^n \text{Var}(X_i) = np(1-p)\) +
          +
        • The \(X_i\)’s are independent, so \(\text{Cov}(X_i, X_j) = 0\) for all \(i \neq j\).
        • +
      • +
    • +
  • +
  • Uniform on a finite set of values +
      +
    • The probability of each value is \(\frac{1}{\text{(number of possible values)}}\).
    • +
    • For example, a standard/fair die.
    • +
  • +
  • Uniform on the unit interval (0, 1) +
      +
    • Density is flat at 1 on (0, 1) and 0 elsewhere.
    • +
  • +
  • Normal(\(\mu, \sigma^2\)), a.k.a Gaussian +
      +
    • \(f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{\!2}\,\right)\)
    • +
  • +
+
+

18.1.1 Example

+

Suppose you win cash based on the number of heads you get in a series of 20 coin flips. Let \(X_i = 1\) if the \(i\)-th coin is heads, \(0\) otherwise. Which payout strategy would you choose?

+

A. \(Y_A = 10 * X_1 + 10 * X_2\)

+

B. \(Y_B = \sum_{i=1}^{20} X_i\)

+

C. \(Y_C = 20 * X_1\)

+
+ +
+
+

Let \(X_1, X_2, ... X_{20}\) be 20 i.i.d Bernoulli(0.5) random variables. Since the \(X_i\)’s are independent, \(\text{Cov}(X_i, X_j) = 0\) for all pairs \(i \neq j\). Additionally, since each \(X_i\) is Bernoulli(0.5), we know that \(\mathbb{E}[X_i] = p = 0.5\) and \(\text{Var}(X_i) = p(1-p) = 0.25\). We can calculate the following for each scenario:

+ ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
A. \(Y_A = 10 * X_1 + 10 * X_2\)B. \(Y_B = \sum_{i=1}^{20} X_i\)C. \(Y_C = 20 * X_1\)
Expectation\(\mathbb{E}[Y_A] = 10 (0.5) + 10(0.5) = 10\)\(\mathbb{E}[Y_B] = 0.5 + ... + 0.5 = 10\)\(\mathbb{E}[Y_C] = 20(0.5) = 10\)
Variance\(\text{Var}(Y_A) = 10^2 (0.25) + 10^2 (0.25) = 50\)\(\text{Var}(Y_B) = 0.25 + ... + 0.25 = 5\)\(\text{Var}(Y_C) = 20^2 (0.25) = 100\)
Standard Deviation\(\text{SD}(Y_A) \approx 7.07\)\(\text{SD}(Y_B) \approx 2.24\)\(\text{SD}(Y_C) = 10\)
+

As we can see, all the scenarios have the same expected value but different variances. The higher the variance, the greater the risk and uncertainty, so the “right” strategy depends on your personal preference. Would you choose the “safest” option B, the most “risky” option C, or somewhere in the middle (option A)?
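To build intuition, here is a minimal simulation sketch (not part of the original lecture; the seed, trial count, and variable names are our own) that estimates the expectation and standard deviation of each payout strategy with numpy:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

# Each row is one game of 20 fair coin flips (i.i.d. Bernoulli(0.5) random variables).
flips = rng.binomial(n=1, p=0.5, size=(trials, 20))

Y_A = 10 * flips[:, 0] + 10 * flips[:, 1]   # Strategy A
Y_B = flips.sum(axis=1)                     # Strategy B
Y_C = 20 * flips[:, 0]                      # Strategy C

for name, Y in [("A", Y_A), ("B", Y_B), ("C", Y_C)]:
    # The simulated means should all land near 10; the simulated SDs should be
    # near 7.07, 2.24, and 10, matching the table above.
    print(f"{name}: mean ≈ {Y.mean():.2f}, SD ≈ {Y.std():.2f}")
```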

+
+
+
+
+
+
+

18.2 Sample Statistics

+

Today, we’ve talked extensively about populations; if we know the distribution of a random variable, we can reliably compute expectation, variance, functions of the random variable, etc. Note that:

+
    +
  • The distribution of a population describes how a random variable behaves across all individuals of interest.
  • +
  • The distribution of a sample describes how a random variable behaves in a specific sample from the population.
  • +
+

In Data Science, however, we often do not have access to the whole population, so we don’t know its distribution. As such, we need to collect a sample and use its distribution to estimate or infer properties of the population. In cases like these, we can take several samples of size \(n\) from the population (an easy way to do this is using df.sample(n, replace=True)), and compute the mean of each sample. When sampling, we make the (big) assumption that we sample uniformly at random with replacement from the population; each observation in our sample is a random variable drawn i.i.d from our population distribution. Remember that our sample mean is a random variable since it depends on our randomly drawn sample! On the other hand, our population mean is simply a number (a fixed value).
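As a small illustrative sketch (our own toy population, not data from the course), the code below shows that repeated calls to df.sample produce different sample means, while the population mean stays a fixed number:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# A hypothetical population of 100,000 heights (cm); in practice we rarely observe this.
population = pd.DataFrame({"height": rng.normal(loc=170, scale=8, size=100_000)})

print("Population mean (a fixed number):", population["height"].mean())

# Each draw is a new random sample, so the sample mean changes from draw to draw.
for _ in range(3):
    sample = population.sample(50, replace=True)
    print("Sample mean (a random variable):", sample["height"].mean())
```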

+
+

18.2.1 Sample Mean

+

Consider an i.i.d. sample \(X_1, X_2, ..., X_n\) drawn from a population with mean \(\mu\) and SD \(\sigma\). We define the sample mean as \[\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i\]

+

The expectation of the sample mean is given by: \[\begin{align} + \mathbb{E}[\bar{X}_n] &= \frac{1}{n} \sum_{i=1}^n \mathbb{E}[X_i] \\ + &= \frac{1}{n} (n \mu) \\ + &= \mu +\end{align}\]

+

The variance is given by: \[\begin{align} + \text{Var}(\bar{X}_n) &= \frac{1}{n^2} \text{Var}( \sum_{i=1}^n X_i) \\ + &= \frac{1}{n^2} \left( \sum_{i=1}^n \text{Var}(X_i) \right) \\ + &= \frac{1}{n^2} (n \sigma^2) = \frac{\sigma^2}{n} +\end{align}\]

+

\(\bar{X}_n\) is approximately normally distributed by the Central Limit Theorem (CLT).

+
+
+

18.2.2 Central Limit Theorem

+

In Data 8 and in the previous lecture, you encountered the Central Limit Theorem (CLT). This is a powerful theorem for estimating the distribution of a population with mean \(\mu\) and standard deviation \(\sigma\) from a collection of smaller samples. The CLT tells us that if an i.i.d sample of size \(n\) is large, then the probability distribution of the sample mean is roughly normal with mean \(\mu\) and SD of \(\frac{\sigma}{\sqrt{n}}\). More generally, any theorem that provides the rough distribution of a statistic and doesn’t need the distribution of the population is valuable to data scientists! This is because we rarely know a lot about the population.

+

+

clt

+

+

Importantly, the CLT assumes that each observation in our samples is drawn i.i.d from the distribution of the population. In addition, the CLT is accurate only when \(n\) is “large”, but what counts as a “large” sample size depends on the specific distribution. If a population is highly symmetric and unimodal, we could need as few as \(n=20\); if a population is very skewed, we need a larger \(n\). If in doubt, you can bootstrap the sample mean and see if the bootstrapped distribution is bell-shaped. Classes like Data 140 investigate this idea in great detail.

+

For a more in-depth demo, check out onlinestatbook.
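Alternatively, here is a minimal simulation sketch (our own; the skewed exponential population and all parameter choices are assumptions for illustration) showing that the distribution of sample means is roughly normal with SD close to \(\frac{\sigma}{\sqrt{n}}\):

```python
import numpy as np

rng = np.random.default_rng(100)

# A skewed population: exponential with mean 1 and SD 1.
population = rng.exponential(scale=1.0, size=500_000)
sigma = population.std()

n = 100
sample_means = np.array([
    rng.choice(population, size=n, replace=True).mean()
    for _ in range(5_000)
])

print("Mean of sample means:", sample_means.mean())   # close to the population mean, 1
print("SD of sample means:  ", sample_means.std())    # close to sigma / sqrt(n)
print("CLT prediction:      ", sigma / np.sqrt(n))
# A histogram of sample_means (e.g., plt.hist(sample_means, bins=50)) looks roughly bell-shaped.
```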

+
+
+

18.2.3 Using the Sample Mean to Estimate the Population Mean

+

Now let’s say we want to use the sample mean to estimate the population mean, for example, the average height of Cal undergraduates. We can typically collect a single sample, which has just one average. However, what if we happened, by random chance, to draw a sample with a different mean or spread than that of the population? We might get a skewed view of how the population behaves (consider the extreme case where we happen to sample the exact same value \(n\) times!).

+

+clt +

+

For example, notice the difference in variation between these two distributions of the sample mean, which were computed with different sample sizes. The distribution with the bigger sample size (\(n=800\)) is tighter around the mean than the distribution with the smaller sample size (\(n=200\)). Try plugging these values into the standard deviation equation for the sample mean to make sense of this!

+

Applying the CLT allows us to make sense of all of this and resolve this issue. By drawing many samples, we can consider how the sample distribution varies across multiple subsets of the data. This allows us to approximate the properties of the population without the need to survey every single member.

+

Given this potential variance, it is also important that we consider the average value and spread of all possible sample means, and what this means for how big \(n\) should be. For every sample size, the expected value of the sample mean is the population mean: \[\mathbb{E}[\bar{X}_n] = \mu\] We call the sample mean an unbiased estimator of the population mean and will explore this idea more in the next lecture.

+
+
+
+ +
+
+Data 8 Recap: Square Root Law +
+
+
+
+
+

The square root law (Data 8) states that if you increase the sample size by a factor, the SD of the sample mean decreases by the square root of the factor. \[\text{SD}(\bar{X_n}) = \frac{\sigma}{\sqrt{n}}\] The sample mean is more likely to be close to the population mean if we have a larger sample size.

+
+
+
+
+
+
+

18.3 Prediction and Inference

+

At this point in the course, we’ve spent a great deal of time working with models. When we first introduced the idea of modeling a few weeks ago, we did so in the context of prediction: using models to make accurate predictions about unseen data. Another reason we might build models is to better understand complex phenomena in the world around us. Inference is the task of using a model to infer the true underlying relationships between the feature and response variables. For example, if we are working with a set of housing data, prediction might ask: given the attributes of a house, how much is it worth? Inference might ask: how much does having a local park impact the value of a house?

+

A major goal of inference is to draw conclusions about the full population of data given only a random sample. To do this, we aim to estimate the value of a parameter, which is a numerical function of the population (for example, the population mean \(\mu\)). We use a collected sample to construct a statistic, which is a numerical function of the random sample (for example, the sample mean \(\bar{X}_n\)). It’s helpful to think “p” for “parameter” and “population,” and “s” for “sample” and “statistic.”

+

Since the sample represents a random subset of the population, any statistic we generate will likely deviate from the true population parameter, and it could have been different. We say that the sample statistic is an estimator of the true population parameter. Notationally, the population parameter is typically called \(\theta\), while its estimator is denoted by \(\hat{\theta}\).

+

To address our inference question, we aim to construct estimators that closely estimate the value of the population parameter. We evaluate how “good” an estimator is by answering three questions:

+
    +
  • How close is our answer to the parameter? (Risk / MSE) \[ \text{MSE}(\hat{\theta}) = E[(\hat{\theta} - \theta)^2]\]
  • +
  • Do we get the right answer for the parameter, on average? (Bias) \[\text{Bias}(\hat{\theta}) = E[\hat{\theta} - \theta] = E[\hat{\theta}] - \theta\]
  • +
  • How variable is the answer? (Variance) \[\text{Var}(\hat{\theta}) = E[(\hat{\theta} - E[\hat{\theta}])^2] \]
  • +
+

This relationship can be illustrated with an archery analogy. Imagine that the center of the target is the parameter \(\theta\) and each arrow corresponds to a separate parameter estimate \(\hat{\theta}\).

+

+ +

+

Ideally, we want our estimator to have low bias and low variance, but how can we mathematically quantify that? See the Bias-Variance Tradeoff section later in this note for more detail.
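To make bias, variance, and MSE concrete, here is a minimal simulation sketch (our own example; the normal population, the sample size, and the deliberately poor "sample max" estimator are all assumptions for illustration) that estimates each quantity for two estimators of the population mean:

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 5.0     # true population mean (the parameter we want to estimate)
n = 30          # sample size

def simulate(estimator, reps=10_000):
    # Apply the estimator to many independent samples drawn from the population.
    estimates = np.array([
        estimator(rng.normal(loc=theta, scale=2.0, size=n)) for _ in range(reps)
    ])
    bias = estimates.mean() - theta
    variance = estimates.var()
    mse = np.mean((estimates - theta) ** 2)
    return bias, variance, mse

for name, est in [("sample mean", np.mean), ("sample max", np.max)]:
    bias, var, mse = simulate(est)
    # The sample mean is (approximately) unbiased; the sample max is badly biased.
    # In both cases, MSE is roughly bias**2 + variance.
    print(f"{name:12s} bias={bias:6.3f}  var={var:6.3f}  MSE={mse:6.3f}")
```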

+
+

18.3.1 Prediction as Estimation

+

Now that we’ve established the idea of an estimator, let’s see how we can apply this learning to the modeling process. To do so, we’ll take a moment to formalize our data collection and models in the language of random variables.

+

Say we are working with an input variable, \(x\), and a response variable, \(Y\). We assume that \(Y\) and \(x\) are linked by some relationship \(g\); in other words, \(Y = g(x)\) where \(g\) represents some “universal truth” or “law of nature” that defines the underlying relationship between \(x\) and \(Y\). In the image below, \(g\) is denoted by the red line.

+

As data scientists, however, we have no way of directly “seeing” the underlying relationship \(g\). The best we can do is collect observed data out in the real world to try to understand this relationship. Unfortunately, the data collection process will always have some inherent error (think of the randomness you might encounter when taking measurements in a scientific experiment). We say that each observation comes with some random error or noise term, \(\epsilon\) (read: “epsilon”). This error is assumed to be a random variable with expectation \(\mathbb{E}(\epsilon)=0\), variance \(\text{Var}(\epsilon) = \sigma^2\), and be i.i.d. across each observation. The existence of this random noise means that our observations, \(Y(x)\), are random variables.

+

+data +

+

We can only observe our random sample of data, represented by the blue points above. From this sample, we want to estimate the true relationship \(g\). We do this by constructing the model \(\hat{Y}(x)\) to estimate \(g\).

+

\[\text{True relationship: } g(x)\]

+

\[\text{Observed relationship: }Y = g(x) + \epsilon\]

+

\[\text{Prediction: }\hat{Y}(x)\]

+

+y_hat +

+

When building models, it is also important to note that our choice of features will also significantly impact our estimation. In the plot below, you can see how the different models (green and purple) can lead to different estimates.

+

+y_hat +

+
+

18.3.1.1 Estimating a Linear Relationship

+

If we assume that the true relationship \(g\) is linear, we can express the response as \(Y = f_{\theta}(x)\), where our true relationship is modeled by \[Y = g(x) + \epsilon\] \[ f_{\theta}(x) = Y = \theta_0 + \sum_{j=1}^p \theta_j x_j + \epsilon\]

+
+ +
+
+

In our two equations above, the true relationship \(g(x) = \theta_0 + \sum_{j=1}^p \theta_j x_j\) is not random, but \(\epsilon\) is random. Hence, \(Y = f_{\theta}(x)\) is also random.

+
+
+
+

This true relationship has true, unobservable parameters \(\theta\), and it has random noise \(\epsilon\), so we can never observe the true relationship. Instead, the next best thing we can do is obtain a sample \(\Bbb{X}\), \(\Bbb{Y}\) of \(n\) observed relationships, \((x, Y)\) and use it to train a model and obtain an estimate of \(\hat{\theta}\) \[\hat{Y}(x) = f_{\hat{\theta}}(x) = \hat{\theta_0} + \sum_{j=1}^p \hat{\theta_j} x_j\]

+
+ +
+
+

In our estimating equation above, our sample \(\Bbb{X}\), \(\Bbb{Y}\) is random (it depends on which observations happen to be drawn). Hence, the estimates \(\hat{\theta}\) we calculate from our sample are also random, so our predictor \(\hat{Y}(x)\) is also random.

+
+
+
+

Now taking a look at our original equations, we can see that they both have differing sources of randomness. For our observed relationship, \(Y = g(x) + \epsilon\), \(\epsilon\) represents errors which occur during or after the observation or measurement process. For the estimation model, the data we have is a random sample collected from the population, which was constructed from decisions made before the measurement process.
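To tie this notation to code, here is a minimal sketch (our own toy setup; the particular \(g\), noise level, and sample size are assumptions) of the data-generating process \(Y = g(x) + \epsilon\) and of obtaining \(\hat{\theta}\) from one random sample:

```python
import numpy as np

rng = np.random.default_rng(42)

def g(x):
    # The fixed, unobservable "truth" (our toy choice): theta_0 = 2, theta_1 = 3.
    return 2 + 3 * x

n, sigma = 50, 1.0
x = rng.uniform(0, 10, size=n)
epsilon = rng.normal(0, sigma, size=n)   # random noise: E[eps] = 0, Var(eps) = sigma^2
Y = g(x) + epsilon                       # the observed responses are random variables

# Fit a simple linear model to this one random sample.
theta1_hat, theta0_hat = np.polyfit(x, Y, deg=1)
print("theta_0_hat:", theta0_hat, " theta_1_hat:", theta1_hat)
# Re-running with a new sample gives slightly different estimates: theta_hat is random,
# and so is the prediction Y_hat(x) built from it.
```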

+
+
+
+
+

18.4 Bias-Variance Tradeoff

+

Recall the model and the data we generated from that model in the last section:

+

\[\text{True relationship: } g(x)\]

+

\[\text{Observed relationship: }Y = g(x) + \epsilon\]

+

\[\text{Prediction: }\hat{Y}(x)\]

+

With this reformulated modeling goal, we can now revisit the Bias-Variance Tradeoff from two lectures ago (shown below):

+

+ +

+

In today’s lecture, we’ll explore a more mathematical version of the graph you see above by introducing the terms model risk, observation variance, model bias, and model variance. Eventually, we’ll work our way up to an updated version of the Bias-Variance Tradeoff graph that you see below

+

+ +

+
+

18.4.1 Model Risk

+

Model risk is defined as the mean square prediction error of the random variable \(\hat{Y}\). It is an expectation across all samples we could have possibly gotten when fitting the model, which we can denote as random variables \(X_1, X_2, \ldots, X_n, Y\). Model risk considers the model’s performance on any sample that is theoretically possible, rather than the specific data that we have collected.

+

\[\text{model risk }=E\left[(Y-\hat{Y}(x))^2\right]\]

+

What is the origin of the error encoded by model risk? Note that there are two types of errors:

+
    +
  • Chance errors: happen due to randomness alone +
      +
    • Source 1 (Observation Variance): randomness in new observations \(Y\) due to random noise \(\epsilon\)
    • +
    • Source 2 (Model Variance): randomness in the sample we used to train the models, as samples \(X_1, X_2, \ldots, X_n, Y\) are random
    • +
  • +
  • (Model Bias): non-random error due to our model being different from the true underlying function \(g\)
  • +
+

Recall the data-generating process we established earlier. There is a true underlying relationship \(g\), observed data (with random noise) \(Y\), and model \(\hat{Y}\).

+

+errors +

+

To better understand model risk, we’ll zoom in on a single data point in the plot above.

+

+breakdown +

+

Remember that \(\hat{Y}(x)\) is a random variable – it is the prediction made for \(x\) after being fit on the specific sample used for training. If we had used a different sample for training, a different prediction might have been made for this value of \(x\). To capture this, the diagram above considers both the prediction \(\hat{Y}(x)\) made for a particular random training sample, and the expected prediction across all possible training samples, \(E[\hat{Y}(x)]\).

+

We can use this simplified diagram to break down the prediction error into smaller components. First, start by considering the error on a single prediction, \(Y(x)-\hat{Y}(x)\).

+

+error +

+

We can identify three components of this error.

+

+decomposition +

+

That is, the error can be written as:

+

\[Y(x)-\hat{Y}(x) = \epsilon + \left(g(x)-E\left[\hat{Y}(x)\right]\right) + \left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)\] \[\newline \]

+

The model risk is the expected square of the expression above, \(E\left[(Y(x)-\hat{Y}(x))^2\right]\). If we square both sides and then take the expectation, we will get the following decomposition of model risk:

+

\[E\left[(Y(x)-\hat{Y}(x))^2\right] = E[\epsilon^2] + \left(g(x)-E\left[\hat{Y}(x)\right]\right)^2 + E\left[\left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)^2\right]\]

+

It looks like we are missing some cross-product terms when squaring the right-hand side, but it turns out that all of those cross-product terms are zero. The detailed derivation is out of scope for this class, but a proof is included at the end of this note for your reference.

+

This expression may look complicated at first glance, but we’ve actually already defined each term earlier in this lecture! Let’s look at them term by term.

+
+

18.4.1.1 Observation Variance

+

The first term in the above decomposition is \(E[\epsilon^2]\). Remember \(\epsilon\) is the random noise when observing \(Y\), with expectation \(\mathbb{E}(\epsilon)=0\) and variance \(\text{Var}(\epsilon) = \sigma^2\). We can show that \(E[\epsilon^2]\) is the variance of \(\epsilon\): \[
\begin{align*}
\text{Var}(\epsilon) &= E[\epsilon^2] - \left(E[\epsilon]\right)^2\\
&= E[\epsilon^2] - 0^2\\
&= \sigma^2.
\end{align*}
\]

+

This term describes how variable the random error \(\epsilon\) (and \(Y\)) is for each observation. This is called the observation variance. It exists due to the randomness in our observations \(Y\). It is a form of chance error we talked about in the Sampling lecture.

+

\[\text{observation variance} = \text{Var}(\epsilon) = \sigma^2.\]

+

The observation variance results from measurement errors when observing data or missing information that acts like noise. To reduce this observation variance, we could try to get more precise measurements, but it is often beyond the control of data scientists. Because of this, the observation variance \(\sigma^2\) is sometimes called “irreducible error.”

+
+
+

18.4.1.2 Model Variance

+

We will then look at the last term: \(E\left[\left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)^2\right]\). If you recall the definition of variance from the last lecture, this is precisely \(\text{Var}(\hat{Y}(x))\). We call this the model variance.

+

It describes how much the prediction \(\hat{Y}(x)\) tends to vary when we fit the model on different samples. Remember the sample we collect can come out very differently, thus the prediction \(\hat{Y}(x)\) will also be different. The model variance describes this variability due to the randomness in our sampling process. Like observation variance, it is also a form of chance error—even though the sources of randomness are different.

+

\[\text{model variance} = \text{Var}(\hat{Y}(x)) = E\left[\left(\hat{Y}(x) - E\left[\hat{Y}(x)\right]\right)^2\right]\]

+

The main reason for large model variance is overfitting: the model pays so much attention to the details of the particular sample it was trained on that small differences between random samples lead to large differences in the fitted model. To remedy this, we try to reduce model complexity (e.g., take out some features and limit the magnitude of estimated model coefficients) so that the model does not fit the noise.

+
+
+

18.4.1.3 Model Bias

+

Finally, the second term is \(\left(g(x)-E\left[\hat{Y}(x)\right]\right)^2\). What is this? The term \(E\left[\hat{Y}(x)\right] - g(x)\) is called the model bias.

+

Remember that \(g(x)\) is the fixed underlying truth and \(\hat{Y}(x)\) is our fitted model, which is random. Model bias therefore measures how far off \(g(x)\) and \(\hat{Y}(x)\) are on average over all possible samples.

+

\[\text{model bias} = E\left[\hat{Y}(x) - g(x)\right] = E\left[\hat{Y}(x)\right] - g(x)\]

+

The model bias is not random; it’s an average measure for a specific individual \(x\). If bias is positive, our model tends to overestimate \(g(x)\); if it’s negative, our model tends to underestimate \(g(x)\). And if it’s 0, we can say that our model is unbiased.

+
+
+
+ +
+
+Unbiased Estimators +
+
+
+

An unbiased model has a \(\text{model bias } = 0\). In other words, our model predicts \(g(x)\) on average.

+

Similarly, we can define bias for estimators like the mean. The sample mean is an unbiased estimator of the population mean, since by linearity of expectation, \(\mathbb{E}[\bar{X}_n] = \mu\). Therefore, the \(\text{estimator bias } = \mathbb{E}[\bar{X}_n] - \mu = 0\).

+
+
+

There are two main reasons for large model biases:

+
    +
  • Underfitting: our model is too simple for the data
  • +
  • Lack of domain knowledge: we don’t understand what features are useful for the response variable
  • +
+

To fix this, we increase model complexity (but we don’t want to overfit!) or consult domain experts to see which models make sense. You can start to see a tradeoff here: if we increase model complexity, we decrease the model bias, but we also risk increasing the model variance.

+
+
+
+

18.4.2 The Decomposition

+

To summarize:

+
    +
  • The model risk, \(\mathbb{E}\left[(Y(x)-\hat{Y}(x))^2\right]\), is the mean squared prediction error of the model. It is an expectation and is therefore a fixed number (for a given x).
  • +
  • The observation variance, \(\sigma^2\), is the variance of the random noise in the observations. It describes how variable the random error \(\epsilon\) is for each observation and cannot be addressed by modeling.
  • +
  • The model bias, \(\mathbb{E}\left[\hat{Y}(x)\right]-g(x)\), is how “off” the \(\hat{Y}(x)\) is as an estimator of the true underlying relationship \(g(x)\).
  • +
  • The model variance, \(\text{Var}(\hat{Y}(x))\), describes how much the prediction \(\hat{Y}(x)\) tends to vary when we fit the model on different samples.
  • +
+

The above definitions enable us to simplify the decomposition of model risk before as:

+

\[ E[(Y(x) - \hat{Y}(x))^2] = \sigma^2 + (E[\hat{Y}(x)] - g(x))^2 + \text{Var}(\hat{Y}(x)) \] \[\text{model risk } = \text{observation variance} + (\text{model bias})^2 \text{+ model variance}\]

+

This is known as the bias-variance tradeoff. What does it mean? Remember that the model risk is a measure of the model’s performance. Our goal in building models is to keep model risk low; this means that we will want to ensure that each component of model risk is kept at a small value.

+

Observation variance is an inherent, random part of the data collection process. We aren’t able to reduce the observation variance, so we’ll focus our attention on the model bias and model variance.

+

In the Feature Engineering lecture, we considered the issue of overfitting. We saw that the model’s error or bias tends to decrease as model complexity increases — if we design a highly complex model, it will tend to make predictions that are closer to the true relationship \(g\). At the same time, model variance tends to increase as model complexity increases; a complex model may overfit to the training data, meaning that small differences in the random samples used for training lead to large differences in the fitted model. We have a problem. To decrease model bias, we could increase the model’s complexity, which would lead to overfitting and an increase in model variance. Alternatively, we could decrease model variance by decreasing the model’s complexity at the cost of increased model bias due to underfitting.

+

+bvt +

+

We need to strike a balance. Our goal in model creation is to use a complexity level that is high enough to keep bias low, but not so high that model variance is large.
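The following simulation sketch (our own, with an assumed toy truth \(g\) and noise level) checks the decomposition numerically at a single point \(x_0\) by fitting an intentionally simple linear model on many random training samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    return 3 * np.sin(x)   # our assumed "truth"; a straight line cannot capture it exactly

sigma, n, x0, reps = 0.5, 40, 2.0, 5_000
preds, sq_errors = [], []

for _ in range(reps):
    x = rng.uniform(0, 5, size=n)
    Y = g(x) + rng.normal(0, sigma, size=n)      # a random training sample
    coeffs = np.polyfit(x, Y, deg=1)             # fit a (too) simple linear model
    y_hat_x0 = np.polyval(coeffs, x0)            # its prediction at x = x0
    Y_new = g(x0) + rng.normal(0, sigma)         # a fresh observation at x0
    preds.append(y_hat_x0)
    sq_errors.append((Y_new - y_hat_x0) ** 2)

preds = np.array(preds)
obs_var = sigma ** 2
model_bias_sq = (preds.mean() - g(x0)) ** 2
model_var = preds.var()

# The two printed numbers should agree up to Monte Carlo error.
print("simulated model risk:        ", np.mean(sq_errors))
print("obs var + bias^2 + model var:", obs_var + model_bias_sq + model_var)
```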

+
+
+
+

18.5 [Bonus] Proof of Bias-Variance Decomposition

+

This section walks through the detailed derivation of the Bias-Variance Decomposition in the Bias-Variance Tradeoff section above, and this content is out of scope.

+
+ +
+
+

We want to prove that the model risk can be decomposed as

+

\[ +\begin{align*} +E\left[(Y(x)-\hat{Y}(x))^2\right] &= E[\epsilon^2] + \left(g(x)-E\left[\hat{Y}(x)\right]\right)^2 + E\left[\left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)^2\right]. +\end{align*} +\]

+

To prove this, we will first need the following lemma:

+
+If \(V\) and \(W\) are independent random variables then \(E[VW] = E[V]E[W]\). +
+

We will prove this in the discrete finite case. Trust that it’s true in greater generality.

+

The job is to calculate the weighted average of the values of \(VW\), where the weights are the probabilities of those values. Here goes.

+

\[\begin{align*} +E[VW] ~ &= ~ \sum_v\sum_w vwP(V=v \text{ and } W=w) \\ +&= ~ \sum_v\sum_w vwP(V=v)P(W=w) ~~~~ \text{by independence} \\ +&= ~ \sum_v vP(V=v)\sum_w wP(W=w) \\ +&= ~ E[V]E[W] +\end{align*}\]

+

Now we go into the actual proof:

+
+

18.5.1 Goal

+

Decompose the model risk into recognizable components.

+
+
+

18.5.2 Step 1

+

\[ +\begin{align*} +\text{model risk} ~ &= ~ E\left[\left(Y - \hat{Y}(x)\right)^2 \right] \\ +&= ~ E\left[\left(g(x) + \epsilon - \hat{Y}(x)\right)^2 \right] \\ +&= ~ E\left[\left(\epsilon + \left(g(x)- \hat{Y}(x)\right)\right)^2 \right] \\ +&= ~ E\left[\epsilon^2\right] + 2E\left[\epsilon \left(g(x)- \hat{Y}(x)\right)\right] + E\left[\left(g(x) - \hat{Y}(x)\right)^2\right]\\ +\end{align*} +\]

+

On the right hand side:

+
    +
  • The first term is the observation variance \(\sigma^2\).
  • +
  • The cross product term is 0 because \(\epsilon\) is independent of \(g(x) - \hat{Y}(x)\) and \(E(\epsilon) = 0\)
  • +
  • The last term is the mean squared difference between our predicted value and the value of the true function at \(x\)
  • +
+
+
+

18.5.3 Step 2

+

At this stage we have

+

\[ +\text{model risk} ~ = ~ E\left[\epsilon^2\right] + E\left[\left(g(x) - \hat{Y}(x)\right)^2\right] +\]

+

We don’t yet have a good understanding of \(g(x) - \hat{Y}(x)\). But we do understand the deviation \(D_{\hat{Y}(x)} = \hat{Y}(x) - E\left[\hat{Y}(x)\right]\). We know that

+
    +
  • \(E\left[D_{\hat{Y}(x)}\right] ~ = ~ 0\)
  • +
  • \(E\left[D_{\hat{Y}(x)}^2\right] ~ = ~ \text{model variance}\)
  • +
+

So let’s add and subtract \(E\left[\hat{Y}(x)\right]\) and see if that helps.

+

\[ +g(x) - \hat{Y}(x) ~ = ~ \left(g(x) - E\left[\hat{Y}(x)\right] \right) + \left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right) +\]

+

The first term on the right hand side is the model bias at \(x\). The second term is \(-D_{\hat{Y}(x)}\). So

+

\[ +g(x) - \hat{Y}(x) ~ = ~ \text{model bias} - D_{\hat{Y}(x)} +\]

+
+
+

18.5.4 Step 3

+

Remember that the model bias at \(x\) is a constant, not a random variable. Think of it as your favorite number, say 10. Then \[ +\begin{align*} +E\left[ \left(g(x) - \hat{Y}(x)\right)^2 \right] ~ &= ~ \text{model bias}^2 - 2(\text{model bias})E\left[D_{\hat{Y}(x)}\right] + E\left[D_{\hat{Y}(x)}^2\right] \\ +&= ~ \text{model bias}^2 - 0 + \text{model variance} \\ +&= ~ \text{model bias}^2 + \text{model variance} +\end{align*} +\]

+

Again, the cross-product term is \(0\) because \(E\left[D_{\hat{Y}(x)}\right] ~ = ~ 0\).

+
+
+

18.5.5 Step 4: Bias-Variance Decomposition

+

In Step 2, we had:

+

\[ +\text{model risk} ~ = ~ \text{observation variance} + E\left[\left(g(x) - \hat{Y}(x)\right)^2\right] +\]

+

Step 3 showed:

+

\[ +E\left[ \left(g(x) - \hat{Y}(x)\right)^2 \right] ~ = ~ \text{model bias}^2 + \text{model variance} +\]

+

Thus, we have proven the bias-variance decomposition:

+

\[ +\text{model risk} = \text{observation variance} + \text{model bias}^2 + \text{model variance}. +\]

+

That is,

+

\[ +E\left[(Y(x)-\hat{Y}(x))^2\right] = \sigma^2 + \left(E\left[\hat{Y}(x)\right] - g(x)\right)^2 + E\left[\left(\hat{Y}(x)-E\left[\hat{Y}(x)\right]\right)^2\right] +\]

+
+
+
+
+ + + + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/regex/regex.html b/docs/regex/regex.html new file mode 100644 index 000000000..df18ee2cd --- /dev/null +++ b/docs/regex/regex.html @@ -0,0 +1,1982 @@ + + + + + + + + + +6  Regular Expressions – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

6  Regular Expressions

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Understand Python string manipulation, pandas Series methods
  • +
  • Parse and create regex, with a reference table
  • +
  • Use vocabulary (closure, metacharacters, groups, etc.) to describe regex metacharacters
  • +
+
+
+
+
+

6.1 Why Work with Text?

+

Last lecture, we learned of the difference between quantitative and qualitative variable types. The latter includes string data — the primary focus of lecture 6. In this note, we’ll discuss the necessary tools to manipulate text: Python string manipulation and regular expressions.

+

There are two main reasons for working with text.

+
    +
  1. Canonicalization: Convert data that has multiple formats into a standard form. +
      +
    • By manipulating text, we can join tables with mismatched string labels.
    • +
  2. +
  3. Extract information into a new feature. +
      +
    • For example, we can extract date and time features from text.
    • +
  4. +
+
+
+

6.2 Python String Methods

+

First, we’ll introduce a few methods useful for string manipulation. The following table includes a number of string operations supported by Python and pandas. The Python functions operate on a single string, while their equivalent in pandas are vectorized — they operate on a Series of string data.

+ +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OperationPythonPandas (Series)
Transformation
    +
  • s.lower()
  • +
  • s.upper()
  • +
    +
  • ser.str.lower()
  • +
  • ser.str.upper()
  • +
Replacement + Deletion
    +
  • s.replace(_)
  • +
    +
  • ser.str.replace(_)
  • +
Split
    +
  • s.split(_)
  • +
    +
  • ser.str.split(_)
  • +
Substring
    +
  • s[1:4]
  • +
    +
  • ser.str[1:4]
  • +
Membership
    +
  • '_' in s
  • +
    +
  • ser.str.contains(_)
  • +
Length
    +
  • len(s)
  • +
    +
  • ser.str.len()
  • +
+

We’ll discuss the differences between Python string functions and pandas Series methods in the following section on canonicalization.

+
+

6.2.1 Canonicalization

+

Assume we want to merge the given tables.

+
+
+Code +
import pandas as pd
+
+with open('data/county_and_state.csv') as f:
+    county_and_state = pd.read_csv(f)
+    
+with open('data/county_and_population.csv') as f:
+    county_and_pop = pd.read_csv(f)
+
+
+
+
display(county_and_state), display(county_and_pop);
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountyState
0De Witt CountyIL
1Lac qui Parle CountyMN
2Lewis and Clark CountyMT
3St John the Baptist ParishLS
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountyPopulation
0DeWitt16798
1Lac Qui Parle8067
2Lewis & Clark55716
3St. John the Baptist43044
+ +
+
+
+

Last time, we used a primary key and foreign key to join two tables. While neither of these keys exist in our DataFrames, the "County" columns look similar enough. Can we convert these columns into one standard, canonical form to merge the two tables?

+
+

6.2.1.1 Canonicalization with Python String Manipulation

+

The following function uses Python string manipulation to convert a single county name into canonical form. It does so by eliminating whitespace, punctuation, and unnecessary text.

+
+
def canonicalize_county(county_name):
+    return (
+        county_name
+            .lower()
+            .replace(' ', '')
+            .replace('&', 'and')
+            .replace('.', '')
+            .replace('county', '')
+            .replace('parish', '')
+    )
+
+canonicalize_county("St. John the Baptist")
+
+
'stjohnthebaptist'
+
+
+

We will use the pandas map function to apply the canonicalize_county function to every row in both DataFrames. In doing so, we’ll create a new column in each called clean_county_python with the canonical form.

+
+
county_and_pop['clean_county_python'] = county_and_pop['County'].map(canonicalize_county)
+county_and_state['clean_county_python'] = county_and_state['County'].map(canonicalize_county)
+display(county_and_state), display(county_and_pop);
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountyStateclean_county_python
0De Witt CountyILdewitt
1Lac qui Parle CountyMNlacquiparle
2Lewis and Clark CountyMTlewisandclark
3St John the Baptist ParishLSstjohnthebaptist
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountyPopulationclean_county_python
0DeWitt16798dewitt
1Lac Qui Parle8067lacquiparle
2Lewis & Clark55716lewisandclark
3St. John the Baptist43044stjohnthebaptist
+ +
+
+
+
+
+

6.2.1.2 Canonicalization with Pandas Series Methods

+

Alternatively, we can use pandas Series methods to create this standardized column. To do so, we must call the .str attribute of our Series object prior to calling any methods, like .lower and .replace. Notice how these method names match their equivalent built-in Python string functions.

+

Chaining multiple Series methods in this manner eliminates the need to use the map function (as this code is vectorized).

+
+
def canonicalize_county_series(county_series):
+    return (
+        county_series
+            .str.lower()
+            .str.replace(' ', '')
+            .str.replace('&', 'and')
+            .str.replace('.', '')
+            .str.replace('county', '')
+            .str.replace('parish', '')
+    )
+
+county_and_pop['clean_county_pandas'] = canonicalize_county_series(county_and_pop['County'])
+county_and_state['clean_county_pandas'] = canonicalize_county_series(county_and_state['County'])
+display(county_and_pop), display(county_and_state);
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountyPopulationclean_county_pythonclean_county_pandas
0DeWitt16798dewittdewitt
1Lac Qui Parle8067lacquiparlelacquiparle
2Lewis & Clark55716lewisandclarklewisandclark
3St. John the Baptist43044stjohnthebaptiststjohnthebaptist
+ +
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
CountyStateclean_county_pythonclean_county_pandas
0De Witt CountyILdewittdewitt
1Lac qui Parle CountyMNlacquiparlelacquiparle
2Lewis and Clark CountyMTlewisandclarklewisandclark
3St John the Baptist ParishLSstjohnthebaptiststjohnthebaptist
+ +
+
+
+
+
+
+

6.2.2 Extraction

+

Extraction explores the idea of obtaining useful information from text data. This will be particularly important in model building, which we’ll study in a few weeks.

+

Say we want to read some data from a .txt file.

+
+
with open('data/log.txt', 'r') as f:
+    log_lines = f.readlines()
+
+log_lines
+
+
['169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] "GET /stat141/Winter04/ HTTP/1.1" 200 2585 "http://anson.ucdavis.edu/courses/"\n',
+ '193.205.203.3 - - [2/Feb/2005:17:23:6 -0800] "GET /stat141/Notes/dim.html HTTP/1.0" 404 302 "http://eeyore.ucdavis.edu/stat141/Notes/session.html"\n',
+ '169.237.46.240 - "" [3/Feb/2006:10:18:37 -0800] "GET /stat141/homework/Solutions/hw1Sol.pdf HTTP/1.1"\n']
+
+
+

Suppose we want to extract the day, month, year, hour, minutes, seconds, and time zone. Unfortunately, these items are not in a fixed position from the beginning of the string, so slicing by some fixed offset won’t work.

+

Instead, we can use some clever thinking. Notice how the relevant information is contained within a set of brackets, further separated by / and :. We can hone in on this region of text, and split the data on these characters. Python’s built-in .split function makes this easy.

+
+
first = log_lines[0] # Only considering the first row of data
+
+pertinent = first.split("[")[1].split(']')[0]
+day, month, rest = pertinent.split('/')
+year, hour, minute, rest = rest.split(':')
+seconds, time_zone = rest.split(' ')
+day, month, year, hour, minute, seconds, time_zone
+
+
('26', 'Jan', '2014', '10', '47', '58', '-0800')
+
+
+

There are two problems with this code:

+
    +
  1. Python’s built-in functions limit us to extract data one record at a time, +
      +
    • This can be resolved using the map function or pandas Series methods.
    • +
  2. +
  3. The code is quite verbose. +
      +
    • This is a larger issue that is trickier to solve
    • +
  4. +
+

In the next section, we’ll introduce regular expressions - a tool that solves problem 2.

+
+
+
+

6.3 RegEx Basics

+

A regular expression (“RegEx”) is a sequence of characters that specifies a search pattern. They are written to extract specific information from text. Regular expressions are essentially part of a smaller programming language embedded in Python, made available through the re module. As such, they have a stand-alone syntax and methods for various capabilities.

+

Regular expressions are useful in many applications beyond data science. For example, Social Security Numbers (SSNs) are often validated with regular expressions.

+
+
r"[0-9]{3}-[0-9]{2}-[0-9]{4}" # Regular Expression Syntax
+
+# 3 of any digit, then a dash,
+# then 2 of any digit, then a dash,
+# then 4 of any digit
+
+
'[0-9]{3}-[0-9]{2}-[0-9]{4}'
+
+
+ +
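As a quick, hypothetical illustration of how such a pattern can be used for validation (a snippet of our own, not from the original notes), re.fullmatch succeeds only when the entire string fits the pattern:

```python
import re

pattern = r"[0-9]{3}-[0-9]{2}-[0-9]{4}"

print(bool(re.fullmatch(pattern, "123-45-6789")))   # True: the whole string fits
print(bool(re.fullmatch(pattern, "123-456-789")))   # False: wrong grouping of digits
```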

There are a ton of resources to learn and experiment with regular expressions. A few are provided below:

+ +
+

6.3.1 Basics RegEx Syntax

+

There are four basic operations with regular expressions.

+ +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OperationOrderSyntax ExampleMatchesDoesn’t Match
Or: |4AA|BAABAA
BAAB
every other string
Concatenation3AABAABAABAABevery other string
Closure: *
(zero or more)
2AB*AAA ABBBBBBAAB
ABABA
Group: ()
(parenthesis)
1A(A|B)AAB


(AB)*A
AAAAB ABAAB


A
ABABABABA
every other string


AA
ABBA
+

Notice how these metacharacter operations are ordered. Rather than being literal characters, these metacharacters manipulate adjacent characters. () takes precedence, followed by *, and finally |. This allows us to differentiate between very different regex commands like AB* and (AB)*. The former reads “A then zero or more copies of B”, while the latter specifies “zero or more copies of AB”.

+
+

6.3.1.1 Examples

+

Question 1: Give a regular expression that matches moon, moooon, etc. Your expression should match any even number of os except zero (i.e. don’t match mn).

+

Answer 1: moo(oo)*n

+
    +
  • Hardcoding oo before the capture group ensures that mn is not matched.
  • +
  • A capture group of (oo)* ensures the number of o’s is even.
  • +
+

Question 2: Using only basic operations, formulate a regex that matches muun, muuuun, moon, moooon, etc. Your expression should match any even number of us or os except zero (i.e. don’t match mn).

+

Answer 2: m(uu(uu)*|oo(oo)*)n

+
    +
  • The leading m and trailing n ensures that only strings beginning with m and ending with n are matched.
  • +
  • Notice how the outer capture group surrounds the |. +
      +
    • Consider the regex m(uu(uu)*)|(oo(oo)*)n. This incorrectly matches muu and oooon. +
        +
      • Each OR clause is everything to the left and right of |. The incorrect solution matches only half of the string, and ignores either the beginning m or trailing n.
      • +
      • A set of parenthesis must surround |. That way, each OR clause is everything to the left and right of | within the group. This ensures both the beginning m and trailing n are matched.
      • +
    • +
  • +
+
+
+
+
+
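As a quick check of the two answers above (a snippet of our own, not from the original notes), re.fullmatch only returns a match when the whole string fits the pattern:

```python
import re

pattern_1 = r"moo(oo)*n"
print([s for s in ["mn", "moon", "mooon", "moooon"] if re.fullmatch(pattern_1, s)])
# ['moon', 'moooon']  -- only even, nonzero numbers of o's

pattern_2 = r"m(uu(uu)*|oo(oo)*)n"
print([s for s in ["mn", "muun", "moon", "muuuun", "muon"] if re.fullmatch(pattern_2, s)])
# ['muun', 'moon', 'muuuun']  -- mixed vowels like "muon" are rejected
```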

6.4 RegEx Expanded

+

Provided below are more complex regular expression functions.

+ ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OperationSyntax ExampleMatchesDoesn’t Match
Any Character: .
(except newline)
.U.U.U.CUMULUS
JUGULUM
SUCCUBUS TUMULTUOUS
Character Class: []
(match one character in [])
[A-Za-z][a-z]*word
Capitalized
camelCase 4illegal
Repeated "a" Times: {a}
j[aeiou]{3}hnjaoehn
jooohn
jhn
jaeiouhn
Repeated "from a to b" Times: {a, b}
j[ou]{1,2}hnjohn
juohn
jhn
jooohn
At Least One: +jo+hnjohn
joooooohn
jhn
jjohn
Zero or One: ?joh?njon
john
any other string
+

A character class matches a single character in its class. These characters can be hardcoded —— in the case of [aeiou] —— or shorthand can be specified to mean a range of characters. Examples include:

+
    +
  1. [A-Z]: Any capitalized letter
  2. +
  3. [a-z]: Any lowercase letter
  4. +
  5. [0-9]: Any single digit
  6. +
  7. [A-Za-z]: Any capitalized or lowercase letter
  8. +
  9. [A-Za-z0-9]: Any capitalized or lowercase letter or single digit
  10. +
+
+

6.4.0.1 Examples

+

Let’s analyze a few examples of complex regular expressions.

+ ++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
MatchesDoes Not Match
    +
  1. .*SPB.*
  2. +
RASPBERRY
SPBOO
SUBSPACE
SUBSPECIES
    +
  1. [0-9]{3}-[0-9]{2}-[0-9]{4}
  2. +
231-41-5121
573-57-1821
231415121
57-3571821
    +
  1. [a-z]+@([a-z]+\.)+(edu|com)
  2. +
horse@pizza.com
horse@pizza.food.com
frank_99@yahoo.com
hug@cs
+

Explanations

+
    +
  1. .*SPB.* only matches strings that contain the substring SPB. +
      +
    • The .* metacharacter matches any number of characters (zero or more). Newlines do not count.
      +
    • +
  2. +
  3. This regular expression matches 3 of any digit, then a dash, then 2 of any digit, then a dash, then 4 of any digit. +
      +
    • You’ll recognize this as the familiar Social Security Number regular expression.
    • +
  4. +
  5. Matches any email with a com or edu domain, where all characters of the email are letters. +
      +
    • At least one . must precede the domain name. Including a backslash \ before any metacharacter (in this case, the .) tells RegEx to match that character exactly.
    • +
  6. +
+
+
+
+

6.5 Convenient RegEx

+

Here are a few more convenient regular expressions.

+ ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OperationSyntax ExampleMatchesDoesn’t Match
built in character class\w+
\d+
\s+
Fawef_03
231123
whitespace
this person
423 people
non-whitespace
character class negation: [^] (everything except the given characters)[^a-z]+.PEPPERS3982 17211!↑åporch
CLAmS
escape character: \
(match the literal next character)
cow\.comcow.comcowscom
beginning of line: ^^arkark two ark o arkdark
end of line: $ark$dark
ark o ark
ark two
lazy version of zero or more : *?5.*?55005
55
5005005
+
+

6.5.1 Greediness

+

In order to fully understand the last operation in the table, we have to discuss greediness. RegEx is greedy – it will look for the longest possible match in a string. To motivate this with an example, consider the pattern <div>.*</div>. In the sentence below, we would hope that the bolded portions would be matched:

+

“This is a <div>example</div> of greediness <div>in</div> regular expressions.”

+

However, in reality, RegEx captures far more of the sentence. The way RegEx processes the text given that pattern is as follows:

+
    +
  1. “Look for the exact string <div>”

  2. +
  3. Then, “look for any character 0 or more times”

  4. +
  5. Then, “look for the exact string </div>”

  6. +
+

The result would be all the characters starting from the leftmost <div> and the rightmost </div> (inclusive):

+

“This is a <div>example</div> of greediness <div>in</div> regular expressions.”

+

We can fix this by making our pattern non-greedy, <div>.*?</div>. You can read up more in the documentation here.
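Here is a small sketch (our own) contrasting the greedy and non-greedy patterns on the sentence above:

```python
import re

sentence = "This is a <div>example</div> of greediness <div>in</div> regular expressions."

print(re.findall(r"<div>.*</div>", sentence))
# ['<div>example</div> of greediness <div>in</div>']   (greedy: one long match)

print(re.findall(r"<div>.*?</div>", sentence))
# ['<div>example</div>', '<div>in</div>']               (non-greedy: two short matches)
```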

+
+
+

6.5.2 Examples

+

Let’s revisit our earlier problem of extracting date/time data from the given .txt files. Here is how the data looked.

+
+
log_lines[0]
+
+
'169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] "GET /stat141/Winter04/ HTTP/1.1" 200 2585 "http://anson.ucdavis.edu/courses/"\n'
+
+
+

Question: Give a regular expression that matches everything contained within and including the brackets - the day, month, year, hour, minutes, seconds, and time zone.

+

Answer: \[.*\]

+
    +
  • Notice how matching the literal [ and ] is necessary. Therefore, an escape character \ is required before both [ and ] — otherwise these metacharacters will match character classes.
  • +
  • We need to match a particular format between [ and ]. For this example, .* will suffice.
  • +
+

Alternative Solution: \[\w+/\w+/\w+:\w+:\w+:\w+\s-\w+\]

+
    +
  • This solution is much safer. +
      +
    • Imagine the data between [ and ] was garbage - .* will still match that.
    • +
    • The alternate solution will only match data that follows the correct format.
    • +
  • +
+
+
+
+

6.6 Regex in Python and Pandas (RegEx Groups)

+
+

6.6.1 Canonicalization

+
+

6.6.1.1 Canonicalization with RegEx

+

Earlier in this note, we examined the process of canonicalization using python string manipulation and pandas Series methods. However, we mentioned this approach had a major flaw: our code was unnecessarily verbose. Equipped with our knowledge of regular expressions, let’s fix this.

+

To do so, we need to understand a few functions in the re module. The first of these is the substitute function: re.sub(pattern, repl, text). It behaves similarly to Python’s built-in .replace function, and returns text with all instances of pattern replaced by repl.

+

The regular expression here removes text surrounded by <> (also known as HTML tags).

+

In order, the pattern matches:
  1. a single <
  2. any characters that are not a > (in this example: div, td valign…, /td, /div)
  3. a single >

+

Any substring in text that fulfills all three conditions will be replaced by ''.

+
+
import re
+
+text = "<div><td valign='top'>Moo</td></div>"
+pattern = r"<[^>]+>"
+re.sub(pattern, '', text) 
+
+
'Moo'
+
+
+

Notice the r preceding the regular expression pattern; this specifies the regular expression is a raw string. Raw strings do not recognize escape sequences (e.g., the Python newline escape sequence \n). This makes them useful for regular expressions, which often contain literal \ characters.

+

In other words, don’t forget to tag your RegEx with an r.

+
+
+

6.6.1.2 Canonicalization with pandas

+

We can also use regular expressions with pandas Series methods. This gives us the benefit of operating on an entire column of data as opposed to a single value. The code is simple:
ser.str.replace(pattern, repl, regex=True).

+

Consider the following DataFrame html_data with a single column.

+
+
+Code +
data = {"HTML": ["<div><td valign='top'>Moo</td></div>", \
+                 "<a href='http://ds100.org'>Link</a>", \
+                 "<b>Bold text</b>"]}
+html_data = pd.DataFrame(data)
+
+
+
+
html_data
+
+
+ + + + + + + + + + + + + + + + + + + + + + + +
HTML
0<div><td valign='top'>Moo</td></div>
1<a href='http://ds100.org'>Link</a>
2<b>Bold text</b>
+ +
+
+
+
+
pattern = r"<[^>]+>"
+html_data['HTML'].str.replace(pattern, '', regex=True)
+
+
0          Moo
+1         Link
+2    Bold text
+Name: HTML, dtype: object
+
+
+
+
+
+

6.6.2 Extraction

+
+

6.6.2.1 Extraction with RegEx

+

Just like with canonicalization, the re module provides capability to extract relevant text from a string:
re.findall(pattern, text). This function returns a list of all matches to pattern.

+

Using the familiar regular expression for Social Security Numbers:

+
+
text = "My social security number is 123-45-6789 bro, or maybe it’s 321-45-6789."
+pattern = r"[0-9]{3}-[0-9]{2}-[0-9]{4}"
+re.findall(pattern, text)  
+
+
['123-45-6789', '321-45-6789']
+
+
+
+
+

6.6.2.2 Extraction with pandas

+

pandas similarly provides extraction functionality on a Series of data: ser.str.findall(pattern)

+

Consider the following DataFrame ssn_data.

+
+
+Code +
data = {"SSN": ["987-65-4321", "forty", \
+                "123-45-6789 bro or 321-45-6789",
+               "999-99-9999"]}
+ssn_data = pd.DataFrame(data)
+
+
+
+
ssn_data
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + +
SSN
0987-65-4321
1forty
2123-45-6789 bro or 321-45-6789
3999-99-9999
+ +
+
+
+
+
ssn_data["SSN"].str.findall(pattern)
+
+
0                 [987-65-4321]
+1                            []
+2    [123-45-6789, 321-45-6789]
+3                 [999-99-9999]
+Name: SSN, dtype: object
+
+
+

This function returns a list for every row containing the pattern matches in a given string.

+

As you may expect, there are similar pandas equivalents for other re functions as well. Series.str.extract takes in a pattern and returns a DataFrame of each capture group’s first match in the string. In contrast, Series.str.extractall returns a multi-indexed DataFrame of all matches for each capture group. You can see the difference in the outputs below:

+
+
pattern_cg = r"([0-9]{3})-([0-9]{2})-([0-9]{4})"
+ssn_data["SSN"].str.extract(pattern_cg)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
012
0987654321
1NaNNaNNaN
2123456789
3999999999
+ +
+
+
+
+
ssn_data["SSN"].str.extractall(pattern_cg)
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
012
match
00987654321
20123456789
1321456789
30999999999
+ +
+
+
+
+
+
+

6.6.3 Regular Expression Capture Groups

+

Earlier we used parentheses ( ) to specify the highest order of operation in regular expressions. However, they have another meaning: parentheses are also used to represent capture groups. Capture groups are, essentially, smaller regular expressions within a pattern that match and extract particular substrings from text data.

+

Let’s take a look at an example.

+
+

6.6.3.1 Example 1

+
+
text = "Observations: 03:04:53 - Horse awakens. \
+        03:05:14 - Horse goes back to sleep."
+
+

Say we want to capture all occurrences of time data (hour, minute, and second) as separate entities.

+
+
pattern_1 = r"(\d\d):(\d\d):(\d\d)"
+re.findall(pattern_1, text)
+
+
[('03', '04', '53'), ('03', '05', '14')]
+
+
+

Notice how the given pattern has 3 capture groups, each specified by the regular expression (\d\d). We then use re.findall to return these capture groups, each as tuples containing 3 matches.

+

These regular expression capture groups can be different. We can use the (\d{2}) shorthand to extract the same data.

+
+
pattern_2 = r"(\d\d):(\d\d):(\d{2})"
+re.findall(pattern_2, text)
+
+
[('03', '04', '53'), ('03', '05', '14')]
+
+
+
+
+

6.6.3.2 Example 2

+

With the notion of capture groups, convince yourself how the following regular expression works.

+
+
first = log_lines[0]
+first
+
+
'169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] "GET /stat141/Winter04/ HTTP/1.1" 200 2585 "http://anson.ucdavis.edu/courses/"\n'
+
+
+
+
pattern = r'\[(\d+)\/(\w+)\/(\d+):(\d+):(\d+):(\d+) (.+)\]'
+day, month, year, hour, minute, second, time_zone = re.findall(pattern, first)[0]
+print(day, month, year, hour, minute, second, time_zone)
+
+
26 Jan 2014 10 47 58 -0800
+
+
+
+
+
+
+

6.7 Limitations of Regular Expressions

+

Today, we explored the capabilities of regular expressions in data wrangling with text data. However, there are a few things to be wary of.

+

Writing regular expressions is like writing a program.

+
    +
  • Need to know the syntax well.
  • +
  • Can be easier to write than to read.
  • +
  • Can be difficult to debug.
  • +
+

Regular expressions are terrible at certain types of problems:

+
    +
  • For parsing a hierarchical structure, such as JSON, use the json.load() parser, not RegEx!
  • +
  • Complex features (e.g. valid email address).
  • +
  • Counting (same number of instances of a and b). (impossible)
  • +
  • Complex properties (palindromes, balanced parentheses). (impossible)
  • +
+

Ultimately, the goal is not to memorize all regular expressions. Rather, the aim is to:

+
    +
  • Understand what RegEx is capable of.
  • +
  • Parse and create RegEx, with a reference table
  • +
  • Use vocabulary (metacharacter, escape character, groups, etc.) to describe regex metacharacters.
  • +
  • Differentiate between (), [], {}
  • +
  • Design your own character classes with \d, \w, \s, […-…], ^, etc.
  • +
  • Use python and pandas RegEx methods.
  • +
+ + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/sampling/images/data_life_cycle_sampling.png b/docs/sampling/images/data_life_cycle_sampling.png new file mode 100644 index 000000000..ea49768ed Binary files /dev/null and b/docs/sampling/images/data_life_cycle_sampling.png differ diff --git a/docs/sampling/images/samplingframe.png b/docs/sampling/images/samplingframe.png new file mode 100644 index 000000000..fba469633 Binary files /dev/null and b/docs/sampling/images/samplingframe.png differ diff --git a/docs/sampling/sampling.html b/docs/sampling/sampling.html new file mode 100644 index 000000000..167de7548 --- /dev/null +++ b/docs/sampling/sampling.html @@ -0,0 +1,1275 @@ + + + + + + + + + +9  Sampling – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

9  Sampling

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Understand how to appropriately collect data to help answer a question.
  • +
+
+
+
+

In data science, understanding characteristics of a population starts with having quality data to investigate. While it is often impossible to collect all the data describing a population, we can overcome this by properly sampling from the population. In this note, we will discuss appropriate techniques for sampling from populations.

+
+
+

+
Lifecycle diagram
+
+
+
+

9.1 Censuses and Surveys

+

In general: a census is “a complete count or survey of a population, typically recording various details of individuals.” An example is the U.S. Decennial Census which was held in April 2020. It counts every person living in all 50 states, DC, and US territories, not just citizens. Participation is required by law (it is mandated by the U.S. Constitution). Important uses include the allocation of Federal funds, congressional representation, and drawing congressional and state legislative districts. The census is composed of a survey mailed to different housing addresses in the United States.

+

A survey is a set of questions. An example is the set of questions that census workers ask when sampling individuals and households. What is asked and how it is asked can affect how the respondent answers or even whether or not they answer in the first place.

+

While censuses are great, it is often very difficult and expensive to survey everyone in a population. Imagine the amount of resources, money, time, and energy the U.S. spent on the 2020 Census. While this does give us more accurate information about the population, it’s often infeasible to execute. Thus, we usually survey a subset of the population instead.

+

A sample is (usually) a subset of the population that is often used to make inferences about the population. If our sample is a good representation of our population, then we can use it to glean useful information at a lower cost. That being said, how the sample is drawn will affect the reliability of such inferences. Two common sources of error in sampling are chance error, where random samples can vary from what is expected in any direction, and bias, which is a systematic error in one direction. Biases can be the result of many things, for example, our sampling scheme or survey methods.

+

Let’s define some useful vocabulary:

+
    +
  • Population: The group that you want to learn something about. +
      +
    • Individuals in a population are not always people. Other populations include bacteria in your gut (sampled using DNA sequencing), trees of a certain species, small businesses receiving a microloan, or published results in an academic journal or field.
    • +
  • +
  • Sampling Frame: The list from which the sample is drawn. +
      +
    • For example, if sampling people, then the sampling frame is the set of all people that could possibly end up in your sample.
    • +
  • +
  • Sample: Who you actually end up sampling. The sample is therefore a subset of your sampling frame.
  • +
+

While ideally, these three sets would be exactly the same, they usually aren’t in practice. For example, there may be individuals in your sampling frame (and hence, your sample) that are not in your population. And generally, sample sizes are much smaller than population sizes.

+
+
+

+
Sampling_Frames
+
+
+
+
+

9.2 Bias: A Case Study

+

The following case study is adapted from Statistics by Freedman, Pisani, and Purves, W.W. Norton NY, 1978.

+

In 1936, President Franklin D. Roosevelt (Democratic) went up for re-election against Alf Landon (Republican). As is usual, polls were conducted in the months leading up to the election to try and predict the outcome. The Literary Digest was a magazine that had successfully predicted the outcome of 5 general elections coming into 1936. In their polling for the 1936 election, they sent out their survey to 10 million individuals whom they found from phone books, lists of magazine subscribers, and lists of country club members. Of the roughly 2.4 million people who filled out the survey, only 43% reported they would vote for Roosevelt; thus, the Digest predicted that Landon would win.

+

On election day, Roosevelt won in a landslide, winning 61% of the popular vote of about 45 million voters. How could the Digest have been so wrong with their polling?

+

It turns out that the Literary Digest sample was not representative of the population. Their sampling frame of people found in phone books, lists of magazine subscribers, and lists of country club members were more affluent and tended to vote Republican. As such, their sampling frame was inherently skewed in Landon’s favor. The Literary Digest completely overlooked the lion’s share of voters who were still suffering through the Great Depression. Furthermore, they had a dismal response rate (about 24%); who knows how the other non-respondents would have polled? The Digest folded just 18 months after this disaster.

+

At the same time, George Gallup, a rising statistician, also made predictions about the 1936 elections. Despite having a smaller sample size of “only” 50,000 (this is still more than necessary; more when we cover the Central Limit Theorem), his estimate that 56% of voters would choose Roosevelt was much closer to the actual result (61%). Gallup also predicted the Digest’s prediction within 1% with a sample size of only 3000 people by anticipating the Digest’s affluent sampling frame and subsampling those individuals.

+

So what’s the moral of the story? Samples, while convenient, are subject to chance error and bias. Election polling, in particular, can involve many sources of bias. To name a few:

+
    +
  • Selection bias systematically excludes (or favors) particular groups. +
      +
    • Example: the Literary Digest poll excludes people not in phone books.
    • +
    • How to avoid: Examine the sampling frame and the method of sampling.
    • +
  • +
  • Response bias occurs because people don’t always respond truthfully. Survey designers pay special attention to the nature and wording of questions to avoid this type of bias. +
      +
    • Example: Illegal immigrants might not answer truthfully when asked citizenship questions on the census survey.
    • +
    • How to avoid: Examine the nature of the questions and the method of surveying. One useful technique is randomized response: have the respondent flip a coin and answer “yes” if it lands heads, or answer truthfully if it lands tails (more on this below).
    • +
  • +
  • Non-response bias occurs because people don’t always respond to survey requests, which can skew responses. +
      +
    • Example: Only 2.4m out of 10m people responded to the Literary Digest’s poll.
    • +
    • How to avoid: Keep surveys short, and be persistent.
    • +
  • +
+

Randomized Response

+

Suppose you want to ask someone a sensitive question: “Have you ever cheated on an exam?” An individual may be embarrassed or afraid to answer truthfully and might lie or not answer the question. One solution is to leverage a randomized response:

+

First, you can ask the individual to secretly flip a fair coin; you (the surveyor) don’t know the outcome of the coin flip.

+

Then, you ask them to answer “Yes” if the coin landed heads and to answer truthfully if the coin landed tails.

+

The surveyor doesn’t know if the “Yes” means that the person cheated or if it means that the coin landed heads. The individual’s sensitive information remains secret. However, if the response is “No”, then the surveyor knows the individual didn’t cheat. We assume the individual is comfortable revealing this information.

+

Generally, we can assume the coin lands heads 50% of the time. This means that about half of the people who would truthfully answer “No” end up answering “Yes” instead, so their “No” answers are masked. We can therefore double the observed proportion of “No” answers to estimate the true fraction of people who would answer “No”.
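To see why doubling works, here is a minimal simulation of the randomized response scheme. The 30% true cheating rate is an assumption made purely for illustration.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000
truly_cheated = rng.random(n) < 0.30   # hidden truth (assumed rate, for illustration only)
heads = rng.random(n) < 0.5            # each respondent's secret coin flip

# Answer "Yes" if the coin landed heads; otherwise, answer truthfully.
answered_yes = heads | truly_cheated

observed_no = np.mean(~answered_yes)   # observed fraction of "No" answers, about 0.35
estimated_no = 2 * observed_no         # doubling recovers the true "No" fraction, about 0.70
print(estimated_no)

The estimate of roughly 0.70 matches the true fraction of non-cheaters, even though no individual response reveals whether that person cheated.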

+

Election Polls

+

Today, the Gallup Poll is one of the leading polls for election results. The many sources of bias – who responds to polls? Do voters tell the truth? How can we predict turnout? – still remain, but the Gallup Poll uses several tactics to mitigate them. Within their sampling frame of the “civilian, non-institutionalized population” of adults in telephone households in the continental U.S., they use random digit dialing to reach both listed and unlisted phone numbers and avoid selection bias. Additionally, they use a within-household selection process to randomly choose one adult in each household reached. If no one answers, they re-call multiple times to avoid non-response bias.

+
+
+

9.3 Probability Samples

+

When sampling, it is essential to focus on the quality of the sample rather than the quantity of the sample. A huge sample size does not fix a bad sampling method. Our main goal is to gather a sample that is representative of the population it came from. In this section, we’ll explore the different types of sampling and their pros and cons.

+

A convenience sample is whatever you can get ahold of; this type of sampling is non-random. Note that haphazard sampling is not necessarily random sampling; there are many potential sources of bias.

+

In a probability sample, we provide the chance that any specified set of individuals will be in the sample (individuals in the population can have different chances of being selected; they don’t all have to be uniform), and we sample at random based off this known chance. For this reason, probability samples are also called random samples. The randomness provides a few benefits:

+
    +
  • Because we know the source probabilities, we can measure the errors.
  • +
  • Sampling at random gives us a more representative sample of the population, which reduces bias. (Note: this is only the case when the probability distribution we’re sampling from is accurate. Random samples using “bad” or inaccurate distributions can produce biased estimates of population quantities.)
  • +
  • Probability samples allow us to estimate the bias and chance error, which helps us quantify uncertainty (more in a future lecture).
  • +
+

The real world is usually more complicated, and we often don’t know the initial probabilities. For example, we do not generally know the probability that a given bacterium is in a microbiome sample or whether people will answer when Gallup calls landlines. That being said, we still try to model probability sampling to the best of our ability, even when the sampling or measurement process is not fully under our control.

+

A few common random sampling schemes (a short code sketch contrasting the first two follows this list):

+
    +
  • A uniform random sample with replacement is a sample drawn uniformly at random with replacement. +
      +
    • Random doesn’t always mean “uniformly at random,” but in this specific context, it does.
    • +
    • Some individuals in the population might get picked more than once.
    • +
  • +
  • A simple random sample (SRS) is a sample drawn uniformly at random without replacement. +
      +
    • Every individual (and subset of individuals) has the same chance of being selected from the sampling frame.
    • +
    • Every pair has the same chance as every other pair.
    • +
    • Every triple has the same chance as every other triple.
    • +
    • And so on.
    • +
  • +
  • A stratified random sample, where random sampling is performed on strata (specific groups), and the groups together compose a sample.
  • +
+
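Here is a small numpy sketch contrasting a uniform random sample with replacement and an SRS. The population of 10 labeled individuals is made up for illustration.

import numpy as np

rng = np.random.default_rng(100)
population = np.arange(10)   # a made-up population of 10 individuals, labeled 0 through 9

# Uniform random sample with replacement: the same individual can be picked more than once.
with_replacement = rng.choice(population, size=5, replace=True)

# Simple random sample (SRS): uniformly at random, without replacement, so all 5 are distinct.
srs = rng.choice(population, size=5, replace=False)

print(with_replacement)
print(srs)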
+

9.3.1 Example Scheme 1: Probability Sample

+

Suppose we have 3 TA’s (Arman, Boyu, Charlie): I decide to sample 2 of them as follows:

+
    +
  • I choose A with probability 1.0
  • +
  • I choose either B or C, each with a probability of 0.5.
  • +
+

We can list all the possible outcomes and their respective probabilities in a table:

+ + + + + + + + + + + + + + + + + + + + + +
Outcome    Probability
{A, B}     0.5
{A, C}     0.5
{B, C}     0
+

This is a probability sample (though not a great one). Of the 3 people in my population, I know the chance of getting each subset. Suppose I’m measuring the average distance TAs live from campus.

+
    +
  • This scheme does not see the entire population!
  • +
  • My estimate using the single sample I take has some chance error depending on if I see AB or AC.
  • +
  • This scheme is biased towards A’s response.
  • +
+
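To make the bias concrete, here is a small simulation of this scheme. The distances below are made-up numbers used only for illustration.

import numpy as np

rng = np.random.default_rng(100)
distances = {"A": 1.0, "B": 5.0, "C": 9.0}     # assumed distances (in miles) from campus
true_mean = np.mean(list(distances.values()))  # 5.0

estimates = []
for _ in range(100_000):
    other = "B" if rng.random() < 0.5 else "C"   # A is always in the sample
    estimates.append(np.mean([distances["A"], distances[other]]))

print(true_mean, np.mean(estimates))   # the estimates average about 4.0, not 5.0

Each individual sample differs from the truth (chance error), and on average the estimates are pulled toward A's distance (bias).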
+
+

9.3.2 Example Scheme 2: Simple Random Sample

+

Consider the following sampling scheme:

+
    +
  • A class roster has 1100 students listed alphabetically.
  • +
  • Pick one of the first 10 students on the list at random (e.g. Student 8).
  • +
  • To create your sample, take that student and every 10th student listed after that (e.g. Students 8, 18, 28, 38, etc.).
  • +
+
+ +Is this a probability sample? + +

Yes. For a sample [n, n + 10, n + 20, …, n + 1090], where 1 <= n <= 10, the probability of that sample is 1/10. Otherwise, the probability is 0.

+Only 10 possible samples! +
+
+ +Does each student have the same probability of being selected? + +Yes. Each student is chosen with a probability of 1/10. +
+
+ +Is this a simple random sample? + +No. The chance of selecting (8, 18) is 1/10; the chance of selecting (8, 9) is 0. +
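We can verify these claims by enumerating the 10 possible samples directly:

# Enumerate the 10 possible samples from this scheme (students are numbered 1 through 1100).
samples = [list(range(n, 1101, 10)) for n in range(1, 11)]

print(len(samples))                # 10 equally likely samples
print({len(s) for s in samples})   # each sample contains 110 students

# Every student appears in exactly one of the 10 samples, so each is selected with probability 1/10.
sample_sets = [set(s) for s in samples]
appearances = [sum(student in s for s in sample_sets) for student in range(1, 1101)]
print(set(appearances))            # {1}
# Yet students 8 and 18 always appear together, while 8 and 9 never do, so this is not an SRS.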
+
+
+

9.3.3 Demo: Barbie v. Oppenheimer

+

We are trying to collect a sample of Berkeley residents to predict which of Barbie and Oppenheimer would perform better on their shared opening day, July 21st.

+

First, let’s grab a dataset that has every single resident in Berkeley (this is a fake dataset) and which movie they actually watched on July 21st.

+

Let’s load in the movie.csv table. We can assume that:

+
    +
  • is_male is a boolean that indicates if a resident identifies as male.
  • +
  • There are only two movies they can watch on July 21st: Barbie and Oppenheimer.
  • +
  • Every resident watches a movie (either Barbie or Oppenheimer) on July 21st.
  • +
+
+
+Code +
import matplotlib.pyplot as plt
+import numpy as np
+import pandas as pd
+import seaborn as sns
+
+sns.set_theme(style='darkgrid', font_scale = 1.5,
+              rc={'figure.figsize':(7,5)})
+
+rng = np.random.default_rng()
+
+
+
+
movie = pd.read_csv("data/movie.csv")
+
+# create a 1/0 int that indicates Barbie vote
+movie['barbie'] = (movie['movie'] == 'Barbie').astype(int)
+movie.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    age  is_male  movie        barbie
0   35   False    Barbie       1
1   42   True     Oppenheimer  0
2   55   False    Barbie       1
3   77   True     Oppenheimer  0
4   31   False    Barbie       1
+ +
+
+
+

What fraction of Berkeley residents chose Barbie?

+
+
actual_barbie = np.mean(movie["barbie"])
+actual_barbie
+
+
np.float64(0.5302792307692308)
+
+
+

This is the actual outcome of the competition: based on this result, Barbie would win. How would a convenience sample of retirees have done?

+
+

9.3.3.1 Convenience Sample: Retirees

+

Let’s take a convenience sample of people who have retired (>= 65 years old). What proportion of them went to see Barbie instead of Oppenheimer?

+
+
convenience_sample = movie[movie['age'] >= 65] # take a convenience sample of retirees
+np.mean(convenience_sample["barbie"]) # what proportion of them saw Barbie? 
+
+
np.float64(0.3744755089093924)
+
+
+

Based on this result, we would have predicted that Oppenheimer would win! What happened? Is it possible that our sample is too small or noisy?

+
+
# what's the size of our sample? 
+len(convenience_sample)
+
+
359396
+
+
+
+
# what proportion of our data is in the convenience sample? 
+len(convenience_sample)/len(movie)
+
+
0.27645846153846154
+
+
+

Our sample is rather large (roughly 360,000 people), so the error is likely not due solely to chance.

+
+
+

9.3.3.2 Check for Bias

+

Let us aggregate all choices by age and visualize the fraction of Barbie views, split by gender.

+
+
votes_by_barbie = movie.groupby(["age","is_male"]).agg("mean", numeric_only=True).reset_index()
+votes_by_barbie.head()
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    age  is_male  barbie
0   18   False    0.819594
1   18   True     0.667001
2   19   False    0.812214
3   19   True     0.661252
4   20   False    0.805281
+ +
+
+
+
+
+Code +
# A common matplotlib/seaborn pattern: create the figure and axes object, pass ax
+# to seaborn for drawing into, and later fine-tune the figure via ax.
+fig, ax = plt.subplots();
+
+red_blue = ["#bf1518", "#397eb7"]
+with sns.color_palette(red_blue):
+    sns.pointplot(data=votes_by_barbie, x = "age", y = "barbie", hue = "is_male", ax=ax)
+
+new_ticks = [i.get_text() for i in ax.get_xticklabels()]
+ax.set_xticks(range(0, len(new_ticks), 10), new_ticks[::10])
+ax.set_title("Preferences by Demographics");
+
+
+
+
+

+
+
+
+
+
    +
  • We see that retirees (in Berkeley) tend to watch Oppenheimer.
  • +
  • We also see that residents who identify as non-male tend to prefer Barbie.
  • +
+
+
+

9.3.3.3 Simple Random Sample

+

Suppose we took a simple random sample (SRS) of the same size as our retiree sample:

+
+
n = len(convenience_sample)
+random_sample = movie.sample(n, replace = False) ## By default, replace = False
+np.mean(random_sample["barbie"])
+
+
np.float64(0.5306514262818729)
+
+
+

This is very close to the actual vote of 0.5302792307692308!

+

It turns out that we can get similar results with a much smaller sample size, say, 800:

+
+
n = 800
+random_sample = movie.sample(n, replace = False)
+
+# Compute the sample average and the resulting relative error
+sample_barbie = np.mean(random_sample["barbie"])
+err = abs(sample_barbie-actual_barbie)/actual_barbie
+
+# We can print output with Markdown formatting too...
+from IPython.display import Markdown
+Markdown(f"**Actual** = {actual_barbie:.4f}, **Sample** = {sample_barbie:.4f}, "
+         f"**Err** = {100*err:.2f}%.")
+
+

Actual = 0.5303, Sample = 0.5300, Err = 0.05%.

+
+
+

We’ll learn how to choose this number when we (re)learn the Central Limit Theorem later in the semester.

+
+
+

9.3.3.4 Quantifying Chance Error

+

In our SRS of size 800, what would be our chance error?

+

Let’s simulate 1000 versions of taking the 800-sized SRS from before:

+
+
nrep = 1000   # number of simulations
+n = 800       # size of our sample
+poll_result = []
+for i in range(0, nrep):
+    random_sample = movie.sample(n, replace = False)
+    poll_result.append(np.mean(random_sample["barbie"]))
+
+
+
+Code +
fig, ax = plt.subplots()
+sns.histplot(poll_result, stat='density', ax=ax)
+ax.axvline(actual_barbie, color="orange", lw=4);
+
+
+
+
+

+
+
+
+
+

What fraction of these simulated samples would have predicted Barbie?

+
+
poll_result = pd.Series(poll_result)
+np.sum(poll_result > 0.5)/1000
+
+
np.float64(0.949)
+
+
+

You can see that the distribution of sample estimates looks roughly Gaussian/normal. Overlaying a KDE:

+
+
+Code +
sns.histplot(poll_result, stat='density', kde=True);
+
+
+
+
+

+
+
+
+
+
+
+
+
+

9.4 Summary

+

Understanding the sampling process is what lets us go from describing the data to understanding the world. Without knowing or assuming something about how the data were collected, there is no connection between the sample and the population; in that case, the dataset alone tells us nothing about the world behind the data.

+ + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/sampling/sampling_files/figure-html/cell-13-output-1.png b/docs/sampling/sampling_files/figure-html/cell-13-output-1.png new file mode 100644 index 000000000..e440ef201 Binary files /dev/null and b/docs/sampling/sampling_files/figure-html/cell-13-output-1.png differ diff --git a/docs/sampling/sampling_files/figure-html/cell-15-output-1.png b/docs/sampling/sampling_files/figure-html/cell-15-output-1.png new file mode 100644 index 000000000..116aa7aa3 Binary files /dev/null and b/docs/sampling/sampling_files/figure-html/cell-15-output-1.png differ diff --git a/docs/sampling/sampling_files/figure-html/cell-9-output-1.png b/docs/sampling/sampling_files/figure-html/cell-9-output-1.png new file mode 100644 index 000000000..3bc39a5cd Binary files /dev/null and b/docs/sampling/sampling_files/figure-html/cell-9-output-1.png differ diff --git a/docs/search.json b/docs/search.json deleted file mode 100644 index ed56e2191..000000000 --- a/docs/search.json +++ /dev/null @@ -1,52 +0,0 @@ -[ - { - "objectID": "index.html", - "href": "index.html", - "title": "Principles and Techniques of Data Science", - "section": "", - "text": "Welcome", - "crumbs": [ - "Welcome" - ] - }, - { - "objectID": "index.html#about-the-course-notes", - "href": "index.html#about-the-course-notes", - "title": "Principles and Techniques of Data Science", - "section": "About the Course Notes", - "text": "About the Course Notes\nThis text offers supplementary resources to accompany lectures presented in the Fall 2024 Edition of the UC Berkeley course Data 100: Principles and Techniques of Data Science.\nNew notes will be added each week to accompany live lectures. See the full calendar of lectures on the course website.\nIf you spot any typos or would like to suggest any changes, please email us at data100.instructors@berkeley.edu.", - "crumbs": [ - "Welcome" - ] - }, - { - "objectID": "intro_lec/introduction.html", - "href": "intro_lec/introduction.html", - "title": "1  Introduction", - "section": "", - "text": "1.1 Data Science Lifecycle\nThe data science lifecycle is a high-level overview of the data science workflow. It’s a cycle of stages that a data scientist should explore as they conduct a thorough analysis of a data-driven problem.\nThere are many variations of the key ideas present in the data science lifecycle. In Data 100, we visualize the stages of the lifecycle using a flow diagram. Notice how there are two entry points.", - "crumbs": [ - "1  Introduction" - ] - }, - { - "objectID": "intro_lec/introduction.html#data-science-lifecycle", - "href": "intro_lec/introduction.html#data-science-lifecycle", - "title": "1  Introduction", - "section": "", - "text": "1.1.1 Ask a Question\nWhether by curiosity or necessity, data scientists constantly ask questions. For example, in the business world, data scientists may be interested in predicting the profit generated by a certain investment. In the field of medicine, they may ask whether some patients are more likely than others to benefit from a treatment.\nPosing questions is one of the primary ways the data science lifecycle begins. It helps to fully define the question. 
Here are some things you should ask yourself before framing a question.\n\nWhat do we want to know?\n\nA question that is too ambiguous may lead to confusion.\n\nWhat problems are we trying to solve?\n\nThe goal of asking a question should be clear in order to justify your efforts to stakeholders.\n\nWhat are the hypotheses we want to test?\n\nThis gives a clear perspective from which to analyze final results.\n\nWhat are the metrics for our success?\n\nThis establishes a clear point to know when to conclude the project.\n\n\n\n\n\n\n\n1.1.2 Obtain Data\nThe second entry point to the lifecycle is by obtaining data. A careful analysis of any problem requires the use of data. Data may be readily available to us, or we may have to embark on a process to collect it. When doing so, it is crucial to ask the following:\n\nWhat data do we have, and what data do we need?\n\nDefine the units of the data (people, cities, points in time, etc.) and what features to measure.\n\nHow will we sample more data?\n\nScrape the web, collect manually, run experiments, etc.\n\nIs our data representative of the population we want to study?\n\nIf our data is not representative of our population of interest, then we can come to incorrect conclusions.\n\n\nKey procedures: data acquisition, data cleaning\n\n\n\n\n\n1.1.3 Understand the Data\nRaw data itself is not inherently useful. It’s impossible to discern all the patterns and relationships between variables without carefully investigating them. Therefore, translating pure data into actionable insights is a key job of a data scientist. For example, we may choose to ask:\n\nHow is our data organized, and what does it contain?\n\nKnowing what the data says about the world helps us better understand the world.\n\nDo we have relevant data?\n\nIf the data we have collected is not useful to the question at hand, then we must collect more data.\n\nWhat are the biases, anomalies, or other issues with the data?\n\nThese can lead to many false conclusions if ignored, so data scientists must always be aware of these issues.\n\nHow do we transform the data to enable effective analysis?\n\nData is not always easy to interpret at first glance, so a data scientist should strive to reveal the hidden insights.\n\n\nKey procedures: exploratory data analysis, data visualization.\n\n\n\n\n\n1.1.4 Understand the World\nAfter observing the patterns in our data, we can begin answering our questions. This may require that we predict a quantity (machine learning) or measure the effect of some treatment (inference).\nFrom here, we may choose to report our results, or possibly conduct more analysis. 
We may not be satisfied with our findings, or our initial exploration may have brought up new questions that require new data.\n\nWhat does the data say about the world?\n\nGiven our models, the data will lead us to certain conclusions about the real world.\n\n\nDoes it answer our questions or accurately solve the problem?\n\nIf our model and data can not accomplish our goals, then we must reform our question, model, or both.\n\n\nHow robust are our conclusions and can we trust the predictions?\n\nInaccurate models can lead to false conclusions.\n\n\nKey procedures: model creation, prediction, inference.", - "crumbs": [ - "1  Introduction" - ] - }, - { - "objectID": "intro_lec/introduction.html#conclusion", - "href": "intro_lec/introduction.html#conclusion", - "title": "1  Introduction", - "section": "1.2 Conclusion", - "text": "1.2 Conclusion\nThe data science lifecycle is meant to be a set of general guidelines rather than a hard set of requirements. In our journey exploring the lifecycle, we’ll cover both the underlying theory and technologies used in data science. By the end of the course, we hope that you start to see yourself as a data scientist.\nWith that, we’ll begin by introducing one of the most important tools in exploratory data analysis: pandas.", - "crumbs": [ - "1  Introduction" - ] - } -] \ No newline at end of file diff --git a/docs/sql_I/images/data_storage.png b/docs/sql_I/images/data_storage.png new file mode 100644 index 000000000..0a4bdc8ad Binary files /dev/null and b/docs/sql_I/images/data_storage.png differ diff --git a/docs/sql_I/images/sql_terminology.png b/docs/sql_I/images/sql_terminology.png new file mode 100644 index 000000000..abf03043e Binary files /dev/null and b/docs/sql_I/images/sql_terminology.png differ diff --git a/docs/sql_I/sql_I.html b/docs/sql_I/sql_I.html new file mode 100644 index 000000000..784c66060 --- /dev/null +++ b/docs/sql_I/sql_I.html @@ -0,0 +1,1757 @@ + + + + + + + + + +20  SQL I – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+ + + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Recognizing situations where we need “bigger” tools for manipulating data
  • +
  • Write basic SQL queries using SELECT, FROM, WHERE, ORDER BY, LIMIT, and OFFSET
  • +
  • Perform aggregations using GROUP BY
  • +
+
+
+
+

So far in the course, we have made our way through the entire data science lifecycle: we learned how to load and explore a dataset, formulate questions, and use the tools of prediction and inference to come up with answers. For the remaining weeks of the semester, we are going to make a second pass through the lifecycle, this time with a different set of tools, ideas, and abstractions.

+
+

20.1 Databases

+

With this goal in mind, let’s go back to the very beginning of the lifecycle. We first started our work in data analysis by looking at the pandas library, which offered us powerful tools to manipulate tabular data stored in (primarily) CSV files. CSVs work well when analyzing relatively small datasets (less than 10GB) that don’t need to be shared across many users. In research and industry, however, data scientists often need to access enormous bodies of data that cannot be easily stored in a CSV format. Collaborating with others when working with CSVs can also be tricky —— a real-world data scientist may run into problems when multiple users try to make modifications or more dire security issues arise regarding who should and should not have access to the data.

+

A database is a large, organized collection of data. Databases are administered by Database Management Systems (DBMS), which are software systems that store, manage, and facilitate access to one or more databases. Databases help mitigate many of the issues that come with using CSVs for data storage: they provide reliable storage that can survive system crashes or disk failures, are optimized to compute on data that does not fit into memory, and contain special data structures to improve performance. Using databases rather than CSVs offers further benefits from the standpoint of data management. A DBMS can apply settings that configure how data is organized, block certain data anomalies (for example, enforcing non-negative weights or ages), and determine who is allowed access to the data. It can also ensure safe concurrent operations where multiple users reading and writing to the database will not lead to fatal errors. Below, you can see the functionality of the different types of data storage and management architectures. In data science, common large-scale DBMS systems used are Google BigQuery, Amazon Redshift, Snowflake, Databricks, Microsoft SQL Server, and more. To learn more about these, consider taking Data 101!

+

+ +

+

As you may have guessed, we can’t use our usual pandas methods to work with data in a database. Instead, we’ll turn to Structured Query Language.

+
+
+

20.2 Intro to SQL

+

Structured Query Language, or SQL (commonly pronounced “sequel,” though this is the subject of fierce debate), is a special programming language designed to communicate with databases, and it is the dominant language/technology for working with data. You may have encountered it in classes like CS 61A or Data C88C before, and you likely will encounter it in the future. It is a language of tables: all inputs and outputs are tables. Unlike Python, it is a declarative programming language – this means that rather than writing the exact logic needed to complete a task, a piece of SQL code “declares” what the desired final output should be and leaves the program to determine what logic should be implemented. This logic differs depending on the SQL code itself and on the system it’s running on (e.g., MongoDB, SQLite, DuckDB). Most systems don’t follow the SQL standard exactly, and every system you work with will be a little different.

+

For the purposes of Data 100, we use SQLite or DuckDB. SQLite is an easy-to-use library that allows users to directly manipulate a database file or an in-memory database with a simplified version of SQL. It’s commonly used to store data for small apps on mobile devices and is optimized for simplicity and speed of simple data tasks. DuckDB is an easy-to-use library that lets you directly manipulate a database file, collection of table formatted files (e.g., CSV), or in-memory pandas DataFrames using a more complete version of SQL. It’s optimized for simplicity and speed of advanced data analysis tasks and is becoming increasingly popular for data analysis tasks on large datasets.
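As a quick taste of DuckDB's pandas integration (a minimal sketch, not part of the course's database setup), a query can run directly against an in-memory DataFrame:

import duckdb
import pandas as pd

df = pd.DataFrame({"name": ["a", "b", "c"], "x": [1, 2, 3]})

# DuckDB can scan the local DataFrame `df` by name and return the result as a new DataFrame.
duckdb.sql("SELECT name, x * 2 AS doubled FROM df").df()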

+

It is important to reiterate that SQL is an entirely different language from Python. However, Python does have special engines that allow us to run SQL code in a Jupyter notebook. While this is typically not how SQL is used outside of an educational setting, we will use this workflow to illustrate how SQL queries are constructed using the tools we’ve already worked with this semester. You will learn more about how to run SQL queries in Jupyter in an upcoming lab and homework.

+

The syntax below will seem unfamiliar to you; for now, just focus on understanding the output displayed. We will clarify the SQL code in a bit.

+

To start, we’ll look at a database called example_duck.db and connect to it using DuckDB.

+
+
+Code +
# Load the SQL Alchemy Python library and DuckDB
+import sqlalchemy
+import duckdb
+
+
+
+
# Load %%sql cell magic
+%load_ext sql
+
+
+
# Connect to the database
+%sql duckdb:///data/example_duck.db --alias duck
+
+

Now that we’re connected, let’s make some queries!

+
+
%%sql
+SELECT * FROM Dragon;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

Thanks to the pandas magic, the resulting return data is displayed in a format almost identical to our pandas tables but without an index.

+
+
+

20.3 Tables and Schema

+

+ +

+

Looking at the Dragon table above, we can see that it contains three columns. The first of these, "name", contains text data. The "year" column contains integer data, with the constraint that year values must be greater than or equal to 2000. The final column, "cute", contains integer data with no restrictions on allowable values.

+

Now, let’s look at the schema of our database. A schema describes the logical structure of a table. Whenever a new table is created, the creator must declare its schema.

+
+
%%sql
+SELECT * 
+FROM sqlite_master
+WHERE type='table'
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + + + +
type | name | tbl_name | rootpage | sql
+
+
+

The summary above displays information about the database; it contains four tables named sqlite_sequence, Dragon, Dish, and Scene. The rightmost column above lists the command that was used to construct each table.

+

Let’s look more closely at the command used to create the Dragon table (the second entry above).

+
CREATE TABLE Dragon (name TEXT PRIMARY KEY,
+                     year INTEGER CHECK (year >= 2000),
+                     cute INTEGER)
+

The statement CREATE TABLE is used to specify the schema of the table – a description of what logic is used to organize the table. Schema follows a set format:

+
    +
  • ColName: the name of a column

  • +
  • DataType: the type of data to be stored in a column. Some of the most common SQL data types are:

    +
      +
    • INT (integers)
    • +
    • FLOAT (floating point numbers)
    • +
    • TEXT (strings)
    • +
    • BLOB (arbitrary data, such as audio/video files)
    • +
    • DATETIME (a date and time)
    • +
  • +
  • Constraint: some restriction on the data to be stored in the column. Common constraints are:

    +
      +
    • CHECK (data must obey a certain condition)
    • +
    • PRIMARY KEY (designate a column as the table’s primary key)
    • +
    • NOT NULL (data cannot be null)
    • +
    • DEFAULT (a default fill value if no specific entry is given)
    • +
  • +
+

Note that different implementations of SQL (e.g., DuckDB, SQLite, MySQL) will support different types. In Data 100, we’ll primarily use DuckDB.

+

Database tables (also referred to as relations) are structured much like DataFrames in pandas. Each row, sometimes called a tuple, represents a single record in the dataset. Each column, sometimes called an attribute or field, describes some feature of the record.

+
+

20.3.1 Primary Keys

+

The primary key is a set of column(s) that uniquely identify each record in the table. In the Dragon table, the "name" column is its primary key that uniquely identifies each entry in the table. Because "name" is the primary key of the table, no two entries in the table can have the same name – a given value of "name" is unique to each dragon. Primary keys are used to ensure data integrity and to optimize data access.
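A primary key can also span multiple columns. As a hypothetical sketch (this table is not part of our example database), a table of course enrollments might identify each record by the combination of student and course:

CREATE TABLE enrollment (
    student_id INTEGER,
    course_id INTEGER,
    grade TEXT,
    PRIMARY KEY (student_id, course_id)   -- no two rows may share the same (student, course) pair
);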

+
+
+

20.3.2 Foreign Keys

+

A foreign key is a column or set of columns that references a primary key in another table. A foreign key constraint ensures that a primary key exists in the referenced table. For example, let’s say we have 2 tables, student and assignment, with the following schemas:

+
CREATE TABLE student (
+    student_id INTEGER PRIMARY KEY,
+    name VARCHAR,
+    email VARCHAR
+);
+
+CREATE TABLE assignment (
+    assignment_id INTEGER PRIMARY KEY,
+    description VARCHAR
+);
+

Note that each table has a primary key that uniquely identifies each student and assignment.

+

Say we want to create the table grade to store the score each student got on each assignment. Naturally, this will depend on the information in student and assignment; we should not be saving a grade for a nonexistent student or a nonexistent assignment. Hence, we can create the columns student_id and assignment_id that reference the student and assignment tables, respectively. This way, we ensure that the data in grade is always consistent with the other tables.

+
CREATE TABLE grade (
+    student_id INTEGER,
+    assignment_id INTEGER,
+    score REAL,
+    FOREIGN KEY (student_id) REFERENCES student(student_id),
+    FOREIGN KEY (assignment_id) REFERENCES assignment(assignment_id)
+);
+
+
+
+

20.4 Basic Queries

+

To extract and manipulate data stored in a SQL table, we will need to familiarize ourselves with the syntax to write pieces of SQL code, which we call queries.

+
+

20.4.1 SELECTing From Tables

+

The basic unit of a SQL query is the SELECT statement. SELECT specifies what columns we would like to extract from a given table. We use FROM to tell SQL the table from which we want to SELECT our data.

+
+
%%sql
+SELECT *
+FROM Dragon;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

In SQL, * means “everything.” The query above grabs all the columns in Dragon and displays them in the outputted table. We can also specify a specific subset of columns to be SELECTed. Notice that the outputted columns appear in the order they were SELECTed.

+
+
%%sql
+SELECT cute, year
+FROM Dragon;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + +
cuteyear
+
+
+

Every SQL query must include both a SELECT and FROM statement. Intuitively, this makes sense —— we know that we’ll want to extract some piece of information from the table; to do so, we also need to indicate what table we want to consider.

+

It is important to note that SQL enforces a strict “order of operations” —— SQL clauses must always follow the same sequence. For example, the SELECT statement must always precede FROM. This means that any SQL query will follow the same structure.

+
SELECT <column list>
+FROM <table>
+[additional clauses]
+

The additional clauses we use depend on the specific task we’re trying to achieve. We may refine our query to filter on a certain condition, aggregate a particular column, or join several tables together. We will spend the rest of this note outlining some useful clauses to build up our understanding of the order of operations.

+
+

20.4.1.1 SQL Style Conventions

+

And just like that, we’ve already written two SQL queries. There are a few things to note in the queries above. Firstly, notice that every “verb” is written in uppercase. It is convention to write SQL operations in capital letters, but your code will run just fine even if you choose to keep things in lowercase. Second, the query above separates each statement with a new line. SQL queries are not impacted by whitespace within the query; this means that SQL code is typically written with a new line after each statement to make things more readable. The semicolon (;) indicates the end of a query. There are some “flavors” of SQL in which a query will not run if no semicolon is present; however, in Data 100, the SQL version we will use works with or without an ending semicolon. Queries in these notes will end with semicolons to build up good habits.

+
+
+

20.4.1.2 Aliasing with AS

+

The AS keyword allows us to give a column a new name (called an alias) after it has been SELECTed. The general syntax is:

+
SELECT column_in_input_table AS new_name_in_output_table
+
+
%%sql
+SELECT cute AS cuteness, year AS birth
+FROM Dragon;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + +
cutenessbirth
+
+
+
+
+

20.4.1.3 Uniqueness with DISTINCT

+

To SELECT only the unique values in a column, we use the DISTINCT keyword. This will cause any duplicate entries in a column to be removed. If we want to find only the unique years in Dragon, without any repeats, we would write:

+
+
%%sql
+SELECT DISTINCT year
+FROM Dragon;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + +
year
+
+
+
+
+
+

20.4.2 Applying WHERE Conditions

+

The WHERE keyword is used to select only some rows of a table, filtered on a given Boolean condition.

+
+
%%sql
+SELECT name, year
+FROM Dragon
+WHERE cute > 0;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + +
nameyear
+
+
+

We can add complexity to the WHERE condition using the keywords AND, OR, and NOT, much like we would in Python.

+
+
%%sql
+SELECT name, year
+FROM Dragon
+WHERE cute > 0 OR year > 2013;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + +
nameyear
+
+
+

To spare ourselves needing to write complicated logical expressions by combining several conditions, we can also filter for entries that are IN a specified list of values. This is similar to the use of in or .isin in Python.

+
+
%%sql
+SELECT name, year
+FROM Dragon
+WHERE name IN ('hiccup', 'puff');
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + +
nameyear
+
+
+
+

20.4.2.1 Strings in SQL

+

In Python, there is no distinction between double "" and single quotes ''. SQL, on the other hand, distinguishes double quotes "" as column names and single quotes '' as strings. For example, we can make the call

+
SELECT "birth weight"
+FROM patient
+WHERE "first name" = 'Joey'
+

to select the column "birth weight" from the patient table and only select rows where the column "first name" is equal to 'Joey'.

+
+
+

20.4.2.2 WHERE WITH NULL Values

+

You may have noticed earlier that our table actually has a missing value. In SQL, missing data is given the special value NULL. NULL behaves in a fundamentally different way from other data types. We can’t use the typical operators (=, >, and <) on NULL values; in fact, NULL = NULL evaluates to NULL rather than True. Instead, we check to see if a value IS or IS NOT NULL.

+
+
%%sql
+SELECT name, cute
+FROM Dragon
+WHERE cute IS NOT NULL;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + +
namecute
+
+
+
+
+
+

20.4.3 Sorting and Restricting Output

+
+

20.4.3.1 Sorting with ORDER BY

+

What if we want the output table to appear in a certain order? The ORDER BY keyword behaves similarly to .sort_values() in pandas.

+
+
%%sql
+SELECT *
+FROM Dragon
+ORDER BY cute;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

By default, ORDER BY will display results in ascending order (ASC) with the lowest values at the top of the table. To sort in descending order, we use the DESC keyword after specifying the column to be used for ordering.

+
+
%%sql
+SELECT *
+FROM Dragon
+ORDER BY cute DESC;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

We can also tell SQL to ORDER BY two columns at once. This will sort the table by the first listed column, then use the values in the second listed column to break any ties.

+
+
%%sql
+SELECT *
+FROM Dragon
+ORDER BY year, cute DESC;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

Note that in this example, year is sorted in ascending order and cute in descending order. If you want year to be ordered in descending order as well, you need to specify year DESC, cute DESC;.

+
+
+

20.4.3.2 LIMIT vs. OFFSET

+

In many instances, we are only concerned with a certain number of rows in the output table (for example, wanting to find the first two dragons in the table). The LIMIT keyword restricts the output to a specified number of rows. It serves a function similar to that of .head() in pandas.

+
+
%%sql
+SELECT *
+FROM Dragon
+LIMIT 2;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

The OFFSET keyword indicates the index at which LIMIT should start. In other words, we can use OFFSET to shift where the LIMITing begins by a specified number of rows. For example, we might care about the dragons that are at positions 2 and 3 in the table.

+
+
%%sql
+SELECT *
+FROM Dragon
+LIMIT 2
+OFFSET 1;
+
+
 * duckdb:///data/example_duck.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

With these keywords in hand, let’s update our SQL order of operations. Remember: every SQL query must list clauses in this order.

+
SELECT <column expression list>
+FROM <table>
+[WHERE <predicate>]
+[ORDER BY <column list>]
+[LIMIT <number of rows>]
+[OFFSET <number of rows>];
+
+
+
+
+

20.5 Summary

+

Let’s summarize what we’ve learned so far. We know that SELECT and FROM are the fundamental building blocks of any SQL query. We can augment these two keywords with additional clauses to refine the data in our output table.

+

Any clauses that we include must follow a strict ordering within the query:

+
SELECT <column list>
+FROM <table>
+[WHERE <predicate>]
+[ORDER BY <column list>]
+[LIMIT <number of rows>]
+[OFFSET <number of rows>]
+

Here, any clause contained in square brackets [ ] is optional —— we only need to use the keyword if it is relevant to the table operation we want to perform. Also note that by convention, we use all caps for keywords in SQL statements and use newlines to make code more readable.

+ + + + +
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/sql_II/images/cats.png b/docs/sql_II/images/cats.png new file mode 100644 index 000000000..090796220 Binary files /dev/null and b/docs/sql_II/images/cats.png differ diff --git a/docs/sql_II/images/cross.png b/docs/sql_II/images/cross.png new file mode 100644 index 000000000..421a0668f Binary files /dev/null and b/docs/sql_II/images/cross.png differ diff --git a/docs/sql_II/images/full.png b/docs/sql_II/images/full.png new file mode 100644 index 000000000..84eb20fef Binary files /dev/null and b/docs/sql_II/images/full.png differ diff --git a/docs/sql_II/images/inner.png b/docs/sql_II/images/inner.png new file mode 100644 index 000000000..ce9830378 Binary files /dev/null and b/docs/sql_II/images/inner.png differ diff --git a/docs/sql_II/images/left.png b/docs/sql_II/images/left.png new file mode 100644 index 000000000..43482170b Binary files /dev/null and b/docs/sql_II/images/left.png differ diff --git a/docs/sql_II/images/multidimensional.png b/docs/sql_II/images/multidimensional.png new file mode 100644 index 000000000..f3e2582fb Binary files /dev/null and b/docs/sql_II/images/multidimensional.png differ diff --git a/docs/sql_II/images/right.png b/docs/sql_II/images/right.png new file mode 100644 index 000000000..53baaeaaa Binary files /dev/null and b/docs/sql_II/images/right.png differ diff --git a/docs/sql_II/images/star.png b/docs/sql_II/images/star.png new file mode 100644 index 000000000..bc9643a26 Binary files /dev/null and b/docs/sql_II/images/star.png differ diff --git a/docs/sql_II/sql_II.html b/docs/sql_II/sql_II.html new file mode 100644 index 000000000..362d8dcdf --- /dev/null +++ b/docs/sql_II/sql_II.html @@ -0,0 +1,1903 @@ + + + + + + + + + +21  SQL II – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+ + + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Perform aggregations using GROUP BY
  • +
  • Introduce the ability to filter groups
  • +
  • Perform data cleaning and text manipulation in SQL
  • +
  • Join data across tables
  • +
+
+
+
+

In this lecture, we’ll continue our work from last time to introduce some advanced SQL syntax.

+

First, let’s load in the basic_examples.db database.

+
+
+Code +
# Load the SQL Alchemy Python library and DuckDB
+import sqlalchemy
+import duckdb
+
+
+
+
# Load %%sql cell magic
+%load_ext sql
+
+
+
# Connect to the database
+%sql duckdb:///data/basic_examples.db --alias basic
+
+
+

21.1 Aggregating with GROUP BY

+

At this point, we’ve seen that SQL offers much of the same functionality that was given to us by pandas. We can extract data from a table, filter it, and reorder it to suit our needs.

+

In pandas, much of our analysis work relied heavily on being able to use .groupby() to aggregate across the rows of our dataset. SQL’s answer to this task is the (very conveniently named) GROUP BY clause. While the outputs of GROUP BY are similar to those of .groupby() —— in both cases, we obtain an output table where some column has been used for grouping —— the syntax and logic used to group data in SQL are fairly different to the pandas implementation.

+

To illustrate GROUP BY, we will consider the Dish table from our database.

+
+
%%sql
+SELECT * 
+FROM Dish;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + + +
name | type | cost
+
+
+

Notice that there are multiple dishes of the same type. What if we wanted to find the total costs of dishes of a certain type? To accomplish this, we would write the following code.

+
+
%%sql
+SELECT type, SUM(cost)
+FROM Dish
+GROUP BY type;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + +
type | sum("cost")
+
+
+

What is going on here? The statement GROUP BY type tells SQL to group the data based on the value contained in the type column (whether a record is an appetizer, entree, or dessert). SUM(cost) sums up the costs of dishes in each type and displays the result in the output table.

+

You may be wondering: why does SUM(cost) come before the command to GROUP BY type? Don’t we need to form groups before we can count the number of entries in each? Remember that SQL is a declarative programming language —— a SQL programmer simply states what end result they would like to see, and leaves the task of figuring out how to obtain this result to SQL itself. This means that SQL queries sometimes don’t follow what a reader sees as a “logical” sequence of thought. Instead, SQL requires that we follow its set order of operations when constructing queries. So long as we follow this order, SQL will handle the underlying logic.

+

In practical terms: our goal with this query was to output the total costs of each type. To communicate this to SQL, we say that we want to SELECT the SUMmed cost values for each type group.
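For comparison, if the Dish table were loaded into a pandas DataFrame, the analogous computation would look like the sketch below; the DataFrame name and the rows shown are made up for illustration.

import pandas as pd

# Hypothetical stand-in for the Dish table, with columns name, type, and cost.
dish = pd.DataFrame({
    "name": ["ravioli", "pork bun", "taco", "edamame", "fries", "ice cream"],
    "type": ["entree", "entree", "entree", "appetizer", "appetizer", "dessert"],
    "cost": [10, 7, 7, 4, 4, 5],
})

# pandas analogue of SELECT type, SUM(cost) FROM Dish GROUP BY type
dish.groupby("type")[["cost"]].sum()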

+

There are many aggregation functions that can be used to aggregate the data contained in each group. Some common examples are:

+
    +
  • COUNT: count the number of rows associated with each group
  • +
  • MIN: find the minimum value of each group
  • +
  • MAX: find the maximum value of each group
  • +
  • SUM: sum across all records in each group
  • +
  • AVG: find the average value of each group
  • +
+

We can easily compute multiple aggregations all at once (a task that was very tricky in pandas).

+
+
%%sql
+SELECT type, SUM(cost), MIN(cost), MAX(name)
+FROM Dish
+GROUP BY type;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + + + +
type | sum("cost") | min("cost") | max("name")
+
+
+

To count the number of rows associated with each group, we use the COUNT keyword. Calling COUNT(*) will compute the total number of rows in each group, including rows with null values. Its pandas equivalent is .groupby().size().

+

Recall the Dragon table from the previous lecture:

+
+
%%sql
+SELECT * FROM Dragon;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + + +
nameyearcute
+
+
+

Notice that COUNT(*) and COUNT(cute) result in different outputs.

+
+
%%sql
+SELECT year, COUNT(*)
+FROM Dragon
+GROUP BY year;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + +
year | count_star()
+
+
+
+
%%sql
+SELECT year, COUNT(cute)
+FROM Dragon
+GROUP BY year;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + +
year | count(cute)
+
+
+
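In pandas terms, the difference between these two queries mirrors .groupby().size() versus .groupby()[col].count(). Here is a small sketch using made-up stand-in data with one missing cute value:

import numpy as np
import pandas as pd

# Made-up stand-in for the Dragon table, including one missing `cute` value.
dragon = pd.DataFrame({
    "name": ["hiccup", "drogon", "dragon 2", "puff", "smaug"],
    "year": [2010, 2011, 2019, 2010, 2011],
    "cute": [10, -100, 0, 100, np.nan],
})

print(dragon.groupby("year").size())            # like COUNT(*): counts every row, nulls included
print(dragon.groupby("year")["cute"].count())   # like COUNT(cute): null values are skipped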

With this definition of GROUP BY in hand, let’s update our SQL order of operations. Remember: every SQL query must list clauses in this order.

+
SELECT <column expression list>
+FROM <table>
+[WHERE <predicate>]
+[GROUP BY <column list>]
+[ORDER BY <column list>]
+[LIMIT <number of rows>]
+[OFFSET <number of rows>];
+

Note that we can use the AS keyword to rename columns during the selection process and that column expressions may include aggregation functions (MAX, MIN, etc.).

+
+
+

21.2 Filtering Groups

+

Now, what if we only want groups that meet a certain condition? HAVING filters groups by applying some condition across all rows in each group. We interpret it as a way to keep only the groups HAVING some condition. Note the difference between WHERE and HAVING: we use WHERE to filter rows, whereas we use HAVING to filter groups. WHERE precedes HAVING in terms of how SQL executes a query.

+

Let’s take a look at the Dish table to see how we can use HAVING. Say we want to group dishes with a cost greater than 4 by type and only keep groups where the max cost is less than 10.

+
+
%%sql
+SELECT type, COUNT(*)
+FROM Dish
+WHERE cost > 4
+GROUP BY type
+HAVING MAX(cost) <  10;
+
+
 * duckdb:///data/basic_examples.db
+Done.
+
+
+ + + + + + + + + +
type | count_star()
+
+
+

Here, we first use WHERE to filter for rows with a cost greater than 4. We then group our values by type before applying the HAVING operator. With HAVING, we can filter our groups based on if the max cost is less than 10.

+
+
+

21.3 Summary: SQL

+

With this definition of GROUP BY and HAVING in hand, let’s update our SQL order of operations. Remember: every SQL query must list clauses in this order.

+
SELECT <column expression list>
+FROM <table>
+[WHERE <predicate>]
+[GROUP BY <column list>]
+[ORDER BY <column list>]
+[LIMIT <number of rows>]
+[OFFSET <number of rows>];
+

Note that we can use the AS keyword to rename columns during the selection process and that column expressions may include aggregation functions (MAX, MIN, etc.).

+
+
+

21.4 EDA in SQL

+

In the last lecture, we mostly worked under the assumption that our data had already been cleaned. However, as we saw in our first pass through the data science lifecycle, we’re very unlikely to be given data that is free of formatting issues. With this in mind, we’ll want to learn how to clean and transform data in SQL.

+

Our typical workflow when working with “big data” is:

+
    +
  1. Use SQL to query data from a database
  2. +
  3. Use Python (with pandas) to analyze this data in detail
  4. +
+

We can, however, still perform simple data cleaning and re-structuring using SQL directly. To do so, we’ll use the Title table from the imdb_duck database, which contains information about movies and actors.

+

Let’s load in the imdb_duck database.

+
+
import os
+os.environ["TQDM_DISABLE"] = "1"
+if os.path.exists("/home/jovyan/shared/sql/imdb_duck.db"):
+    imdbpath = "duckdb:////home/jovyan/shared/sql/imdb_duck.db"
+elif os.path.exists("data/imdb_duck.db"):
+    imdbpath =  "duckdb:///data/imdb_duck.db"
+else:
+    import gdown
+    url = 'https://drive.google.com/uc?id=10tKOHGLt9QoOgq5Ii-FhxpB9lDSQgl1O'
+    output_path = 'data/imdb_duck.db'
+    gdown.download(url, output_path, quiet=False)
+    imdbpath = "duckdb:///data/imdb_duck.db"
+
+
+
from sqlalchemy import create_engine
+imdb_engine = create_engine(imdbpath, connect_args={'read_only': True})
+%sql imdb_engine --alias imdb
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.ParserException) Parser Error: syntax error at or near "imdb_engine"
+[SQL: imdb_engine]
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+

Since we’ll be working with the Title table, let’s take a quick look at what it contains.

+
+
%%sql imdb 
+    
+SELECT *
+FROM Title
+WHERE primaryTitle IN ('Ginny & Georgia', 'What If...?', 'Succession', 'Veep', 'Tenet')
+LIMIT 10;
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.ParserException) Parser Error: syntax error at or near "imdb"
+[SQL: imdb
+    
+SELECT *
+FROM Title
+WHERE primaryTitle IN ('Ginny & Georgia', 'What If...?', 'Succession', 'Veep', 'Tenet')
+LIMIT 10;]
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+
+

21.4.1 Matching Text using LIKE

+

One common task we encountered in our first look at EDA was needing to match string data. For example, we might want to remove entries beginning with the same prefix as part of the data cleaning process.

+

In SQL, we use the LIKE operator to (you guessed it) look for strings that are like a given string pattern.

+
+
%%sql
+SELECT titleType, primaryTitle
+FROM Title
+WHERE primaryTitle LIKE 'Star Wars: Episode I - The Phantom Menace'
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist!
+Did you mean "system.information_schema.tables"?
+LINE 2: FROM Title
+             ^
+[SQL: SELECT titleType, primaryTitle
+FROM Title
+WHERE primaryTitle LIKE 'Star Wars: Episode I - The Phantom Menace']
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+

What if we wanted to find all Star Wars movies? % is the wildcard operator; it means “match any character, any number of times.” This makes it helpful for identifying strings that are similar to our desired pattern, even when we don’t know the full text of what we aim to extract.

+
+
%%sql
+SELECT titleType, primaryTitle
+FROM Title
+WHERE primaryTitle LIKE '%Star Wars%'
+LIMIT 10;
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist!
+Did you mean "system.information_schema.tables"?
+LINE 2: FROM Title
+             ^
+[SQL: SELECT titleType, primaryTitle
+FROM Title
+WHERE primaryTitle LIKE '%Star Wars%'
+LIMIT 10;]
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+

Alternatively, we can use RegEx! DuckDB and most real DBMSs allow for this. Note that here, we have to use the SIMILAR TO operator rather than LIKE.

+
+
%%sql
+SELECT titleType, primaryTitle
+FROM Title
+WHERE primaryTitle SIMILAR TO '.*Star Wars*.'
+LIMIT 10;
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist!
+Did you mean "system.information_schema.tables"?
+LINE 2: FROM Title
+             ^
+[SQL: SELECT titleType, primaryTitle
+FROM Title
+WHERE primaryTitle SIMILAR TO '.*Star Wars*.'
+LIMIT 10;]
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+
+
+

21.4.2 CASTing Data Types

+

A common data cleaning task is converting data to the correct variable type. The CAST keyword is used to generate a new output column. Each entry in this output column is the result of converting the data in an existing column to a new data type. For example, we may wish to convert numeric data stored as a string to an integer.

+
+
%%sql
+SELECT primaryTitle, CAST(runtimeMinutes AS INT)
+FROM Title;
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist!
+Did you mean "system.information_schema.tables"?
+LINE 2: FROM Title;
+             ^
+[SQL: SELECT primaryTitle, CAST(runtimeMinutes AS INT)
+FROM Title;]
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+

We use CAST when SELECTing columns for our output table. In the examples here, we want to SELECT the columns of integer year and runtime data that are created by the CAST.

+

SQL will automatically name a new column according to the command used to SELECT it, which can lead to unwieldy column names. We can rename the CASTed column using the AS keyword.

+
+
%%sql
+SELECT primaryTitle AS title, CAST(runtimeMinutes AS INT) AS minutes, CAST(startYear AS INT) AS year
+FROM Title
+LIMIT 5;
+
+
 * duckdb:///data/basic_examples.db
+(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist!
+Did you mean "system.information_schema.tables"?
+LINE 2: FROM Title
+             ^
+[SQL: SELECT primaryTitle AS title, CAST(runtimeMinutes AS INT) AS minutes, CAST(startYear AS INT) AS year
+FROM Title
+LIMIT 5;]
+(Background on this error at: https://sqlalche.me/e/20/f405)
+
+
+
+
+

21.4.3 Using Conditional Statements with CASE

+

When working with pandas, we often ran into situations where we wanted to generate new columns using some form of conditional statement. For example, say we wanted to describe a film title as “old,” “mid-aged,” or “new,” depending on the year of its release.

+

In SQL, conditional operations are performed using a CASE clause. Conceptually, CASE behaves much like the CAST operation: it creates a new column that we can then SELECT to appear in the output. The syntax for a CASE clause is as follows:

+
CASE WHEN <condition> THEN <value>
+     WHEN <other condition> THEN <other value>
+     ...
+     ELSE <yet another value>
+     END
+

Scanning through the skeleton code above, you can see that the logic is similar to that of an if statement in Python. The conditional statement is first opened by calling CASE. Each new condition is specified by WHEN, with THEN indicating what value should be filled if the condition is met. ELSE specifies the value that should be filled if no other conditions are met. Lastly, END indicates the end of the conditional statement; once END has been called, SQL will continue evaluating the query as usual.

+

Let’s see this in action. In the example below, we give the new column created by the CASE statement the name movie_age.

+
+
%%sql
+/* If a movie was filmed before 1950, it is "old"
+Otherwise, if a movie was filmed before 2000, it is "mid-aged"
+Else, a movie is "new" */
+
+SELECT titleType, startYear,
+CASE WHEN startYear < 1950 THEN 'old'
+     WHEN startYear < 2000 THEN 'mid-aged'
+     ELSE 'new'
+     END AS movie_age
+FROM Title;
+
+
+
+
+
+
+
+

21.5 JOINing Tables

+

At this point, we’re well-versed in using SQL as a tool to clean, manipulate, and transform data in a table. Notice that this sentence referred to one table, specifically. What happens if the data we need is distributed across multiple tables? This is an important consideration when using SQL: recall that we first introduced SQL as a language to query from databases. Databases often store data in a multidimensional structure. In other words, information is stored across several tables, with each table containing a small subset of all the data housed by the database.

+

A common way of organizing a database is by using a star schema. A star schema is composed of two types of tables. A fact table is the central table of the database; it contains the information needed to link entries across several dimension tables, which contain more detailed information about the data.

+

Say we were working with a database about boba offerings in Berkeley. The dimension tables of the database might contain information about tea varieties and boba toppings. The fact table would be used to link this information across the various dimension tables.

+
+

multidimensional

+
+

If we explicitly mark the relationships between tables, we start to see the star-like structure of the star schema.

+
+

star

+
+

To join data across multiple tables, we’ll use the (creatively named) JOIN keyword. We’ll make things easier for now by first considering the simpler cats dataset, which consists of the tables s and t.

+
+

cats

+
+

To perform a join, we amend the FROM clause. You can think of this as saying, “SELECT my data FROM tables that have been JOINed together.”

+

Remember: SQL does not consider newlines or whitespace when interpreting queries. The indentation given in the example below is to help improve readability. If you wish, you can write code that does not follow this formatting.

+
SELECT <column list>
+FROM table_1 
+    JOIN table_2 
+    ON key_1 = key_2;
+

We also need to specify what column from each table should be used to determine matching entries. By defining these keys, we provide SQL with the information it needs to pair rows of data together.

+

The most commonly used type of SQL JOIN is the inner join. It turns out you’re already familiar with what an inner join does, and how it works – this is the type of join we’ve been using in pandas all along! In an inner join, we combine every row in our first table with its matching entry in the second table. If a row from either table does not have a match in the other table, it is omitted from the output.

+
+

inner

+
+

In a cross join, all possible combinations of rows appear in the output table, regardless of whether or not rows share a matching key. Because all rows are joined, even if there is no matching key, it is not necessary to specify what keys to consider in an ON statement. A cross join is also known as a Cartesian product.

+
+

cross

+
+

Conceptually, we can interpret an inner join as a cross join, followed by removing all rows that do not share a matching key. Notice that the output of the inner join above contains all rows of the cross join example that contain a single color across the entire row.

+

In a left outer join, all rows in the left table are kept in the output table. If a row in the right table shares a match with the left table, this row will be kept; otherwise, the rows in the right table are omitted from the output. We can fill in any missing values with NULL.

+
+

left

+
+

A right outer join keeps all rows in the right table. Rows in the left table are only kept if they share a match in the right table. Again, we can fill in any missing values with NULL.

+
+

right

+
+

In a full outer join, all rows that have a match between the two tables are joined together. If a row has no match in the second table, then the values of the columns for that second table are filled with NULL. In other words, a full outer join performs an inner join while still keeping rows that have no match in the other table. This is best understood visually:

+
+

full

+
+

We have kept the same output achieved using an inner join, with the addition of partially null rows for entries in s and t that had no match in the other table.
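For reference, here is a sketch of the syntax for each of these join variants, using the s and t tables from above. The <key> placeholders stand in for whatever shared column the tables are actually joined on.

-- The variants differ only in the JOIN keyword used
SELECT <column list>
FROM s INNER JOIN t ON s.<key> = t.<key>;       -- keep only matching rows

SELECT <column list>
FROM s LEFT JOIN t ON s.<key> = t.<key>;        -- keep every row of s

SELECT <column list>
FROM s RIGHT JOIN t ON s.<key> = t.<key>;       -- keep every row of t

SELECT <column list>
FROM s FULL OUTER JOIN t ON s.<key> = t.<key>;  -- keep every row of both tables

SELECT <column list>
FROM s CROSS JOIN t;                            -- every pairing of rows; no ON clause needed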

+
+

21.5.1 Aliasing in JOINs

+

When joining tables, we often create aliases for table names (similarly to what we did with column names in the last lecture). We do this as it is typically easier to refer to aliases, especially when we are working with long table names. We can even reference columns using aliased table names!

+

Let’s say we want to determine the average rating of various movies. We’ll need to JOIN the Title and Rating tables and can create aliases for both tables.

+
+
%%sql
+
+SELECT primaryTitle, averageRating
+FROM Title AS T INNER JOIN Rating AS R
+ON T.tconst = R.tconst;
+
+
+
+
+

Note that the AS is actually optional! We can create aliases for our tables even without it, but we usually include it for clarity.

+
+
%%sql
+
+SELECT primaryTitle, averageRating
+FROM Title T INNER JOIN Rating R
+ON T.tconst = R.tconst;
+
+
+
+
+
+
+

21.5.2 Common Table Expressions

+

For more sophisticated data problems, the queries can become very complex. Common table expressions (CTEs) allow us to break down these complex queries into more manageable parts. To do so, we create temporary tables corresponding to different aspects of the problem and then reference them in the final query:

+
WITH 
+table_name1 AS ( 
+    SELECT ...
+),
+table_name2 AS ( 
+    SELECT ...
+)
+SELECT ... 
+FROM 
+table_name1, 
+table_name2, ...
+

Let’s say we want to identify the top 10 action movies that are highly rated (with an average rating greater than 7) and popular (having more than 5000 votes), along with the primary actors who are the most popular. We can use CTEs to break this query down into separate problems. Initially, we can filter to find good action movies and prolific actors separately. This way, the final query only needs to join these two intermediate results through the Principal table and order the output.

+
+
%%sql
+WITH 
+good_action_movies AS (
+    SELECT *
+    FROM Title T JOIN Rating R ON T.tconst = R.tconst  
+    WHERE genres LIKE '%Action%' AND averageRating > 7 AND numVotes > 5000
+),
+prolific_actors AS (
+    SELECT N.nconst, primaryName, COUNT(*) as numRoles
+    FROM Name N JOIN Principal P ON N.nconst = P.nconst
+    WHERE category = 'actor'
+    GROUP BY N.nconst, primaryName
+)
+SELECT primaryTitle, primaryName, numRoles, ROUND(averageRating) AS rating
+FROM good_action_movies m, prolific_actors a, principal p
+WHERE p.tconst = m.tconst AND p.nconst = a.nconst
+ORDER BY rating DESC, numRoles DESC
+LIMIT 10;
+
+
+
+
+ + + + +
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/visualization_1/images/bad_distro.png b/docs/visualization_1/images/bad_distro.png new file mode 100644 index 000000000..da18378e1 Binary files /dev/null and b/docs/visualization_1/images/bad_distro.png differ diff --git a/docs/visualization_1/images/box_plot_diagram.png b/docs/visualization_1/images/box_plot_diagram.png new file mode 100644 index 000000000..1da125972 Binary files /dev/null and b/docs/visualization_1/images/box_plot_diagram.png differ diff --git a/docs/visualization_1/images/good_distro.png b/docs/visualization_1/images/good_distro.png new file mode 100644 index 000000000..ee7be0663 Binary files /dev/null and b/docs/visualization_1/images/good_distro.png differ diff --git a/docs/visualization_1/images/histogram_viz.png b/docs/visualization_1/images/histogram_viz.png new file mode 100644 index 000000000..4a50ec4b9 Binary files /dev/null and b/docs/visualization_1/images/histogram_viz.png differ diff --git a/docs/visualization_1/images/line_chart_viz.png b/docs/visualization_1/images/line_chart_viz.png new file mode 100644 index 000000000..bbec9dc15 Binary files /dev/null and b/docs/visualization_1/images/line_chart_viz.png differ diff --git a/docs/visualization_1/images/scatter.png b/docs/visualization_1/images/scatter.png new file mode 100644 index 000000000..3ee8bb834 Binary files /dev/null and b/docs/visualization_1/images/scatter.png differ diff --git a/docs/visualization_1/images/variable_types_vis_1.png b/docs/visualization_1/images/variable_types_vis_1.png new file mode 100644 index 000000000..0409b3cf1 Binary files /dev/null and b/docs/visualization_1/images/variable_types_vis_1.png differ diff --git a/docs/visualization_1/visualization_1.html b/docs/visualization_1/visualization_1.html new file mode 100644 index 000000000..639fd0833 --- /dev/null +++ b/docs/visualization_1/visualization_1.html @@ -0,0 +1,1650 @@ + + + + + + + + + +7  Visualization I – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

7  Visualization I

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Understand the theories behind effective visualizations and start to generate plots of our own with matplotlib and seaborn.
  • +
  • Analyze histograms and identify the skewness, potential outliers, and the mode.
  • +
  • Use boxplot and violinplot to compare two distributions.
  • +
+
+
+
+

In our journey of the data science lifecycle, we have begun to explore the vast world of exploratory data analysis. More recently, we learned how to pre-process data using various data manipulation techniques. As we work towards understanding our data, there is one key component missing in our arsenal — the ability to visualize and discern relationships in existing data.

+

These next two lectures will introduce you to various examples of data visualizations and their underlying theory. In doing so, we’ll motivate their importance in real-world examples with the use of plotting libraries.

+
+

7.1 Visualizations in Data 8 and Data 100 (so far)

+

You’ve likely encountered several forms of data visualizations in your studies. You may remember a few such examples from Data 8: line plots, scatter plots, and histograms. Each of these served a unique purpose. For example, line plots displayed how numerical quantities changed over time, while histograms were useful in understanding a variable’s distribution.

+ ++++ + + + + + + + + + + + + +
Line Chart | Scatter Plot
[example line chart image] | [example scatter plot image]
Histogram
[example histogram image]
+
+
+

7.2 Goals of Visualization

+

Visualizations are useful for a number of reasons. In Data 100, we consider two areas in particular:

+
    +
  1. To broaden your understanding of the data. Summarizing trends visually before in-depth analysis is a key part of exploratory data analysis. Creating these graphs is a lightweight, iterative and flexible process that helps us investigate relationships between variables.
  2. +
  3. To communicate results/conclusions to others. These visualizations are highly editorial, selective, and fine-tuned to achieve a communications goal, so be thoughtful and careful about their clarity, accessibility, and necessary context.
  4. +
+

Altogether, these goals emphasize the fact that visualizations aren’t a matter of making “pretty” pictures; we need to do a lot of thinking about what stylistic choices communicate ideas most effectively.

+

This course note will focus on the first half of visualization topics in Data 100. The goal here is twofold: first, to understand how to choose the “right” plot depending on the variable types involved; second, to learn how to generate these plots using code.

+
+
+

7.3 An Overview of Distributions

+

A distribution describes both the set of values that a single variable can take and the frequency of unique values in a single variable. For example, if we’re interested in the distribution of students across Data 100 discussion sections, the set of possible values is a list of discussion sections (10-11am, 11-12pm, etc.), and the frequency that each of those values occurs is the number of students enrolled in each section. In other words, we’re interested in how a variable is distributed across its possible values. Therefore, distributions must satisfy two properties:

+
    +
  1. The total frequency of all categories must sum to 100%
  2. +
  3. Total count should sum to the total number of datapoints if we’re using raw counts.
  4. +
+ ++++ + + + + + + + + + + + + + + + + +
Not a Valid Distribution | Valid Distribution
bad_distro | good_distro
This is not a valid distribution since individuals can be associated with more than one category, and the bar values are measured in minutes rather than proportions. | This example satisfies the two properties of distributions, so it is a valid distribution.
+
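As a quick illustration of these two properties, here is a minimal sketch using a small, hypothetical Series of discussion section assignments (the section labels below are made up for this example):

import pandas as pd

# Hypothetical section assignments for six students
sections = pd.Series(["10-11am", "10-11am", "11-12pm", "11-12pm", "11-12pm", "12-1pm"])

counts = sections.value_counts()                     # raw counts sum to the number of students
proportions = sections.value_counts(normalize=True)  # proportions sum to 1 (i.e., 100%)

print(counts.sum(), proportions.sum())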
+
+

7.4 Variable Types Should Inform Plot Choice

+

Different plots are more or less suited for displaying particular types of variables, laid out in the diagram below:

+
+ +
+

The first step of any visualization is to identify the type(s) of variables we’re working with. From here, we can select an appropriate plot type:

+
+
+

7.5 Qualitative Variables: Bar Plots

+

A bar plot is one of the most common ways of displaying the distribution of a qualitative (categorical) variable. The length of each bar encodes the frequency of a category; the width encodes no useful information. The color could indicate a sub-category, but this is not necessarily the case.

+

Let’s contextualize this in an example. We will use the World Bank dataset (wb) in our analysis.

+
+
+Code +
import pandas as pd
+import numpy as np
+
+wb = pd.read_csv("data/world_bank.csv", index_col=0)
+wb.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
[wb.head() preview: columns include Continent, Country, and 45 World Bank development indicators. The first five rows are Algeria, Angola, Benin, Botswana, and Burundi, all in Africa.]
+ +

5 rows × 47 columns

+
+
+
+

We can visualize the distribution of the Continent column using a bar plot. There are a few ways to do this.

+
+

7.5.1 Plotting in Pandas

+
+
wb['Continent'].value_counts().plot(kind='bar');
+
+
+
+

+
+
+
+
+

Recall that .value_counts() returns a Series with the total count of each unique value. We call .plot(kind='bar') on this result to visualize these counts as a bar plot.

+

Plotting methods in pandas are the least preferred and not supported in Data 100, as their functionality is limited. Instead, future examples will focus on other libraries built specifically for visualizing data. The most well-known library here is matplotlib.

+
+
+

7.5.2 Plotting in Matplotlib

+
+
import matplotlib.pyplot as plt # matplotlib is typically given the alias plt
+
+continent = wb['Continent'].value_counts()
+plt.bar(continent.index, continent)
+plt.xlabel('Continent')
+plt.ylabel('Count');
+
+
+
+

+
+
+
+
+

While more code is required to achieve the same result, matplotlib is often used over pandas for its ability to plot more complex visualizations, some of which are discussed shortly.

+

However, note how we needed to label the axes with plt.xlabel and plt.ylabel, as matplotlib does not support automatic axis labeling. To get around these inconveniences, we can use a more efficient plotting library: seaborn.

+
+
+

7.5.3 Plotting in Seaborn

+
+
import seaborn as sns # seaborn is typically given the alias sns
+sns.countplot(data = wb, x = 'Continent');
+
+
+
+

+
+
+
+
+

In contrast to matplotlib, the general structure of a seaborn call involves passing in an entire DataFrame, and then specifying what column(s) to plot. seaborn.countplot both counts and visualizes the number of unique values in a given column. This column is specified by the x argument to sns.countplot, while the DataFrame is specified by the data argument.

+

For the vast majority of visualizations, seaborn is far more concise and aesthetically pleasing than matplotlib. However, the color scheme of this particular bar plot is arbitrary - it encodes no additional information about the categories themselves. This is not always true; color may signify meaningful detail in other visualizations. We’ll explore this more in-depth during the next lecture.

+

By now, you’ll have noticed that each of these plotting libraries has a very different syntax. As with pandas, we’ll teach you the important methods in matplotlib and seaborn, but you’ll learn more through documentation.

+
    +
  1. Matplotlib Documentation
  2. +
  3. Seaborn Documentation
  4. +
+
+
+
+

7.6 Distributions of Quantitative Variables

+

Revisiting our example with the wb DataFrame, let’s plot the distribution of Gross national income per capita.

+
+
+Code +
wb.head(5)
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
[wb.head() preview: columns include Continent, Country, and 45 World Bank development indicators. The first five rows are Algeria, Angola, Benin, Botswana, and Burundi, all in Africa.]
+ +

5 rows × 47 columns

+
+
+
+

How should we define our categories for this variable? In the previous example, these were a few unique values of the Continent column. If we use similar logic here, our categories are the different numerical values contained in the Gross national income per capita column.

+

Under this assumption, let’s plot this distribution using the seaborn.countplot function.

+
+
sns.countplot(data = wb, x = 'Gross national income per capita, Atlas method: $: 2016');
+
+
+
+

+
+
+
+
+

What happened? A bar plot (either plt.bar or sns.countplot) will create a separate bar for each unique value of a variable. With a continuous variable, we may not have a finite number of possible values, which can lead to situations like above where we would need many, many bars to display each unique value.

+

Specifically, we can say this plot suffers from overplotting, as we are unable to interpret it and gain any meaningful insight.

+

Rather than bar plots, to visualize the distribution of a continuous variable, we use one of the following types of plots:

+
    +
  • Histogram
  • +
  • Box plot
  • +
  • Violin plot
  • +
+
+

7.6.1 Box Plots and Violin Plots

+

Box plots and violin plots are two very similar kinds of visualizations. Both display the distribution of a variable using information about quartiles.

+

In a box plot, the width of the box at any point does not encode meaning. In a violin plot, the width of the plot indicates the density of the distribution at each possible value.

+
+
sns.boxplot(data=wb, y='Gross national income per capita, Atlas method: $: 2016');
+
+
+
+

+
+
+
+
+
+
sns.violinplot(data=wb, y="Gross national income per capita, Atlas method: $: 2016");
+
+
+
+

+
+
+
+
+

A quartile represents a 25% portion of the data. We say that:

+
    +
  • The first quartile (Q1) represents the 25th percentile – 25% of the data is smaller than or equal to the first quartile.
  • +
  • The second quartile (Q2) represents the 50th percentile, also known as the median – 50% of the data is smaller than or equal to the second quartile.
  • +
  • The third quartile (Q3) represents the 75th percentile – 75% of the data is smaller than or equal to the third quartile.
  • +
+

This means that the middle 50% of the data lies between the first and third quartiles. This is demonstrated in the histogram below. The three quartiles are marked with red vertical bars.

+
+
+Code +
gdp = wb['Gross domestic product: % growth : 2016']
+gdp = gdp[~gdp.isna()]
+
+q1, q2, q3 = np.percentile(gdp, [25, 50, 75])
+
+wb_quartiles = wb.copy()
+wb_quartiles['category'] = None
+wb_quartiles.loc[(wb_quartiles['Gross domestic product: % growth : 2016'] < q1) | (wb_quartiles['Gross domestic product: % growth : 2016'] > q3), 'category'] = 'Outside of the middle 50%'
+wb_quartiles.loc[(wb_quartiles['Gross domestic product: % growth : 2016'] > q1) & (wb_quartiles['Gross domestic product: % growth : 2016'] < q3), 'category'] = 'In the middle 50%'
+
+sns.histplot(wb_quartiles, x="Gross domestic product: % growth : 2016", hue="category")
+sns.rugplot([q1, q2, q3], c="firebrick", lw=6, height=0.1);
+
+
+
+
+

+
+
+
+
+

In a box plot, the lower extent of the box lies at Q1, while the upper extent of the box lies at Q3. The horizontal line in the middle of the box corresponds to Q2 (equivalently, the median).

+
+
sns.boxplot(data=wb, y='Gross domestic product: % growth : 2016');
+
+
+
+

+
+
+
+
+

The whiskers of a box-plot are the two points that lie at the [\(1^{st}\) Quartile \(-\) (\(1.5\times\) IQR)], and the [\(3^{rd}\) Quartile \(+\) (\(1.5\times\) IQR)]. They are the lower and upper ranges of “normal” data (the points excluding outliers).
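As a rough check, here is a short sketch that reuses the gdp, q1, and q3 values computed above to calculate these whisker locations and count how many datapoints fall outside them:

# Whisker locations for the GDP growth data, using the 1.5 * IQR rule
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Points beyond the whiskers are the ones drawn individually as outliers in the box plot
outliers = gdp[(gdp < lower) | (gdp > upper)]
print(f"Whiskers at [{lower:.2f}, {upper:.2f}]; {len(outliers)} points lie outside them")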

+

The different forms of information contained in a box plot can be summarised as follows:

+
+ +
+

A violin plot displays quartile information, albeit a bit more subtly through smoothed density curves. Look closely at the center vertical bar of the violin plot below; the three quartiles and “whiskers” are still present!

+
+
sns.violinplot(data=wb, y='Gross domestic product: % growth : 2016');
+
+
+
+

+
+
+
+
+
+
+

7.6.2 Side-by-Side Box and Violin Plots

+

Plotting side-by-side box or violin plots allows us to compare distributions across different categories. In other words, they enable us to plot both a qualitative variable and a quantitative continuous variable in one visualization.

+

With seaborn, we can easily create side-by-side plots by specifying both an x and y column.

+
+
sns.boxplot(data=wb, x="Continent", y='Gross domestic product: % growth : 2016');
+
+
+
+

+
+
+
+
+
+
+

7.6.3 Histograms

+

You are likely familiar with histograms from Data 8. A histogram collects continuous data into bins, then plots this binned data. Each bin reflects the density of datapoints with values that lie between the left and right ends of the bin; in other words, the area of each bin is proportional to the percentage of datapoints it contains.

+
+

7.6.3.1 Plotting Histograms

+

Below, we plot a histogram using matplotlib and seaborn. Which graph do you prefer?

+
+
# The `edgecolor` argument controls the color of the bin edges
+gni = wb["Gross national income per capita, Atlas method: $: 2016"]
+plt.hist(gni, density=True, edgecolor="white")
+
+# Add labels
+plt.xlabel("Gross national income per capita")
+plt.ylabel("Density")
+plt.title("Distribution of gross national income per capita");
+
+
+
+

+
+
+
+
+
+
sns.histplot(data=wb, x="Gross national income per capita, Atlas method: $: 2016", stat="density")
+plt.title("Distribution of gross national income per capita");
+
+
+
+

+
+
+
+
+
+
+

7.6.3.2 Overlaid Histograms

+

We can overlay histograms (or density curves) to compare distributions across qualitative categories.

+

The hue parameter of sns.histplot specifies the column that should be used to determine the color of each category. hue can be used in many seaborn plotting functions.

+

Notice that the resulting plot includes a legend describing which color corresponds to each hemisphere – a legend should always be included if color is used to encode information in a visualization!

+
+
# Create a new variable to store the hemisphere in which each country is located
+north = ["Asia", "Europe", "N. America"]
+south = ["Africa", "Oceania", "S. America"]
+wb.loc[wb["Continent"].isin(north), "Hemisphere"] = "Northern"
+wb.loc[wb["Continent"].isin(south), "Hemisphere"] = "Southern"
+
+
+
sns.histplot(data=wb, x="Gross national income per capita, Atlas method: $: 2016", hue="Hemisphere", stat="density")
+plt.title("Distribution of gross national income per capita");
+
+
+
+

+
+
+
+
+

Again, each bin of a histogram is scaled such that its area is proportional to the percentage of all datapoints that it contains.

+
+
densities, bins, _ = plt.hist(gni, density=True, edgecolor="white", bins=5)
+plt.xlabel("Gross national income per capita")
+plt.ylabel("Density")
+
+print(f"First bin has width {bins[1]-bins[0]} and height {densities[0]}")
+print(f"This corresponds to {bins[1]-bins[0]} * {densities[0]} = {(bins[1]-bins[0])*densities[0]*100}% of the data")
+
+
First bin has width 16410.0 and height 4.7741589911386953e-05
+This corresponds to 16410.0 * 4.7741589911386953e-05 = 78.343949044586% of the data
+
+
+
+
+

+
+
+
+
+
+
+

7.6.3.3 Evaluating Histograms

+

Histograms allow us to assess a distribution by their shape. There are a few properties of histograms we can analyze:

+
    +
  1. Skewness and Tails +
      +
    • Skewed left vs skewed right
    • +
    • Left tail vs right tail
    • +
  2. +
  3. Outliers +
      +
    • Using percentiles
    • +
  4. +
  5. Modes +
      +
    • Most commonly occurring data
    • +
  6. +
+
+
7.6.3.3.1 Skewness and Tails
+

The skew of a histogram describes the direction in which its “tail” extends.

  • A distribution with a long right tail is skewed right (such as Gross national income per capita). In a right-skewed distribution, the few large outliers “pull” the mean to the right of the median.

+
+
sns.histplot(data = wb, x = 'Gross national income per capita, Atlas method: $: 2016', stat = 'density');
+plt.title('Distribution with a long right tail')
+
+
Text(0.5, 1.0, 'Distribution with a long right tail')
+
+
+
+
+

+
+
+
+
+
    +
  • A distribution with a long left tail is skewed left (such as Access to an improved water source). In a left-skewed distribution, the few small outliers “pull” the mean to the left of the median.
  • +
+

In the case where a distribution has equal-sized right and left tails, it is symmetric. The mean is approximately equal to the median. Think of the mean as the balancing point of the distribution.
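We can check this “pulling” effect directly. The small sketch below compares the mean and median of the right-skewed and left-skewed columns used in this section:

# Right-skewed: the mean should exceed the median; left-skewed: the mean should fall below it
right_skewed = wb["Gross national income per capita, Atlas method: $: 2016"]
left_skewed = wb["Access to an improved water source: % of population: 2015"]

print(f"GNI per capita: mean = {right_skewed.mean():.1f}, median = {right_skewed.median():.1f}")
print(f"Water access:   mean = {left_skewed.mean():.1f}, median = {left_skewed.median():.1f}")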

+
+
sns.histplot(data = wb, x = 'Access to an improved water source: % of population: 2015', stat = 'density');
+plt.title('Distribution with a long left tail')
+
+
Text(0.5, 1.0, 'Distribution with a long left tail')
+
+
+
+
+

+
+
+
+
+
+
+
7.6.3.3.2 Outliers
+

Loosely speaking, an outlier is defined as a data point that lies an abnormally large distance away from other values. Let’s make this more concrete. As you may have observed in the box plot infographic earlier, we define outliers to be the data points that fall beyond the whiskers: specifically, values that are less than [\(1^{st}\) Quartile \(-\) (\(1.5\times\) IQR)] or greater than [\(3^{rd}\) Quartile \(+\) (\(1.5\times\) IQR)].

+
+
+
7.6.3.3.3 Modes
+

In Data 100, we describe a “mode” of a histogram as a peak in the distribution. Often, however, it is difficult to determine what counts as its own “peak.” For example, the number of peaks in the distribution of HIV rates across different countries varies depending on the number of histogram bins we plot.

+

If we set the number of bins to 5, the distribution appears unimodal.

+
+
# Rename the very long column name for convenience
+wb = wb.rename(columns={'Antiretroviral therapy coverage: % of people living with HIV: 2015':"HIV rate"})
+# With 5 bins, it seems that there is only one peak
+sns.histplot(data=wb, x="HIV rate", stat="density", bins=5)
+plt.title("5 histogram bins");
+
+
+
+

+
+
+
+
+
+
# With 10 bins, there seem to be two peaks
+
+sns.histplot(data=wb, x="HIV rate", stat="density", bins=10)
+plt.title("10 histogram bins");
+
+
+
+

+
+
+
+
+
+
# And with 20 bins, it becomes hard to say what counts as a "peak"!
+
+sns.histplot(data=wb, x ="HIV rate", stat="density", bins=20)
+plt.title("20 histogram bins");
+
+
+
+

+
+
+
+
+

In part, it is these ambiguities that motivate us to consider using Kernel Density Estimation (KDE), which we will explore more in the next lecture.

+ + +
+
+
+
+ +
+ + +
+ + + + + \ No newline at end of file diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-10-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-10-output-1.png new file mode 100644 index 000000000..149c1a50c Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-10-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-11-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-11-output-1.png new file mode 100644 index 000000000..782a8892e Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-11-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-12-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-12-output-1.png new file mode 100644 index 000000000..c49423440 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-12-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-13-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-13-output-1.png new file mode 100644 index 000000000..2ccb2f586 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-13-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-14-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-14-output-1.png new file mode 100644 index 000000000..972ec6499 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-14-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-15-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-15-output-1.png new file mode 100644 index 000000000..f7233df13 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-15-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-17-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-17-output-1.png new file mode 100644 index 000000000..4eabe260e Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-17-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-18-output-2.png b/docs/visualization_1/visualization_1_files/figure-html/cell-18-output-2.png new file mode 100644 index 000000000..0eca3fb68 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-18-output-2.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-19-output-2.png b/docs/visualization_1/visualization_1_files/figure-html/cell-19-output-2.png new file mode 100644 index 000000000..ff2e07709 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-19-output-2.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-20-output-2.png b/docs/visualization_1/visualization_1_files/figure-html/cell-20-output-2.png new file mode 100644 index 000000000..77c3d34a7 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-20-output-2.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-21-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-21-output-1.png new file mode 100644 index 000000000..c9b1611d2 Binary files /dev/null and 
b/docs/visualization_1/visualization_1_files/figure-html/cell-21-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-22-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-22-output-1.png new file mode 100644 index 000000000..13b3e0ea1 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-22-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-23-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-23-output-1.png new file mode 100644 index 000000000..eefabddfe Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-23-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-3-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-3-output-1.png new file mode 100644 index 000000000..7ce5b2df4 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-3-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-4-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-4-output-1.png new file mode 100644 index 000000000..d33747cc6 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-4-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-5-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-5-output-1.png new file mode 100644 index 000000000..5d6f07ab3 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-5-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-7-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-7-output-1.png new file mode 100644 index 000000000..27dc887de Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-7-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-8-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-8-output-1.png new file mode 100644 index 000000000..f21ef5460 Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-8-output-1.png differ diff --git a/docs/visualization_1/visualization_1_files/figure-html/cell-9-output-1.png b/docs/visualization_1/visualization_1_files/figure-html/cell-9-output-1.png new file mode 100644 index 000000000..5af52bc2b Binary files /dev/null and b/docs/visualization_1/visualization_1_files/figure-html/cell-9-output-1.png differ diff --git a/docs/visualization_2/images/boxcar_kernel.png b/docs/visualization_2/images/boxcar_kernel.png new file mode 100644 index 000000000..8d652b1e6 Binary files /dev/null and b/docs/visualization_2/images/boxcar_kernel.png differ diff --git a/docs/visualization_2/images/bulge.png b/docs/visualization_2/images/bulge.png new file mode 100644 index 000000000..304f40f14 Binary files /dev/null and b/docs/visualization_2/images/bulge.png differ diff --git a/docs/visualization_2/images/gaussian_0.1.png b/docs/visualization_2/images/gaussian_0.1.png new file mode 100644 index 000000000..5a71d3cc5 Binary files /dev/null and b/docs/visualization_2/images/gaussian_0.1.png differ diff --git a/docs/visualization_2/images/gaussian_1.png b/docs/visualization_2/images/gaussian_1.png new file mode 100644 index 000000000..e51846be2 Binary files /dev/null and 
b/docs/visualization_2/images/gaussian_1.png differ diff --git a/docs/visualization_2/images/gaussian_10.png b/docs/visualization_2/images/gaussian_10.png new file mode 100644 index 000000000..45d1974d3 Binary files /dev/null and b/docs/visualization_2/images/gaussian_10.png differ diff --git a/docs/visualization_2/images/gaussian_2.png b/docs/visualization_2/images/gaussian_2.png new file mode 100644 index 000000000..6357afff5 Binary files /dev/null and b/docs/visualization_2/images/gaussian_2.png differ diff --git a/docs/visualization_2/images/gaussian_kernel.png b/docs/visualization_2/images/gaussian_kernel.png new file mode 100644 index 000000000..8be7f2dcd Binary files /dev/null and b/docs/visualization_2/images/gaussian_kernel.png differ diff --git a/docs/visualization_2/images/good_viz_scale_1.png b/docs/visualization_2/images/good_viz_scale_1.png new file mode 100644 index 000000000..4576b61e1 Binary files /dev/null and b/docs/visualization_2/images/good_viz_scale_1.png differ diff --git a/docs/visualization_2/images/good_viz_scale_2.png b/docs/visualization_2/images/good_viz_scale_2.png new file mode 100644 index 000000000..ccbda9388 Binary files /dev/null and b/docs/visualization_2/images/good_viz_scale_2.png differ diff --git a/docs/visualization_2/images/horizontal.png b/docs/visualization_2/images/horizontal.png new file mode 100644 index 000000000..afcfa4856 Binary files /dev/null and b/docs/visualization_2/images/horizontal.png differ diff --git a/docs/visualization_2/images/jet_3_images.png b/docs/visualization_2/images/jet_3_images.png new file mode 100644 index 000000000..1067c77c7 Binary files /dev/null and b/docs/visualization_2/images/jet_3_images.png differ diff --git a/docs/visualization_2/images/jet_colormap.png b/docs/visualization_2/images/jet_colormap.png new file mode 100644 index 000000000..93d07c106 Binary files /dev/null and b/docs/visualization_2/images/jet_colormap.png differ diff --git a/docs/visualization_2/images/jet_four_by_four.png b/docs/visualization_2/images/jet_four_by_four.png new file mode 100644 index 000000000..a46062b04 Binary files /dev/null and b/docs/visualization_2/images/jet_four_by_four.png differ diff --git a/docs/visualization_2/images/jet_perceptually_uniform.png b/docs/visualization_2/images/jet_perceptually_uniform.png new file mode 100644 index 000000000..b0490ed8f Binary files /dev/null and b/docs/visualization_2/images/jet_perceptually_uniform.png differ diff --git a/docs/visualization_2/images/kde_function.png b/docs/visualization_2/images/kde_function.png new file mode 100644 index 000000000..392f8656a Binary files /dev/null and b/docs/visualization_2/images/kde_function.png differ diff --git a/docs/visualization_2/images/linearize.png b/docs/visualization_2/images/linearize.png new file mode 100644 index 000000000..14eec3a92 Binary files /dev/null and b/docs/visualization_2/images/linearize.png differ diff --git a/docs/visualization_2/images/male_female_earnings_barplot.png b/docs/visualization_2/images/male_female_earnings_barplot.png new file mode 100644 index 000000000..425ceb383 Binary files /dev/null and b/docs/visualization_2/images/male_female_earnings_barplot.png differ diff --git a/docs/visualization_2/images/male_female_earnings_scatterplot.png b/docs/visualization_2/images/male_female_earnings_scatterplot.png new file mode 100644 index 000000000..827631a08 Binary files /dev/null and b/docs/visualization_2/images/male_female_earnings_scatterplot.png differ diff --git a/docs/visualization_2/images/markings_viz.png 
b/docs/visualization_2/images/markings_viz.png new file mode 100644 index 000000000..a68e77643 Binary files /dev/null and b/docs/visualization_2/images/markings_viz.png differ diff --git a/docs/visualization_2/images/mutli_dim_encodings.png b/docs/visualization_2/images/mutli_dim_encodings.png new file mode 100644 index 000000000..67ede5ee6 Binary files /dev/null and b/docs/visualization_2/images/mutli_dim_encodings.png differ diff --git a/docs/visualization_2/images/revealed_viz.png b/docs/visualization_2/images/revealed_viz.png new file mode 100644 index 000000000..a5cbf2d83 Binary files /dev/null and b/docs/visualization_2/images/revealed_viz.png differ diff --git a/docs/visualization_2/images/rugplot_encoding.png b/docs/visualization_2/images/rugplot_encoding.png new file mode 100644 index 000000000..e568644eb Binary files /dev/null and b/docs/visualization_2/images/rugplot_encoding.png differ diff --git a/docs/visualization_2/images/small_multiples.png b/docs/visualization_2/images/small_multiples.png new file mode 100644 index 000000000..d624de378 Binary files /dev/null and b/docs/visualization_2/images/small_multiples.png differ diff --git a/docs/visualization_2/images/tukey_mosteller.png b/docs/visualization_2/images/tukey_mosteller.png new file mode 100644 index 000000000..6c322a019 Binary files /dev/null and b/docs/visualization_2/images/tukey_mosteller.png differ diff --git a/docs/visualization_2/images/unrevealed_viz.png b/docs/visualization_2/images/unrevealed_viz.png new file mode 100644 index 000000000..f371ed74d Binary files /dev/null and b/docs/visualization_2/images/unrevealed_viz.png differ diff --git a/docs/visualization_2/images/viridis_colormap.png b/docs/visualization_2/images/viridis_colormap.png new file mode 100644 index 000000000..37496838f Binary files /dev/null and b/docs/visualization_2/images/viridis_colormap.png differ diff --git a/docs/visualization_2/images/viridis_perceptually_uniform.png b/docs/visualization_2/images/viridis_perceptually_uniform.png new file mode 100644 index 000000000..266f869ec Binary files /dev/null and b/docs/visualization_2/images/viridis_perceptually_uniform.png differ diff --git a/docs/visualization_2/images/wrong_scale_viz.png b/docs/visualization_2/images/wrong_scale_viz.png new file mode 100644 index 000000000..c6cda3d97 Binary files /dev/null and b/docs/visualization_2/images/wrong_scale_viz.png differ diff --git a/docs/visualization_2/visualization_2.html b/docs/visualization_2/visualization_2.html new file mode 100644 index 000000000..dc4bd3087 --- /dev/null +++ b/docs/visualization_2/visualization_2.html @@ -0,0 +1,1958 @@ + + + + + + + + + +8  Visualization II – Principles and Techniques of Data Science + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+
+ +
+ +
+ + +
+ + + +
+ +
+
+

8  Visualization II

+
+ + + +
+ + + + +
+ + + +
+ + +
+
+
+ +
+
+Learning Outcomes +
+
+
+
+
+
    +
  • Understanding KDE for plotting distributions and estimating density curves.
  • +
  • Using transformations to analyze the relationship between two variables.
  • +
  • Evaluating the quality of a visualization based on visualization theory concepts.
  • +
+
+
+
+
+

8.1 Kernel Density Estimation

+

Often, we want to identify general trends across a distribution, rather than focus on detail. Smoothing a distribution helps generalize the structure of the data and eliminate noise.

+
+

8.1.1 KDE Theory

+

A kernel density estimate (KDE) is a smooth, continuous function that approximates the distribution of a variable. It allows us to represent general trends in the distribution without focusing on the details, which is useful for analyzing the broad structure of a dataset.

+

More formally, a KDE attempts to approximate the underlying probability distribution from which our dataset was drawn. You may have encountered the idea of a probability distribution in your other classes; if not, we’ll discuss it at length in the next lecture. For now, you can think of a probability distribution as a description of how likely it is for us to sample a particular value in our dataset.

+

A KDE curve estimates the probability density function of a random variable. Consider the example below, where we have used sns.displot to plot both a histogram (containing the data points we actually collected) and a KDE curve (representing the approximated probability distribution from which this data was drawn) using data from the World Bank dataset (wb).

+
+
+Code +
import pandas as pd
+import numpy as np
+import matplotlib.pyplot as plt
+import seaborn as sns
+
+wb = pd.read_csv("data/world_bank.csv", index_col=0)
+wb = wb.rename(columns={'Antiretroviral therapy coverage: % of people living with HIV: 2015':"HIV rate",
+                       'Gross national income per capita, Atlas method: $: 2016':'gni'})
+wb.head()
+
+
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
[wb.head() preview: columns include Continent, Country, and 45 World Bank development indicators. The first five rows are Algeria, Angola, Benin, Botswana, and Burundi, all in Africa.]
+ +

5 rows × 47 columns

+
+
+
+
+
import seaborn as sns
+import matplotlib.pyplot as plt
+
+sns.displot(data = wb, x = 'HIV rate', \
+                       kde = True, stat = "density")
+
+plt.title("Distribution of HIV rates");
+
+
+
+

+
+
+
+
+

Notice that the smooth KDE curve is higher when the histogram bins are taller. You can think of the height of the KDE curve as representing how “probable” it is that we randomly sample a datapoint with the corresponding value. This intuitively makes sense – if we have already collected more datapoints with a particular value (resulting in a tall histogram bin), it is more likely that, if we randomly sample another datapoint, we will sample one with a similar value (resulting in a high KDE curve).

+

The area under a probability density function should always integrate to 1, representing the fact that the total probability of a distribution should always sum to 100%. Hence, a KDE curve will always have an area under the curve of 1.

+
+
+

8.1.2 Constructing a KDE

+

We perform kernel density estimation using three steps.

+
    +
  1. Place a kernel at each datapoint.
  2. +
  3. Normalize the kernels to have a total area of 1 (across all kernels).
  4. +
  5. Sum the normalized kernels.
  6. +
+

We’ll explain what a “kernel” is momentarily.

+

To make things simpler, let’s construct a KDE for a small, artificially generated dataset of 5 datapoints: \([2.2, 2.8, 3.7, 5.3, 5.7]\). In the plot below, each vertical bar represents one data point.

+
+
+Code +
data = [2.2, 2.8, 3.7, 5.3, 5.7]
+
+sns.rugplot(data, height=0.3)
+
+plt.xlabel("Data")
+plt.ylabel("Density")
+plt.xlim(-3, 10)
+plt.ylim(0, 0.5);
+
+
+
+
+

+
+
+
+
+

Our goal is to create the following KDE curve, which was generated automatically by sns.kdeplot.

+
+
+Code +
sns.kdeplot(data)
+
+plt.xlabel("Data")
+plt.xlim(-3, 10)
+plt.ylim(0, 0.5);
+
+
+
+
+

+
+
+
+
+
+

8.1.2.1 Step 1: Place a Kernel at Each Data Point

+

To begin generating a density curve, we need to choose a kernel and bandwidth value (\(\alpha\)). What are these exactly?

+

A kernel is a density curve. It is the mathematical function that attempts to capture the randomness of each data point in our sampled data. To explain what this means, consider just one of the datapoints in our dataset: \(2.2\). We obtained this datapoint by randomly sampling some information out in the real world (you can imagine \(2.2\) as representing a single measurement taken in an experiment, for example). If we were to sample a new datapoint, we may obtain a slightly different value. It could be higher than \(2.2\); it could also be lower than \(2.2\). We make the assumption that any future sampled datapoints will likely be similar in value to the data we’ve already drawn. This means that our kernel – our description of the probability of randomly sampling any new value – will be greatest at the datapoint we’ve already drawn but still have non-zero probability above and below it. The area under any kernel should integrate to 1, representing the total probability of drawing a new datapoint.

+

A bandwidth value, usually denoted by \(\alpha\), represents the width of the kernel. A large value of \(\alpha\) will result in a wide, short kernel function, while a small value will result in a narrow, tall kernel.

+

Below, we place a Gaussian kernel, plotted in orange, over the datapoint \(2.2\). A Gaussian kernel is simply the normal distribution, which you may have called a bell curve in Data 8.

+
+
+Code +
def gaussian_kernel(x, z, a):
+    # We'll discuss where this mathematical formulation came from later
+    return (1/np.sqrt(2*np.pi*a**2)) * np.exp((-(x - z)**2 / (2 * a**2)))
+
+# Plot our datapoint
+sns.rugplot([2.2], height=0.3)
+
+# Plot the kernel
+x = np.linspace(-3, 10, 1000)
+plt.plot(x, gaussian_kernel(x, 2.2, 1))
+
+plt.xlabel("Data")
+plt.ylabel("Density")
+plt.xlim(-3, 10)
+plt.ylim(0, 0.5);
+
+
+
+
+

+
+
+
+
+

To begin creating our KDE, we place a kernel on each datapoint in our dataset. For our dataset of 5 points, we will have 5 kernels.

+
+
+Code +
# You will work with the functions below in Lab 4
+def create_kde(kernel, pts, a):
+    # Takes in a kernel, set of points, and alpha
+    # Returns the KDE as a function
+    def f(x):
+        output = 0
+        for pt in pts:
+            output += kernel(x, pt, a)
+        return output / len(pts) # Normalization factor
+    return f
+
+def plot_kde(kernel, pts, a):
+    # Calls create_kde and plots the corresponding KDE
+    f = create_kde(kernel, pts, a)
+    x = np.linspace(min(pts) - 5, max(pts) + 5, 1000)
+    y = [f(xi) for xi in x]
+    plt.plot(x, y);
+    
+def plot_separate_kernels(kernel, pts, a, norm=False):
+    # Plots individual kernels, which are then summed to create the KDE
+    x = np.linspace(min(pts) - 5, max(pts) + 5, 1000)
+    for pt in pts:
+        y = kernel(x, pt, a)
+        if norm:
+            y /= len(pts)
+        plt.plot(x, y)
+    
+    plt.show();
+    
+plt.xlim(-3, 10)
+plt.ylim(0, 0.5)
+plt.xlabel("Data")
+plt.ylabel("Density")
+
+plot_separate_kernels(gaussian_kernel, data, a = 1)
+
+
+
+
+

+
+
+
+
+
+
+

8.1.2.2 Step 2: Normalize Kernels to Have a Total Area of 1

+

Above, we said that each kernel has an area of 1. Earlier, we also said that our goal is to construct a KDE curve using these kernels with a total area of 1. If we were to directly sum the kernels as they are, we would produce a KDE curve with an integrated area of (5 kernels) \(\times\) (area of 1 each) = 5. To avoid this, we will normalize each of our kernels. This involves multiplying each kernel by \(\frac{1}{\#\:\text{datapoints}}\).

+

In the cell below, we multiply each of our 5 kernels by \(\frac{1}{5}\) to apply normalization.

+
+
+Code +
plt.xlim(-3, 10)
+plt.ylim(0, 0.5)
+plt.xlabel("Data")
+plt.ylabel("Density")
+
+# The `norm` argument specifies whether or not to normalize the kernels
+plot_separate_kernels(gaussian_kernel, data, a = 1, norm = True)
+
+
+
+
+

+
+
+
+
+
+
+

8.1.2.3 Step 3: Sum the Normalized Kernels

+

Our KDE curve is the sum of the normalized kernels. Notice that the final curve is identical to the plot generated by sns.kdeplot we saw earlier!

+
+
+Code +
plt.xlim(-3, 10)
+plt.ylim(0, 0.5)
+plt.xlabel("Data")
+plt.ylabel("Density")
+
+plot_kde(gaussian_kernel, data, a = 1)
+
+
+
+
+

+
+
+
+
+
+
+
+

8.1.3 Kernel Functions and Bandwidths

+
\[\hat{f}_{\alpha}(x) = \frac{1}{n} \sum_{i=1}^{n} K_{\alpha}(x, x_i)\]

A general “KDE formula” function is given above.

+
  1. \(K_{\alpha}(x, x_i)\) is the kernel centered on the observation \(i\).
      • Each kernel individually has area 1.
      • \(x\) represents any number on the number line. It is the input to our function.
  2. \(n\) is the number of observed datapoints that we have.
      • We multiply by \(\frac{1}{n}\) so that the total area of the KDE is still 1.
  3. Each \(x_i \in \{x_1, x_2, \dots, x_n\}\) represents an observed datapoint.
      • These are what we use to create our KDE by summing multiple shifted kernels centered at these points.

  • \(\alpha\) (alpha) is the bandwidth or smoothing parameter.

A kernel (for our purposes) is a valid density function. This means it:

+
  • Must be non-negative for all inputs.
  • Must integrate to 1.
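As a quick check on these two properties (a small sketch of our own, not from the lecture), we can evaluate the same Gaussian kernel defined earlier on a fine grid and confirm numerically that it is non-negative and that its area is approximately 1. The function is restated here so the cell runs on its own.

import numpy as np

# Same Gaussian kernel as defined earlier in this section
def gaussian_kernel(x, z, a):
    return (1/np.sqrt(2*np.pi*a**2)) * np.exp(-(x - z)**2 / (2 * a**2))

x = np.linspace(-50, 50, 100_001)            # a wide grid around the center
density = gaussian_kernel(x, 2.2, 1)         # kernel centered at 2.2, alpha = 1

print("Minimum value:", density.min())                        # never negative
print("Approximate area:", (density * (x[1] - x[0])).sum())   # close to 1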
+

8.1.3.1 Gaussian Kernel

+

The most common kernel is the Gaussian kernel. The Gaussian kernel is equivalent to the Gaussian probability density function (the Normal distribution), centered at the observed value with a standard deviation of \(\alpha\) (this is known as the bandwidth parameter).

+

\[K_{\alpha}(x, x_i) = \frac{1}{\sqrt{2\pi\alpha^{2}}}e^{-\frac{(x-x_i)^{2}}{2\alpha^{2}}}\]

+

In this formula:

+
  • \(x\) (no subscript) represents any value along the x-axis of our plot.
  • \(x_i\) represents the \(i\)-th datapoint in our dataset. It is one of the values that we have actually collected in our data sampling process. In our example earlier, \(x_i=2.2\). Those of you who have taken a probability class may recognize \(x_i\) as the mean of the normal distribution.
      • Each kernel is centered on our observed values, so its distribution mean is \(x_i\).
  • \(\alpha\) is the bandwidth parameter, representing the width of our kernel. More formally, \(\alpha\) is the standard deviation of the Gaussian curve.
      • A large value of \(\alpha\) will produce a kernel that is wider and shorter – this leads to a smoother KDE when the kernels are summed together.
      • A small value of \(\alpha\) will produce a narrower, taller kernel, and, with it, a noisier KDE.

The details of this (admittedly intimidating) formula are less important than understanding its role in kernel density estimation – this equation gives us the shape of each kernel.

Gaussian Kernel, \(\alpha\) = 0.1 (gaussian_0.1)  |  Gaussian Kernel, \(\alpha\) = 1 (gaussian_1)
Gaussian Kernel, \(\alpha\) = 2 (gaussian_2)  |  Gaussian Kernel, \(\alpha\) = 10 (gaussian_10)
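Plots like the ones above can be reproduced with a few lines of code. The sketch below is our own (not the code used to generate the figures) and overlays the four bandwidths on a single set of axes rather than in separate panels; it reuses the gaussian_kernel function defined earlier.

import numpy as np
import matplotlib.pyplot as plt

# Assumes the gaussian_kernel(x, z, a) function defined earlier is in scope
x = np.linspace(-10, 14, 1000)
for a in [0.1, 1, 2, 10]:
    plt.plot(x, gaussian_kernel(x, 2.2, a), label=f"alpha = {a}")

plt.xlabel("Data")
plt.ylabel("Density")
plt.legend();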
+
+
+

8.1.3.2 Boxcar Kernel

+

Another example of a kernel is the Boxcar kernel. The boxcar kernel assigns a uniform density to points within a “window” of the observation, and a density of 0 elsewhere. The equation below is a boxcar kernel with the center at \(x_i\) and the bandwidth of \(\alpha\).

+

\[K_{\alpha}(x, x_i) = \begin{cases} \frac{1}{\alpha}, & |x - x_i| \le \frac{\alpha}{2}\\ 0, & \text{else} \end{cases}\]

+

The boxcar kernel is seldom used in practice – we include it here to demonstrate that a kernel function can take whatever form you would like, provided it integrates to 1 and does not output negative values.

+
+
+Code +
def boxcar_kernel(alpha, x, z):
+    return (((x-z)>=-alpha/2)&((x-z)<=alpha/2))/alpha
+
+xs = np.linspace(-5, 5, 200)
+alpha=1
+kde_curve = [boxcar_kernel(alpha, x, 0) for x in xs]
+plt.plot(xs, kde_curve);
+
+
+
+
+

+
The Boxcar kernel centered at 0 with bandwidth \(\alpha\) = 1.
+
+
+
+
+

The diagram on the right shows how the density curve for our 5-point dataset would have looked had we used the Boxcar kernel with bandwidth \(\alpha\) = 1.

KDE (kde_step_3)  |  Boxcar (boxcar_kernel)
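If you would like to reproduce a curve like the one on the right yourself, one possible approach (a sketch of our own, not the code behind the figure) is to reuse the plot_kde helper from earlier together with the boxcar kernel. A small wrapper is needed because boxcar_kernel was written with the argument order (alpha, x, z), while plot_kde expects kernels of the form kernel(x, z, a).

import matplotlib.pyplot as plt

# Assumes plot_kde, boxcar_kernel, and the 5-point dataset `data` from earlier
def boxcar_for_kde(x, z, a):
    # Reorder the arguments to match what plot_kde expects
    return boxcar_kernel(a, x, z)

plt.xlim(-3, 10)
plt.ylim(0, 0.5)
plt.xlabel("Data")
plt.ylabel("Density")

plot_kde(boxcar_for_kde, data, a=1)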
+
+
+
+
+

8.2 Diving Deeper into displot

+

As we saw earlier, we can use seaborn’s displot function to plot various distributions. In particular, displot allows you to specify the kind of plot and is a wrapper for histplot, kdeplot, and ecdfplot.

+

Below, we can see a couple of examples of how sns.displot can be used to plot various distributions.

+

First, we can plot a histogram by setting kind to "hist". Note that here we’ve specified stat="density" to normalize the histogram such that the area under the histogram is equal to 1.

+
+
sns.displot(data=wb, 
+            x="gni", 
+            kind="hist", 
+            stat="density") # default: stat=count and density integrates to 1
+plt.title("Distribution of gross national income per capita");
+
+
+
+

+
+
+
+
+

Now, what if we want to generate a KDE plot? We can set kind to "kde"!

+
+
sns.displot(data=wb, 
+            x="gni", 
+            kind='kde')
+plt.title("Distribution of gross national income per capita");
+
+
+
+

+
+
+
+
+

And finally, if we want to generate an Empirical Cumulative Distribution Function (ECDF), we can specify kind = "ecdf".

+
+
sns.displot(data=wb, 
+            x="gni", 
+            kind='ecdf')
+plt.title("Cumulative Distribution of gross national income per capita");
+
+
+
+

+
+
+
+
+
+
+

8.3 Relationships Between Quantitative Variables

+

Up until now, we’ve discussed how to visualize single-variable distributions. Going beyond this, we want to understand the relationship between pairs of numerical variables.

+
+

8.3.0.1 Scatter Plots

+

Scatter plots are one of the most useful tools in representing the relationship between pairs of quantitative variables. They are particularly important in gauging the strength, or correlation, of the relationship between variables. Knowledge of these relationships can then motivate decisions in our modeling process.

+

In matplotlib, we use the function plt.scatter to generate a scatter plot. Notice that, unlike our examples of plotting single-variable distributions, now we specify sequences of values to be plotted along the x-axis and the y-axis.

+
+
plt.scatter(wb["per capita: % growth: 2016"], \
+            wb['Adult literacy rate: Female: % ages 15 and older: 2005-14'])
+
+plt.xlabel("% growth per capita")
+plt.ylabel("Female adult literacy rate")
+plt.title("Female adult literacy against % growth");
+
+
+
+

+
+
+
+
+

In seaborn, we call the function sns.scatterplot. We use the x and y parameters to indicate the values to be plotted along the x and y axes, respectively. By using the hue parameter, we can specify a third variable to be used for coloring each scatter point.

+
+
sns.scatterplot(data = wb, x = "per capita: % growth: 2016", \
+               y = "Adult literacy rate: Female: % ages 15 and older: 2005-14", 
+               hue = "Continent")
+
+plt.title("Female adult literacy against % growth");
+
+
+
+

+
+
+
+
+
+
8.3.0.1.1 Overplotting
+

Although the plots above communicate the general relationship between the two plotted variables, they both suffer a major limitation – overplotting. Overplotting occurs when scatter points with similar values are stacked on top of one another, making it difficult to see the number of scatter points actually plotted in the visualization. Notice how in the upper righthand region of the plots, we cannot easily tell just how many points have been plotted. This makes our visualizations difficult to interpret.

+

We have a few methods to help reduce overplotting:

+
  • Decreasing the size of the scatter point markers can improve readability. We do this by setting a new value to the size parameter, s, of plt.scatter or sns.scatterplot.
  • Jittering is the process of adding a small amount of random noise to all x and y values to slightly shift the position of each datapoint. By randomly shifting all the data by some small distance, we can discern individual points more clearly without modifying the major trends of the original dataset.

In the cell below, we first jitter the data using np.random.uniform, then re-plot it with smaller markers. The resulting plot is much easier to interpret.

+
+
# Setting a seed ensures that we produce the same plot each time
+# This means that the course notes will not change each time you access them
+np.random.seed(150)
+
+# This call to np.random.uniform generates random numbers between -1 and 1
+# We add these random numbers to the original x data to jitter it slightly
+x_noise = np.random.uniform(-1, 1, len(wb))
+jittered_x = wb["per capita: % growth: 2016"] + x_noise
+
+# Repeat for y data
+y_noise = np.random.uniform(-5, 5, len(wb))
+jittered_y = wb["Adult literacy rate: Female: % ages 15 and older: 2005-14"] + y_noise
+
+# Setting the size parameter `s` changes the size of each point
+plt.scatter(jittered_x, jittered_y, s=15)
+
+plt.xlabel("% growth per capita (jittered)")
+plt.ylabel("Female adult literacy rate (jittered)")
+plt.title("Female adult literacy against % growth");
+
+
+
+

+
+
+
+
+
+
+
+

8.3.0.2 lmplot and jointplot

+

seaborn also includes several built-in functions for creating more sophisticated scatter plots. Two of the most commonly used examples are sns.lmplot and sns.jointplot.

+

sns.lmplot plots both a scatter plot and a linear regression line, all in one function call. We’ll discuss linear regression in a few lectures.

+
+
sns.lmplot(data = wb, x = "per capita: % growth: 2016", \
+           y = "Adult literacy rate: Female: % ages 15 and older: 2005-14")
+
+plt.title("Female adult literacy against % growth");
+
+
+
+

+
+
+
+
+

sns.jointplot creates a visualization with three components: a scatter plot, a histogram of the distribution of x values, and a histogram of the distribution of y values.

+
+
sns.jointplot(data = wb, x = "per capita: % growth: 2016", \
+           y = "Adult literacy rate: Female: % ages 15 and older: 2005-14")
+
+# plt.suptitle allows us to shift the title up so it does not overlap with the histogram
+plt.suptitle("Female adult literacy against % growth")
+plt.subplots_adjust(top=0.9);
+
+
+
+

+
+
+
+
+
+
+

8.3.0.3 Hex plots

+

For datasets with a very large number of datapoints, jittering is unlikely to fully resolve the issue of overplotting. In these cases, we can attempt to visualize our data by its density, rather than displaying each individual datapoint.

+

Hex plots can be thought of as two-dimensional histograms that show the joint distribution between two variables. This is particularly useful when working with very dense data. In a hex plot, the x-y plane is binned into hexagons. Hexagons that are darker in color indicate a greater density of data – that is, there are more data points that lie in the region enclosed by the hexagon.

+

We can generate a hex plot using sns.jointplot modified with the kind parameter.

+
+
sns.jointplot(data = wb, x = "per capita: % growth: 2016", \
+              y = "Adult literacy rate: Female: % ages 15 and older: 2005-14", \
+              kind = "hex")
+
+# plt.suptitle allows us to shift the title up so it does not overlap with the histogram
+plt.suptitle("Female adult literacy against % growth")
+plt.subplots_adjust(top=0.9);
+
+
+
+

+
+
+
+
+
+
+

8.3.0.4 Contour Plots

+

Contour plots are an alternative way of plotting the joint distribution of two variables. You can think of them as the 2-dimensional versions of KDE plots. A contour plot can be interpreted in a similar way to a topographic map. Each contour line represents an area that has the same density of datapoints throughout the region. Contours marked with darker colors contain more datapoints (a higher density) in that region.

+

sns.kdeplot will generate a contour plot if we specify both x and y data.

+
+
sns.kdeplot(data = wb, x = "per capita: % growth: 2016", \
+            y = "Adult literacy rate: Female: % ages 15 and older: 2005-14", \
+            fill = True)
+
+plt.title("Female adult literacy against % growth");
+
+
+
+

+
+
+
+
+
+
+
+

8.4 Transformations

+

We have now covered visualizations in great depth, looking into various forms of visualizations, plotting libraries, and high-level theory.

+

Much of this was done to uncover insights in data, which will prove necessary when we begin building models of data later in the course. A strong graphical correlation between two variables hints at an underlying relationship that we may want to study in greater detail. However, relying on visual relationships alone is limiting - not all plots show association. The presence of outliers and other statistical anomalies makes it hard to interpret data.

+

Transformations are the process of manipulating data to find significant relationships between variables. These are often found by applying mathematical functions to variables that “transform” their range of possible values and highlight some previously hidden associations between data.

+

To see why we may want to transform data, consider the following plot of adult literacy rates against gross national income.

+
+
+Code +
# Some data cleaning to help with the next example
+df = pd.DataFrame(index=wb.index)
+df['lit'] = wb['Adult literacy rate: Female: % ages 15 and older: 2005-14'] \
+            + wb["Adult literacy rate: Male: % ages 15 and older: 2005-14"]
+df['inc'] = wb['gni']
+df.dropna(inplace=True)
+
+plt.scatter(df["inc"], df["lit"])
+plt.xlabel("Gross national income per capita")
+plt.ylabel("Adult literacy rate")
+plt.title("Adult literacy rate against GNI per capita");
+
+
+
+
+

+
+
+
+
+

This plot is difficult to interpret for two reasons:

+
  • The data shown in the visualization appears almost “smushed” – it is heavily concentrated in the upper lefthand region of the plot. Even if we jittered the dataset, we likely would not be able to fully assess all datapoints in that area.
  • It is hard to generalize a clear relationship between the two plotted variables. While adult literacy rate appears to share some positive relationship with gross national income, we are not able to describe the specifics of this trend in much detail.
+

A transformation would allow us to visualize this data more clearly, which, in turn, would enable us to describe the underlying relationship between our variables of interest.

+

We will most commonly apply a transformation to linearize a relationship between variables. If we find a transformation to make a scatter plot of two variables linear, we can “backtrack” to find the exact relationship between the variables. This helps us in two major ways. Firstly, linear relationships are particularly simple to interpret – we have an intuitive sense of what the slope and intercept of a linear trend represent, and how they can help us understand the relationship between two variables. Secondly, linear relationships are the backbone of linear models. We will begin exploring linear modeling in great detail next week. As we’ll soon see, linear models become much more effective when we are working with linearized data.

+

In the remainder of this note, we will discuss how to linearize a dataset to produce the result below. Notice that the resulting plot displays a rough linear relationship between the values plotted on the x and y axes.

+

linearize

+
+

8.4.1 Linearization and Applying Transformations

+

To linearize a relationship, begin by asking yourself: what makes the data non-linear? It is helpful to repeat this question for each variable in your visualization.

+

Let’s start by considering the gross national income variable in our plot above. Looking at the x values in the scatter plot, we can see that most of the datapoints are clumped together at small values of x, while a few large outlying x values on the right distort the scale of the horizontal axis.

+

horizontal

+

If we decreased the size of these outliers relative to the bulk of the data, we could reduce the distortion of the horizontal axis. How can we do this? We need a transformation that will:

+
  • Decrease the magnitude of large x values by a significant amount.
  • Not drastically change the magnitude of small x values.
+

One function that produces this result is the log transformation. When we take the logarithm of a large number, the original number will decrease in magnitude dramatically. Conversely, when we take the logarithm of a small number, the original number does not change its value by as significant of an amount (to illustrate this, consider the difference between \(\log{(100)} = 4.61\) and \(\log{(10)} = 2.3\)).

+

In Data 100 (and most upper-division STEM classes), \(\log\) is used to refer to the natural logarithm with base \(e\).

+
+
# np.log takes the logarithm of an array or Series
+plt.scatter(np.log(df["inc"]), df["lit"])
+
+plt.xlabel("Log(gross national income per capita)")
+plt.ylabel("Adult literacy rate")
+plt.title("Adult literacy rate against Log(GNI per capita)");
+
+
+
+

+
+
+
+
+

After taking the logarithm of our x values, our plot appears much more balanced in its horizontal scale. We no longer have many datapoints clumped on one end and a few outliers out at extreme values.

+

Let’s repeat this reasoning for the y values. Considering only the vertical axis of the plot, notice how there are many datapoints concentrated at large y values. Only a few datapoints lie at smaller values of y.

+

If we were to “spread out” these large values of y more, we would no longer see the dense concentration in one region of the y-axis. We need a transformation that will:

+
  • Increase the magnitude of large values of y so these datapoints are distributed more broadly on the vertical scale.
  • Not substantially alter the scaling of small values of y (we do not want to drastically modify the lower end of the y axis, which is already distributed evenly on the vertical scale).
+

In this case, it is helpful to apply a power transformation – that is, raise our y values to a power. Let’s try raising our adult literacy rate values to the power of 4. Large values raised to the power of 4 will increase in magnitude proportionally much more than small values raised to the power of 4 (consider the difference between \(2^4 = 16\) and \(200^4 = 1600000000\)).

+
+
# Apply a log transformation to the x values and a power transformation to the y values
+plt.scatter(np.log(df["inc"]), df["lit"]**4)
+
+plt.xlabel("Log(gross national income per capita)")
+plt.ylabel("Adult literacy rate (4th power)")
+plt.suptitle("Adult literacy rate (4th power) against Log(GNI per capita)")
+plt.subplots_adjust(top=0.9);
+
+
+
+

+
+
+
+
+

Our scatter plot is looking a lot better! Now, we are plotting the log of our original x values on the horizontal axis, and the 4th power of our original y values on the vertical axis. We start to see an approximate linear relationship between our transformed variables.

+

What can we take away from this? We now know that the log of gross national income and adult literacy to the power of 4 are roughly linearly related. If we denote the original, untransformed gross national income values as \(x\) and the original adult literacy rate values as \(y\), we can use the standard form of a linear fit to express this relationship:

+

\[y^4 = m(\log{x}) + b\]

+

Where \(m\) represents the slope of the linear fit, while \(b\) represents the intercept.

+

The cell below computes \(m\) and \(b\) for our transformed data. We’ll discuss how this code was generated in a future lecture.

+
+
+Code +
# The code below fits a linear regression model. We'll discuss it at length in a future lecture
+from sklearn.linear_model import LinearRegression
+
+model = LinearRegression()
+model.fit(np.log(df[["inc"]]), df["lit"]**4)
+m, b = model.coef_[0], model.intercept_
+
+print(f"The slope, m, of the transformed data is: {m}")
+print(f"The intercept, b, of the transformed data is: {b}")
+
+df = df.sort_values("inc")
+plt.scatter(np.log(df["inc"]), df["lit"]**4, label="Transformed data")
+plt.plot(np.log(df["inc"]), m*np.log(df["inc"])+b, c="red", label="Linear regression")
+plt.xlabel("Log(gross national income per capita)")
+plt.ylabel("Adult literacy rate (4th power)")
+plt.legend();
+
+
+
The slope, m, of the transformed data is: 336400693.43172705
+The intercept, b, of the transformed data is: -1802204836.0479987
+
+
+
+
+

+
+
+
+
+

What if we want to understand the underlying relationship between our original variables, before they were transformed? We can simply rearrange our linear expression above!

+

Recall our linear relationship between the transformed variables \(\log{x}\) and \(y^4\).

+

\[y^4 = m(\log{x}) + b\]

+

By rearranging the equation, we find a relationship between the untransformed variables \(x\) and \(y\).

+

\[y = [m(\log{x}) + b]^{(1/4)}\]

+

When we plug in the values for \(m\) and \(b\) computed above, something interesting happens.

+
+
+Code +
# Now, plug the values for m and b into the relationship between the untransformed x and y
+plt.scatter(df["inc"], df["lit"], label="Untransformed data")
+plt.plot(df["inc"], (m*np.log(df["inc"])+b)**(1/4), c="red", label="Modeled relationship")
+plt.xlabel("Gross national income per capita")
+plt.ylabel("Adult literacy rate")
+plt.legend();
+
+
+
+
+

+
+
+
+
+

We have found a relationship between our original variables – gross national income and adult literacy rate!

+

Transformations are powerful tools for understanding our data in greater detail. To summarize what we just achieved:

+
  • We identified appropriate transformations to linearize the original data.
  • We used our knowledge of linear curves to compute the slope and intercept of the transformed data.
  • We used this slope and intercept information to derive a relationship in the untransformed data.
+

Linearization will be an important tool as we begin our work on linear modeling next week.

+
+

8.4.1.1 Tukey-Mosteller Bulge Diagram

+

The Tukey-Mosteller Bulge Diagram is a good guide when determining possible transformations to achieve linearity. It is a visual summary of the reasoning we just worked through above.

+

tukey_mosteller

+

How does it work? Each curved “bulge” represents a possible shape of non-linear data. To use the diagram, find which of the four bulges resembles your dataset most closely. Then, look at the axes of the quadrant containing that bulge. The horizontal axis lists possible transformations that could be applied to your x data for linearization; the vertical axis lists possible transformations for your y data. Note that each axis lists two possible transformations. While either has the potential to linearize your dataset, this is an iterative process: try a transformation, look at the result, and check whether you’ve actually achieved linearity. If not, continue testing other candidate transformations, as in the sketch below.
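One way to make this iteration concrete (a rough sketch of our own, not code from the lecture) is to try a few candidate transformations suggested by the bulge diagram and compare how linear each transformed scatter plot looks – for example, by checking the correlation coefficient of the transformed variables, which moves closer to 1 as the relationship becomes more linear.

import numpy as np

# Assumes the cleaned DataFrame `df` with columns "inc" and "lit" from earlier
candidates = {
    "log(x) vs y":    (np.log(df["inc"]), df["lit"]),
    "log(x) vs y^2":  (np.log(df["inc"]), df["lit"]**2),
    "log(x) vs y^4":  (np.log(df["inc"]), df["lit"]**4),
    "sqrt(x) vs y^4": (np.sqrt(df["inc"]), df["lit"]**4),
}

for name, (x, y) in candidates.items():
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: correlation = {r:.3f}")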

+

Generally:

+
  • \(\sqrt{}\) and \(\log{}\) will reduce the magnitude of large values.
  • Powers (\(^2\) and \(^3\)) will increase the spread in magnitude of large values.
+
+bulge +
+

Important: You should still understand the logic we worked through to determine how best to transform the data. The bulge diagram is just a summary of this same reasoning. You will be expected to be able to explain why a given transformation is or is not appropriate for linearization.

+
+
+
+

8.4.2 Additional Remarks

+

Visualization requires a lot of thought!

+
  • There are many tools for visualizing distributions.
      • Distribution of a single variable:
          1. Rugplot
          2. Histogram
          3. Density plot
          4. Box plot
          5. Violin plot
      • Joint distribution of two quantitative variables:
          1. Scatter plot
          2. Hex plot
          3. Contour plot
+

This class primarily uses seaborn and matplotlib, but pandas also has basic built-in plotting methods. Many other visualization libraries exist, and plotly is one of them.

+
  • plotly very easily creates interactive plots (a short sketch follows below).
  • plotly will occasionally appear in lecture code, labs, and assignments!
+
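As a small taste of what that looks like (a minimal sketch of ours, assuming the wb DataFrame used throughout this note), plotly’s express interface builds an interactive scatter plot in a single call; the rendered figure supports hovering, zooming, and panning.

import plotly.express as px

fig = px.scatter(wb,
                 x="per capita: % growth: 2016",
                 y="Adult literacy rate: Female: % ages 15 and older: 2005-14",
                 color="Continent")
fig.show()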

Next, we’ll go deeper into the theory behind visualization.

+
+
+
+

8.5 Visualization Theory

+

This section marks a pivot to the second major topic of this lecture - visualization theory. We’ll discuss the abstract nature of visualizations and analyze how they convey information.

+

Remember, we had two goals for visualizing data. This section is particularly important in:

+
  1. Helping us understand the data and results,
  2. Communicating our results and conclusions with others.
+
+

8.5.1 Information Channels

+

Visualizations are able to convey information through various encodings. In the remainder of this lecture, we’ll look at the use of color, scale, and depth, to name a few.

+
+

8.5.1.1 Encodings in Rugplots

+

One detail that we may have overlooked in our earlier discussion of rugplots is the importance of encodings. Rugplots are effective visuals because they utilize line thickness to encode frequency. Consider the following diagram:

+

rugplot_encoding

+
+
+

8.5.1.2 Multi-Dimensional Encodings

+

Encodings are also useful for representing multi-dimensional data. Notice how the following visual highlights four distinct “dimensions” of data:

+
  • X-axis
  • Y-axis
  • Area
  • Color
+

multi_dim_encoding

+

The human visual perception system can only process data in a three-dimensional space, but as you’ve seen, we can encode many more channels of information.
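For instance (a sketch of our own, reusing the wb columns from earlier), a single seaborn scatter plot can carry four channels at once: x position, y position, color, and marker area.

import seaborn as sns
import matplotlib.pyplot as plt

# x, y, hue, and size each encode a different variable
sns.scatterplot(data=wb,
                x="per capita: % growth: 2016",
                y="Adult literacy rate: Female: % ages 15 and older: 2005-14",
                hue="Continent",   # color channel
                size="gni")        # area channel

plt.title("Four encoded dimensions in one scatter plot");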

+
+
+
+

8.5.2 Harnessing the Axes

+
+

8.5.2.1 Consider the Scale of the Data

+

We should be careful not to misrepresent relationships in our data by manipulating the scale or axes. The visualization below improperly portrays two seemingly independent relationships on the same plot. The authors have clearly changed the scale of the y-axis to mislead their audience.

+

wrong_scale_viz

+

Notice how the downwards-facing line segment contains values in the millions, while the upwards-trending segment only contains values near three hundred thousand. These lines should not be intersecting.

+

When there is a large difference in the magnitude of the data, it’s advised to analyze percentages instead of counts. The following diagrams correctly display the trends in cancer screening and abortion rates.

+
+
+

good_viz_scale_1

+
+ +
+

good_viz_scale_2
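Mechanically, “analyze percentages instead of counts” just means rescaling each series before plotting it. The sketch below is our own illustration with hypothetical counts (the column names and numbers are made up); each series is expressed as a percentage of its own starting value so the two trends can share one axis.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical yearly counts for two services with very different magnitudes
counts = pd.DataFrame({
    "year": [2006, 2007, 2008, 2009],
    "service_a": [2_000_000, 1_900_000, 1_800_000, 1_700_000],
    "service_b": [290_000, 300_000, 310_000, 320_000],
})

# Express each series as a percentage of its first-year value
for col in ["service_a", "service_b"]:
    counts[col + "_pct"] = 100 * counts[col] / counts[col].iloc[0]

counts.plot(x="year", y=["service_a_pct", "service_b_pct"])
plt.ylabel("% of 2006 level");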

+
+
+
+
+

8.5.2.2 Reveal the Data

+

Great visualizations not only consider the scale of the data but also utilize the axes in a way that best conveys information. For example, data scientists commonly set certain axes limits to highlight parts of the visualization they are most interested in.

+
+
+

unrevealed_viz

+
+ +
+

revealed_viz

+
+
+

The visualization on the right captures the trend in coronavirus cases during March of 2020. Looking only at the visualization on the left, a viewer may incorrectly believe that coronavirus cases began to skyrocket on March 4th, 2020. The second illustration tells a different story - cases rose closer to March 21st, 2020.
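In matplotlib, revealing a region of interest is just a matter of setting the axis limits explicitly. The sketch below is ours, reusing the wb scatter from earlier; the particular limits are arbitrary and only for illustration.

import matplotlib.pyplot as plt

plt.scatter(wb["per capita: % growth: 2016"],
            wb["Adult literacy rate: Female: % ages 15 and older: 2005-14"],
            s=15)

# Restrict the axes to the region we care about (illustrative values)
plt.xlim(-5, 10)
plt.ylim(40, 100)

plt.xlabel("% growth per capita")
plt.ylabel("Female adult literacy rate")
plt.title("Zoomed view of literacy against % growth");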

+
+
+
+

8.5.3 Harnessing Color

+

Color is another important feature in visualizations that does more than what meets the eye.

+

We already explored using color to encode a categorical variable in our scatter plot. Let’s now discuss the uses of color in novel visualizations like colormaps and heatmaps.

+

5-8% of the world is red-green color blind, so we have to be very particular about our color schemes. We want to make our visualizations as accessible as possible. Choosing a set of colors that work together is evidently a challenging task!

+
+

8.5.3.1 Colormaps

+

Colormaps are mappings from pixel data to color values, and they’re often used to highlight distinct parts of an image. Let’s investigate a few properties of colormaps.
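To see what a colormap does in code (a minimal sketch of ours), matplotlib maps the same array of “pixel” intensities through whichever colormap you pass via the cmap argument.

import numpy as np
import matplotlib.pyplot as plt

# A simple gradient of pixel intensities from 0 to 1
gradient = np.linspace(0, 1, 256).reshape(1, -1)

fig, axes = plt.subplots(2, 1, figsize=(6, 2))
for ax, cmap in zip(axes, ["viridis", "jet"]):
    ax.imshow(gradient, aspect="auto", cmap=cmap)
    ax.set_title(cmap)
    ax.set_yticks([])

plt.tight_layout()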

+
+
+

Jet Colormap jet_colormap

+
+ +
+

Viridis Colormap viridis_colormap

+
+
+

The jet colormap is infamous for being misleading. While it seems more vibrant than viridis, the aggressive colors poorly encode numerical data. To understand why, let’s analyze the following images.

+
+
+

four_by_four_colormap

+
+ +
+

jet_3_colormap

+
+
+

The diagram on the left compares how a variety of colormaps represent pixel data that transitions from a high to low intensity. These include the jet colormap (row a) and grayscale (row b). Notice how the grayscale images do the best job in smoothly transitioning between pixel data. The jet colormap is the worst at this - the four images in row (a) look like a conglomeration of individual colors.

+

The difference is also evident in the images labeled (a) and (b) on the left side. The grayscale image is better at preserving finer detail in the vertical line strokes. Additionally, grayscale is preferred in X-ray scans for being more neutral. The intensity of the dark red color in the jet colormap is frightening and indicates something is wrong.

+

Why is the jet colormap so much worse? The answer lies in how its color composition is perceived by the human eye.

+
+
+

Jet Colormap Perception jet_perceptually_uniform

+
+ +
+

Viridis Colormap Perception viridis_perceptually_uniform

+
+
+

The jet colormap is largely misleading because it is not perceptually uniform. Perceptually uniform colormaps have the property that if the pixel data goes from 0.1 to 0.2, the perceptual change is the same as when the data goes from 0.8 to 0.9.

+

Notice how this uniformity shows up as the linear trend in the viridis colormap’s perceived brightness. The jet colormap, on the other hand, is largely non-linear - this is precisely why it’s considered a worse colormap.

+
+
+
+

8.5.4 Harnessing Markings

+

In our earlier discussion of multi-dimensional encodings, we analyzed a scatter plot with four pseudo-dimensions: the two axes, area, and color. Were these appropriate to use? The following diagram analyzes how well the human eye can distinguish between these “markings”.

+

markings_viz

+

There are a few key takeaways from this diagram:

  • Lengths are easy to discern. Don’t use plots with jiggled baselines - keep everything axis-aligned.
  • Avoid pie charts! Angle judgments are inaccurate.
  • Areas and volumes are hard to distinguish (area charts, word clouds, etc.).
+
+
+

8.5.5 Harnessing Conditioning

+

Conditioning is the process of comparing data that belong to separate groups. We’ve seen this before in overlaid distributions, side-by-side box plots, and scatter plots with categorical encodings. Here, we’ll introduce terminology that formalizes these examples.

+

Consider an example where we want to analyze income earnings for males and females with varying levels of education. There are multiple ways to compare this data.

+
+
+

jet_perceptually_uniform

+
+ +
+

viridis_perceptually_uniform

+
+
+

The barplot is an example of juxtaposition: placing multiple plots side by side, with the same scale. The scatter plot is an example of superposition: placing multiple density curves and scatter plots on top of each other.
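With the seaborn tools from earlier in this note (a sketch of ours, assuming the wb DataFrame is still loaded), displot expresses both ideas directly: hue= superposes the groups on one set of axes, while col= facets them into juxtaposed panels that share a scale.

import seaborn as sns

# Superposition: all continents share one set of axes, distinguished by color
sns.displot(data=wb, x="gni", hue="Continent", kind="kde")

# Juxtaposition: one small panel per continent, all with the same scale
sns.displot(data=wb, x="gni", col="Continent", col_wrap=3, kind="hist", stat="density");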

+

Which is better depends on the problem at hand. Here, superposition makes the precise wage difference very clear from a quick glance. However, many sophisticated plots convey information that favors the use of juxtaposition. Below is one example.

+

small_multiples

+
+
+

8.5.6 Harnessing Context

+

The last component of a great visualization is perhaps the most critical - the use of context. Informative titles, axis labels, and descriptive captions are all best practices that we’ve heard repeatedly in Data 8; a short sketch at the end of this section pulls these elements together.

+

A publication-ready plot (and every Data 100 plot) needs:

+
  • Informative title (takeaway, not description),
  • Axis labels,
  • Reference lines, markers, etc.,
  • Legends, if appropriate,
  • Captions that describe the data.
+

Captions should:

+
  • Be comprehensive and self-contained,
  • Describe what has been graphed,
  • Draw attention to important features,
  • Describe conclusions drawn from graphs.
+ + +
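Putting the checklist above together, here is a rough sketch (ours, reusing the wb scatter from earlier; the title and reference line are only illustrative) of what wiring these elements into a single figure can look like.

import matplotlib.pyplot as plt

plt.scatter(wb["per capita: % growth: 2016"],
            wb["Adult literacy rate: Female: % ages 15 and older: 2005-14"],
            s=15, label="Countries")

# Reference line at 0% growth to orient the reader
plt.axvline(0, color="gray", linestyle="--", label="No growth")

plt.xlabel("% growth per capita (2016)")
plt.ylabel("Female adult literacy rate (%)")
plt.title("Female adult literacy is high across most growth rates")  # takeaway, not description
plt.legend();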
{\LT@entry + {3}{44.67542pt}\LT@entry + {1}{33.93286pt}\LT@entry + {1}{61.6698pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.2.5}Nuisance Columns}{55}{subsection.4.2.5}\protected@file@percent } +\newlabel{nuisance-columns}{{4.2.5}{55}{Nuisance Columns}{subsection.4.2.5}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.2.6}Renaming Columns After Grouping}{55}{subsection.4.2.6}\protected@file@percent } +\newlabel{renaming-columns-after-grouping}{{4.2.6}{55}{Renaming Columns After Grouping}{subsection.4.2.6}{}} +\gdef \LT@lxiii {\LT@entry + {3}{42.18976pt}\LT@entry + {1}{33.93286pt}\LT@entry + {1}{61.6698pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.2.7}Some Data Science Payoff}{56}{subsection.4.2.7}\protected@file@percent } +\newlabel{some-data-science-payoff}{{4.2.7}{56}{Some Data Science Payoff}{subsection.4.2.7}{}} +\gdef \LT@lxiv {\LT@entry + {3}{45.9456pt}\LT@entry + {1}{35.51025pt}} +\gdef \LT@lxv {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{52.8441pt}} +\@writefile{toc}{\contentsline {section}{\numberline {4.3}\texttt {.groupby()}, Continued}{57}{section.4.3}\protected@file@percent } +\newlabel{groupby-continued}{{4.3}{57}{\texorpdfstring {\texttt {.groupby()}, Continued}{.groupby(), Continued}}{section.4.3}{}} +\gdef \LT@lxvi {\LT@entry + {3}{22.42502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{92.69055pt}\LT@entry + {3}{51.91275pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{47.36911pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.3.1}Raw \texttt {GroupBy} Objects}{58}{subsection.4.3.1}\protected@file@percent } +\newlabel{raw-groupby-objects}{{4.3.1}{58}{\texorpdfstring {Raw \texttt {GroupBy} Objects}{Raw GroupBy Objects}}{subsection.4.3.1}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.3.2}Other \texttt {GroupBy} Methods}{58}{subsection.4.3.2}\protected@file@percent } +\newlabel{other-groupby-methods}{{4.3.2}{58}{\texorpdfstring {Other \texttt {GroupBy} Methods}{Other GroupBy Methods}}{subsection.4.3.2}{}} +\gdef \LT@lxvii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{37.57921pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{29.17021pt}} +\gdef \LT@lxviii {\LT@entry + {1}{31.57921pt}\LT@entry + {1}{32.99117pt}\LT@entry + {1}{29.17021pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.3.3}Filtering by Group}{60}{subsection.4.3.3}\protected@file@percent } +\newlabel{filtering-by-group}{{4.3.3}{60}{Filtering by Group}{subsection.4.3.3}{}} +\gdef \LT@lxix {\LT@entry + {3}{16.95001pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{113.65979pt}\LT@entry + {3}{114.14159pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{52.8441pt}} +\gdef \LT@lxx {\LT@entry + {3}{118.27034pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{111.76544pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{52.8441pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {4.3.4}Aggregation with \texttt {lambda} Functions}{62}{subsection.4.3.4}\protected@file@percent } +\newlabel{aggregation-with-lambda-functions}{{4.3.4}{62}{\texorpdfstring {Aggregation with \texttt {lambda} Functions}{Aggregation with lambda Functions}}{subsection.4.3.4}{}} +\gdef \LT@lxxi {\LT@entry + {3}{22.42502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{101.83379pt}\LT@entry + {3}{66.9252pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + 
{3}{52.8441pt}} +\gdef \LT@lxxii {\LT@entry + {3}{118.27034pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{96.50114pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{52.8441pt}} +\gdef \LT@lxxiii {\LT@entry + {3}{22.42502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{95.58134pt}\LT@entry + {3}{120.93059pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{52.8441pt}} +\gdef \LT@lxxiv {\LT@entry + {3}{22.42502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{113.97734pt}\LT@entry + {3}{78.8826pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{47.36911pt}} +\gdef \LT@lxxv {\LT@entry + {3}{38.85pt}\LT@entry + {1}{36.9441pt}\LT@entry + {1}{28.73161pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{57.50821pt}\LT@entry + {1}{41.51025pt}\LT@entry + {1}{62.0859pt}} +\gdef \LT@lxxvi {\LT@entry + {3}{452.76006pt}\LT@entry + {1}{28.73161pt}\LT@entry + {1}{35.51025pt}} +\@writefile{toc}{\contentsline {section}{\numberline {4.4}Aggregating Data with Pivot Tables}{65}{section.4.4}\protected@file@percent } +\newlabel{aggregating-data-with-pivot-tables}{{4.4}{65}{Aggregating Data with Pivot Tables}{section.4.4}{}} +\gdef \LT@lxxvii {\LT@entry + {1}{27.93286pt}\LT@entry + {3}{39.37502pt}\LT@entry + {3}{33.37502pt}} +\gdef \LT@lxxviii {\LT@entry + {1}{27.93286pt}\LT@entry + {3}{28.42502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {3}{47.29185pt}\LT@entry + {3}{51.36586pt}} +\gdef \LT@lxxix {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{52.8441pt}} +\gdef \LT@lxxx {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{58.8441pt}\LT@entry + {1}{60.3777pt}} +\@writefile{toc}{\contentsline {section}{\numberline {4.5}Joining Tables}{68}{section.4.5}\protected@file@percent } +\newlabel{joining-tables}{{4.5}{68}{Joining Tables}{section.4.5}{}} +\gdef \LT@lxxxi {\LT@entry + {3}{38.85pt}\LT@entry + {1}{36.9441pt}\LT@entry + {1}{28.73161pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{52.0332pt}\LT@entry + {1}{41.51025pt}\LT@entry + {1}{62.0859pt}} +\gdef \LT@lxxxii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{47.92696pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{58.8441pt}\LT@entry + {1}{66.3777pt}\LT@entry + {1}{36.9441pt}\LT@entry + {1}{28.73161pt}\LT@entry + {1}{47.92696pt}\LT@entry + {3}{49.449pt}\LT@entry + {1}{41.51025pt}\LT@entry + {1}{62.0859pt}} +\@writefile{toc}{\contentsline {section}{\numberline {4.6}Parting Note}{69}{section.4.6}\protected@file@percent } +\newlabel{parting-note-2}{{4.6}{69}{Parting Note}{section.4.6}{}} +\@writefile{toc}{\contentsline {chapter}{\numberline {5}Data Cleaning and EDA}{71}{chapter.5}\protected@file@percent } +\@writefile{lof}{\addvspace {10\p@ }} +\@writefile{lot}{\addvspace {10\p@ }} +\@writefile{lop}{\addvspace {10\p@ }} +\newlabel{data-cleaning-and-eda}{{5}{71}{Data Cleaning and EDA}{chapter.5}{}} +\@writefile{toc}{\contentsline {section}{\numberline {5.1}Structure}{72}{section.5.1}\protected@file@percent } +\newlabel{structure}{{5.1}{72}{Structure}{section.5.1}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.1.1}File Formats}{72}{subsection.5.1.1}\protected@file@percent } +\newlabel{file-formats}{{5.1.1}{72}{File Formats}{subsection.5.1.1}{}} +\@writefile{toc}{\contentsline 
{subsubsection}{\numberline {5.1.1.1}CSV}{72}{subsubsection.5.1.1.1}\protected@file@percent } +\newlabel{csv}{{5.1.1.1}{72}{CSV}{subsubsection.5.1.1.1}{}} +\gdef \LT@lxxxiii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{30.9441pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.1.1.2}TSV}{73}{subsubsection.5.1.1.2}\protected@file@percent } +\newlabel{tsv}{{5.1.1.2}{73}{TSV}{subsubsection.5.1.1.2}{}} +\gdef \LT@lxxxiv {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{30.9441pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.1.1.3}JSON}{74}{subsubsection.5.1.1.3}\protected@file@percent } +\newlabel{json}{{5.1.1.3}{74}{JSON}{subsubsection.5.1.1.3}{}} +\gdef \LT@lxxxv {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{33.93286pt}\LT@entry + {3}{109.86014pt}\LT@entry + {3}{124.27034pt}\LT@entry + {1}{73.33095pt}\LT@entry + {1}{42.62715pt}\LT@entry + {3}{30.9441pt}} +\@writefile{toc}{\contentsline {paragraph}{\numberline {5.1.1.3.1}EDA with JSON: Berkeley COVID-19 Data}{75}{paragraph.5.1.1.3.1}\protected@file@percent } +\newlabel{eda-with-json-berkeley-covid-19-data}{{5.1.1.3.1}{75}{EDA with JSON: Berkeley COVID-19 Data}{paragraph.5.1.1.3.1}{}} +\@writefile{toc}{\contentsline {subparagraph}{\numberline {5.1.1.3.1.1}File Size}{76}{subparagraph.5.1.1.3.1.1}\protected@file@percent } +\newlabel{file-size}{{5.1.1.3.1.1}{76}{File Size}{subparagraph.5.1.1.3.1.1}{}} +\@writefile{toc}{\contentsline {subparagraph}{\numberline {5.1.1.3.1.2}Unix Commands}{76}{subparagraph.5.1.1.3.1.2}\protected@file@percent } +\newlabel{unix-commands}{{5.1.1.3.1.2}{76}{Unix Commands}{subparagraph.5.1.1.3.1.2}{}} +\@writefile{toc}{\contentsline {subparagraph}{\numberline {5.1.1.3.1.3}File Contents}{77}{subparagraph.5.1.1.3.1.3}\protected@file@percent } +\newlabel{file-contents}{{5.1.1.3.1.3}{77}{File Contents}{subparagraph.5.1.1.3.1.3}{}} +\@writefile{toc}{\contentsline {subparagraph}{\numberline {5.1.1.3.1.4}Examining the Data Field for Records}{79}{subparagraph.5.1.1.3.1.4}\protected@file@percent } +\newlabel{examining-the-data-field-for-records}{{5.1.1.3.1.4}{79}{Examining the Data Field for Records}{subparagraph.5.1.1.3.1.4}{}} +\gdef \LT@lxxxvi {\LT@entry + {3}{22.42502pt}\LT@entry + {3}{115.14899pt}\LT@entry + {3}{221.25446pt}\LT@entry + {1}{50.09506pt}\LT@entry + {3}{66.75pt}\LT@entry + {1}{78.63075pt}\LT@entry + {1}{69.2028pt}\LT@entry + {1}{83.18594pt}\LT@entry + {1}{35.71771pt}\LT@entry + {3}{109.9368pt}\LT@entry + {1}{63.49785pt}\LT@entry + {1}{91.57425pt}} +\@writefile{toc}{\contentsline {subparagraph}{\numberline {5.1.1.3.1.5}Summary of exploring the JSON file}{80}{subparagraph.5.1.1.3.1.5}\protected@file@percent } +\newlabel{summary-of-exploring-the-json-file}{{5.1.1.3.1.5}{80}{Summary of exploring the JSON file}{subparagraph.5.1.1.3.1.5}{}} +\@writefile{toc}{\contentsline {subparagraph}{\numberline {5.1.1.3.1.6}Loading COVID Data into \texttt {pandas}}{80}{subparagraph.5.1.1.3.1.6}\protected@file@percent } +\newlabel{loading-covid-data-into-pandas}{{5.1.1.3.1.6}{80}{\texorpdfstring {Loading COVID Data into \texttt {pandas}}{Loading COVID Data into pandas}}{subparagraph.5.1.1.3.1.6}{}} +\gdef \LT@lxxxvii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{66.75pt}\LT@entry + {1}{39.67065pt}\LT@entry + {3}{92.40645pt}} +\gdef 
\LT@lxxxviii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{70.6044pt}\LT@entry + {3}{66.75pt}\LT@entry + {3}{52.23091pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.1.2}Primary and Foreign Keys}{81}{subsection.5.1.2}\protected@file@percent } +\newlabel{primary-and-foreign-keys}{{5.1.2}{81}{Primary and Foreign Keys}{subsection.5.1.2}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.1.3}Variable Types}{81}{subsection.5.1.3}\protected@file@percent } +\newlabel{variable-types}{{5.1.3}{81}{Variable Types}{subsection.5.1.3}{}} +\@writefile{lof}{\contentsline {figure}{\numberline {5.1}{\ignorespaces Classification of variable types\relax }}{82}{figure.caption.6}\protected@file@percent } +\@writefile{toc}{\contentsline {section}{\numberline {5.2}Granularity, Scope, and Temporality}{83}{section.5.2}\protected@file@percent } +\newlabel{granularity-scope-and-temporality}{{5.2}{83}{Granularity, Scope, and Temporality}{section.5.2}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.2.1}Granularity}{83}{subsection.5.2.1}\protected@file@percent } +\newlabel{granularity}{{5.2.1}{83}{Granularity}{subsection.5.2.1}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.2.2}Scope}{83}{subsection.5.2.2}\protected@file@percent } +\newlabel{scope}{{5.2.2}{83}{Scope}{subsection.5.2.2}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.2.3}Temporality}{83}{subsection.5.2.3}\protected@file@percent } +\newlabel{temporality}{{5.2.3}{83}{Temporality}{subsection.5.2.3}{}} +\gdef \LT@lxxxix {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{58.39516pt}\LT@entry + {3}{170.32603pt}\LT@entry + {3}{131.23453pt}\LT@entry + {1}{69.19185pt}\LT@entry + {3}{135.71307pt}\LT@entry + {1}{55.95331pt}\LT@entry + {3}{131.23453pt}\LT@entry + {3}{294.22523pt}\LT@entry + {3}{153.50682pt}\LT@entry + {3}{52.9311pt}\LT@entry + {1}{30.9441pt}} +\gdef \LT@xc {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{58.39516pt}\LT@entry + {3}{170.32603pt}\LT@entry + {1}{67.5165pt}\LT@entry + {1}{69.19185pt}\LT@entry + {3}{135.71307pt}\LT@entry + {1}{55.95331pt}\LT@entry + {3}{131.23453pt}\LT@entry + {3}{294.22523pt}\LT@entry + {3}{153.50682pt}\LT@entry + {3}{52.9311pt}\LT@entry + {1}{30.9441pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {5.2.3.1}Temporality with \texttt {pandas}' \texttt {dt} accessors}{84}{subsubsection.5.2.3.1}\protected@file@percent } +\newlabel{temporality-with-pandas-dt-accessors}{{5.2.3.1}{84}{\texorpdfstring {Temporality with \texttt {pandas}' \texttt {dt} accessors}{Temporality with pandas' dt accessors}}{subsubsection.5.2.3.1}{}} +\gdef \LT@xci {\LT@entry + {3}{27.90001pt}\LT@entry + {1}{58.39516pt}\LT@entry + {3}{163.78888pt}\LT@entry + {1}{67.5165pt}\LT@entry + {1}{69.19185pt}\LT@entry + {3}{164.68677pt}\LT@entry + {1}{55.95331pt}\LT@entry + {3}{131.23453pt}\LT@entry + {3}{298.86804pt}\LT@entry + {3}{169.09961pt}\LT@entry + {3}{52.9311pt}\LT@entry + {1}{30.9441pt}} +\@writefile{toc}{\contentsline {section}{\numberline {5.3}Faithfulness}{86}{section.5.3}\protected@file@percent } +\newlabel{faithfulness}{{5.3}{86}{Faithfulness}{section.5.3}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.3.1}Missing Values}{86}{subsection.5.3.1}\protected@file@percent } +\newlabel{missing-values}{{5.3.1}{86}{Missing Values}{subsection.5.3.1}{}} +\@writefile{toc}{\contentsline {section}{\numberline {5.4}EDA Demo 1: Tuberculosis in the United States}{87}{section.5.4}\protected@file@percent } +\newlabel{eda-demo-1-tuberculosis-in-the-united-states}{{5.4}{87}{EDA 
Demo 1: Tuberculosis in the United States}{section.5.4}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.4.1}CSVs and Field Names}{87}{subsection.5.4.1}\protected@file@percent } +\newlabel{csvs-and-field-names}{{5.4.1}{87}{CSVs and Field Names}{subsection.5.4.1}{}} +\gdef \LT@xcii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{89.9859pt}\LT@entry + {1}{89.19748pt}\LT@entry + {1}{71.31615pt}\LT@entry + {1}{71.31615pt}\LT@entry + {1}{75.10484pt}\LT@entry + {1}{71.31615pt}\LT@entry + {1}{65.31615pt}} +\gdef \LT@xciii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {3}{36.9441pt}\LT@entry + {3}{36.9441pt}\LT@entry + {3}{36.9441pt}\LT@entry + {1}{42.41911pt}\LT@entry + {1}{42.41911pt}\LT@entry + {1}{36.41911pt}} +\gdef \LT@xciv {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{94.65118pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.4.2}Record Granularity}{90}{subsection.5.4.2}\protected@file@percent } +\newlabel{record-granularity}{{5.4.2}{90}{Record Granularity}{subsection.5.4.2}{}} +\gdef \LT@xcv {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{94.65118pt}} +\gdef \LT@xcvi {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{94.65118pt}} +\gdef \LT@xcvii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{92.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.4.3}Gather Census Data}{92}{subsection.5.4.3}\protected@file@percent } +\newlabel{gather-census-data}{{5.4.3}{92}{Gather Census Data}{subsection.5.4.3}{}} +\gdef \LT@xcviii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{92.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.4.4}Joining Data (Merging \texttt {DataFrame}s)}{93}{subsection.5.4.4}\protected@file@percent } +\newlabel{joining-data-merging-dataframes}{{5.4.4}{93}{\texorpdfstring {Joining Data (Merging \texttt {DataFrame}s)}{Joining Data (Merging DataFrames)}}{subsection.5.4.4}{}} +\gdef \LT@xcix {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{106.73938pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {1}{106.73938pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{49.8pt}} +\gdef \LT@c {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + 
{3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{49.8pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.4.5}Reproducing Data: Compute Incidence}{94}{subsection.5.4.5}\protected@file@percent } +\newlabel{reproducing-data-compute-incidence}{{5.4.5}{94}{Reproducing Data: Compute Incidence}{subsection.5.4.5}{}} +\gdef \LT@ci {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {1}{128.90279pt}} +\gdef \LT@cii {\LT@entry + {3}{11.47502pt}\LT@entry + {1}{89.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {3}{55.8pt}\LT@entry + {1}{134.90279pt}\LT@entry + {1}{134.90279pt}\LT@entry + {1}{128.90279pt}} +\gdef \LT@ciii {\LT@entry + {3}{32.46616pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {3}{69.7941pt}\LT@entry + {3}{69.7941pt}\LT@entry + {3}{69.7941pt}\LT@entry + {1}{134.90279pt}\LT@entry + {1}{134.90279pt}\LT@entry + {1}{128.90279pt}} +\gdef \LT@civ {\LT@entry + {1}{83.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{94.65118pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.4.6}Bonus EDA: Reproducing the Reported Statistic}{96}{subsection.5.4.6}\protected@file@percent } +\newlabel{bonus-eda-reproducing-the-reported-statistic}{{5.4.6}{96}{Bonus EDA: Reproducing the Reported Statistic}{subsection.5.4.6}{}} +\gdef \LT@cv {\LT@entry + {1}{86.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\gdef \LT@cvi {\LT@entry + {1}{86.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\gdef \LT@cvii {\LT@entry + {1}{83.9859pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{94.65118pt}} +\gdef \LT@cviii {\LT@entry + {1}{86.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\gdef \LT@cix {\LT@entry + {1}{86.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\gdef \LT@cx {\LT@entry + {1}{86.74529pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\gdef \LT@cxi {\LT@entry + {3}{49.95331pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + 
{3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{55.27501pt}} +\gdef \LT@cxii {\LT@entry + {3}{49.95331pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{80.67839pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {1}{100.65118pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {3}{61.27501pt}\LT@entry + {1}{134.90279pt}\LT@entry + {1}{134.90279pt}\LT@entry + {1}{128.90279pt}} +\@writefile{toc}{\contentsline {section}{\numberline {5.5}EDA Demo 2: Mauna Loa CO2 Data -- A Lesson in Data Faithfulness}{100}{section.5.5}\protected@file@percent } +\newlabel{eda-demo-2-mauna-loa-co2-data-a-lesson-in-data-faithfulness}{{5.5}{100}{EDA Demo 2: Mauna Loa CO2 Data -- A Lesson in Data Faithfulness}{section.5.5}{}} +\gdef \LT@cxiii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{17.47502pt}\LT@entry + {3}{47.8941pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{15.12137pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.1}Reading this file into \texttt {Pandas}?}{101}{subsection.5.5.1}\protected@file@percent } +\newlabel{reading-this-file-into-pandas}{{5.5.1}{101}{\texorpdfstring {Reading this file into \texttt {Pandas}?}{Reading this file into Pandas?}}{subsection.5.5.1}{}} +\gdef \LT@cxiv {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.2}Exploring Variable Feature Types}{102}{subsection.5.5.2}\protected@file@percent } +\newlabel{exploring-variable-feature-types}{{5.5.2}{102}{Exploring Variable Feature Types}{subsection.5.5.2}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.3}Visualizing CO2}{102}{subsection.5.5.3}\protected@file@percent } +\newlabel{visualizing-co2}{{5.5.3}{102}{Visualizing CO2}{subsection.5.5.3}{}} +\gdef \LT@cxv {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\gdef \LT@cxvi {\LT@entry + {3}{22.42502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.4}Sanity Checks: Reasoning about the data}{104}{subsection.5.5.4}\protected@file@percent } +\newlabel{sanity-checks-reasoning-about-the-data}{{5.5.4}{104}{Sanity Checks: Reasoning about the data}{subsection.5.5.4}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.5}Understanding Missing Value 1: \texttt {Days}}{105}{subsection.5.5.5}\protected@file@percent } +\newlabel{understanding-missing-value-1-days}{{5.5.5}{105}{\texorpdfstring {Understanding Missing Value 1: \texttt {Days}}{Understanding Missing Value 1: Days}}{subsection.5.5.5}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.6}Understanding Missing Value 2: \texttt {Avg}}{107}{subsection.5.5.6}\protected@file@percent } +\newlabel{understanding-missing-value-2-avg}{{5.5.6}{107}{\texorpdfstring {Understanding Missing Value 2: \texttt {Avg}}{Understanding Missing Value 2: Avg}}{subsection.5.5.6}{}} +\gdef \LT@cxvii {\LT@entry + {3}{22.42502pt}\LT@entry + {3}{33.90001pt}\LT@entry + 
{1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{40.59045pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.7}Drop, \texttt {NaN}, or Impute Missing \texttt {Avg} Data?}{109}{subsection.5.5.7}\protected@file@percent } +\newlabel{drop-nan-or-impute-missing-avg-data}{{5.5.7}{109}{\texorpdfstring {Drop, \texttt {NaN}, or Impute Missing \texttt {Avg} Data?}{Drop, NaN, or Impute Missing Avg Data?}}{subsection.5.5.7}{}} +\gdef \LT@cxviii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\gdef \LT@cxix {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\gdef \LT@cxx {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {1}{27.51616pt}\LT@entry + {1}{53.05156pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {3}{42.41911pt}\LT@entry + {1}{29.63011pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.5.8}Presenting the Data: A Discussion on Data Granularity}{114}{subsection.5.5.8}\protected@file@percent } +\newlabel{presenting-the-data-a-discussion-on-data-granularity}{{5.5.8}{114}{Presenting the Data: A Discussion on Data Granularity}{subsection.5.5.8}{}} +\@writefile{toc}{\contentsline {section}{\numberline {5.6}Summary}{115}{section.5.6}\protected@file@percent } +\newlabel{summary}{{5.6}{115}{Summary}{section.5.6}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.6.1}Dealing with Missing Values}{116}{subsection.5.6.1}\protected@file@percent } +\newlabel{dealing-with-missing-values}{{5.6.1}{116}{Dealing with Missing Values}{subsection.5.6.1}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {5.6.2}EDA and Data Wrangling}{116}{subsection.5.6.2}\protected@file@percent } +\newlabel{eda-and-data-wrangling}{{5.6.2}{116}{EDA and Data Wrangling}{subsection.5.6.2}{}} +\@writefile{toc}{\contentsline {chapter}{\numberline {6}Regular Expressions}{117}{chapter.6}\protected@file@percent } +\@writefile{lof}{\addvspace {10\p@ }} +\@writefile{lot}{\addvspace {10\p@ }} +\@writefile{lop}{\addvspace {10\p@ }} +\newlabel{regular-expressions}{{6}{117}{Regular Expressions}{chapter.6}{}} +\@writefile{toc}{\contentsline {section}{\numberline {6.1}Why Work with Text?}{117}{section.6.1}\protected@file@percent } +\newlabel{why-work-with-text}{{6.1}{117}{Why Work with Text?}{section.6.1}{}} +\@writefile{toc}{\contentsline {section}{\numberline {6.2}Python String Methods}{117}{section.6.2}\protected@file@percent } +\newlabel{python-string-methods}{{6.2}{117}{Python String Methods}{section.6.2}{}} +\gdef \LT@cxxi {\LT@entry + {1}{146.90495pt}\LT@entry + {1}{117.69pt}\LT@entry + {1}{170.4117pt}} +\gdef \LT@cxxii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{140.98004pt}\LT@entry + {1}{30.9441pt}} +\gdef \LT@cxxiii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{111.22888pt}\LT@entry + {1}{58.1877pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.2.1}Canonicalization}{118}{subsection.6.2.1}\protected@file@percent } +\newlabel{canonicalization}{{6.2.1}{118}{Canonicalization}{subsection.6.2.1}{}} +\gdef \LT@cxxiv {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{140.98004pt}\LT@entry + {1}{36.9441pt}\LT@entry + 
{1}{112.17119pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.2.1.1}Canonicalization with Python String Manipulation}{119}{subsubsection.6.2.1.1}\protected@file@percent } +\newlabel{canonicalization-with-python-string-manipulation}{{6.2.1.1}{119}{Canonicalization with Python String Manipulation}{subsubsection.6.2.1.1}{}} +\gdef \LT@cxxv {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{111.22888pt}\LT@entry + {1}{64.1877pt}\LT@entry + {1}{112.17119pt}} +\gdef \LT@cxxvi {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{111.22888pt}\LT@entry + {1}{64.1877pt}\LT@entry + {1}{118.17119pt}\LT@entry + {1}{112.22594pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.2.1.2}Canonicalization with Pandas Series Methods}{120}{subsubsection.6.2.1.2}\protected@file@percent } +\newlabel{canonicalization-with-pandas-series-methods}{{6.2.1.2}{120}{Canonicalization with Pandas Series Methods}{subsubsection.6.2.1.2}{}} +\gdef \LT@cxxvii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{140.98004pt}\LT@entry + {1}{36.9441pt}\LT@entry + {1}{118.17119pt}\LT@entry + {1}{112.22594pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.2.2}Extraction}{121}{subsection.6.2.2}\protected@file@percent } +\newlabel{extraction}{{6.2.2}{121}{Extraction}{subsection.6.2.2}{}} +\@writefile{toc}{\contentsline {section}{\numberline {6.3}RegEx Basics}{122}{section.6.3}\protected@file@percent } +\newlabel{regex-basics}{{6.3}{122}{RegEx Basics}{section.6.3}{}} +\gdef \LT@cxxviii {\LT@entry + {1}{105.69pt}\LT@entry + {1}{86.7675pt}\LT@entry + {1}{82.61781pt}\LT@entry + {1}{70.1383pt}\LT@entry + {1}{89.0608pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.3.1}Basics RegEx Syntax}{123}{subsection.6.3.1}\protected@file@percent } +\newlabel{basics-regex-syntax}{{6.3.1}{123}{Basics RegEx Syntax}{subsection.6.3.1}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.3.1.1}Examples}{123}{subsubsection.6.3.1.1}\protected@file@percent } +\newlabel{examples}{{6.3.1.1}{123}{Examples}{subsubsection.6.3.1.1}{}} +\gdef \LT@cxxix {\LT@entry + {1}{197.70389pt}\LT@entry + {1}{82.40508pt}\LT@entry + {1}{78.5003pt}\LT@entry + {1}{80.34746pt}} +\@writefile{toc}{\contentsline {section}{\numberline {6.4}RegEx Expanded}{124}{section.6.4}\protected@file@percent } +\newlabel{regex-expanded}{{6.4}{124}{RegEx Expanded}{section.6.4}{}} +\gdef \LT@cxxx {\LT@entry + {1}{211.29303pt}\LT@entry + {1}{211.29303pt}} +\gdef \LT@cxxxi {\LT@entry + {1}{197.70389pt}\LT@entry + {1}{82.40508pt}\LT@entry + {1}{78.5003pt}\LT@entry + {1}{80.34746pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.4.0.1}Examples}{125}{subsubsection.6.4.0.1}\protected@file@percent } +\newlabel{examples-1}{{6.4.0.1}{125}{Examples}{subsubsection.6.4.0.1}{}} +\@writefile{toc}{\contentsline {section}{\numberline {6.5}Convenient RegEx}{125}{section.6.5}\protected@file@percent } +\newlabel{convenient-regex}{{6.5}{125}{Convenient RegEx}{section.6.5}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.5.1}Greediness}{126}{subsection.6.5.1}\protected@file@percent } +\newlabel{greediness}{{6.5.1}{126}{Greediness}{subsection.6.5.1}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.5.2}Examples}{126}{subsection.6.5.2}\protected@file@percent } +\newlabel{examples-2}{{6.5.2}{126}{Examples}{subsection.6.5.2}{}} +\@writefile{toc}{\contentsline {section}{\numberline {6.6}Regex in Python and Pandas (RegEx Groups)}{127}{section.6.6}\protected@file@percent } 
+\newlabel{regex-in-python-and-pandas-regex-groups}{{6.6}{127}{Regex in Python and Pandas (RegEx Groups)}{section.6.6}{}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.6.1}Canonicalization}{127}{subsection.6.6.1}\protected@file@percent } +\newlabel{canonicalization-1}{{6.6.1}{127}{Canonicalization}{subsection.6.6.1}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.6.1.1}Canonicalization with RegEx}{127}{subsubsection.6.6.1.1}\protected@file@percent } +\newlabel{canonicalization-with-regex}{{6.6.1.1}{127}{Canonicalization with RegEx}{subsubsection.6.6.1.1}{}} +\gdef \LT@cxxxii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{219.295pt}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.6.1.2}Canonicalization with \texttt {pandas}}{128}{subsubsection.6.6.1.2}\protected@file@percent } +\newlabel{canonicalization-with-pandas}{{6.6.1.2}{128}{\texorpdfstring {Canonicalization with \texttt {pandas}}{Canonicalization with pandas}}{subsubsection.6.6.1.2}{}} +\gdef \LT@cxxxiii {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{155.69742pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.6.2}Extraction}{129}{subsection.6.6.2}\protected@file@percent } +\newlabel{extraction-1}{{6.6.2}{129}{Extraction}{subsection.6.6.2}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.6.2.1}Extraction with RegEx}{129}{subsubsection.6.6.2.1}\protected@file@percent } +\newlabel{extraction-with-regex}{{6.6.2.1}{129}{Extraction with RegEx}{subsubsection.6.6.2.1}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.6.2.2}Extraction with \texttt {pandas}}{129}{subsubsection.6.6.2.2}\protected@file@percent } +\newlabel{extraction-with-pandas}{{6.6.2.2}{129}{\texorpdfstring {Extraction with \texttt {pandas}}{Extraction with pandas}}{subsubsection.6.6.2.2}{}} +\gdef \LT@cxxxiv {\LT@entry + {3}{11.47502pt}\LT@entry + {3}{33.90001pt}\LT@entry + {3}{33.90001pt}\LT@entry + {3}{27.90001pt}} +\gdef \LT@cxxxv {\LT@entry + {3}{452.76006pt}\LT@entry + {1}{41.49931pt}\LT@entry + {3}{28.42502pt}\LT@entry + {3}{22.95001pt}\LT@entry + {3}{27.90001pt}} +\@writefile{toc}{\contentsline {subsection}{\numberline {6.6.3}Regular Expression Capture Groups}{130}{subsection.6.6.3}\protected@file@percent } +\newlabel{regular-expression-capture-groups}{{6.6.3}{130}{Regular Expression Capture Groups}{subsection.6.6.3}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.6.3.1}Example 1}{131}{subsubsection.6.6.3.1}\protected@file@percent } +\newlabel{example-1}{{6.6.3.1}{131}{Example 1}{subsubsection.6.6.3.1}{}} +\@writefile{toc}{\contentsline {subsubsection}{\numberline {6.6.3.2}Example 2}{131}{subsubsection.6.6.3.2}\protected@file@percent } +\newlabel{example-2}{{6.6.3.2}{131}{Example 2}{subsubsection.6.6.3.2}{}} diff --git a/index.log b/index.log new file mode 100644 index 000000000..0b2de934e --- /dev/null +++ b/index.log @@ -0,0 +1,2767 @@ +This is XeTeX, Version 3.141592653-2.6-0.999994 (TeX Live 2022) (preloaded format=xelatex 2022.8.26) 27 AUG 2024 03:46 +entering extended mode + restricted \write18 enabled. + %&-line parsing enabled. 
[The generated XeTeX build log for index.tex continues with LaTeX2e format information and package-loading messages for the scrreprt (KOMA-Script) document class and the amsmath, amssymb, unicode-math, fontspec, lmodern, upquote, microtype, xcolor, fancyvrb, framed, longtable, booktabs, array, multirow, calc, footnotehyper, graphicx, tcolorbox, and pgf packages.]
+(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/systemlayer/pgfsysprotocol.
+code.tex +File: pgfsysprotocol.code.tex 2021/05/15 v3.1.9a (3.1.9a) +)) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcore.code.tex +Package: pgfcore 2021/05/15 v3.1.9a (3.1.9a) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmath.code.tex (/usr +/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathcalc.code.tex (/usr/ +local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathutil.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathparser.code.tex +\pgfmath@dimen=\dimen190 +\pgfmath@count=\count317 +\pgfmath@box=\box66 +\pgfmath@toks=\toks33 +\pgfmath@stack@operand=\toks34 +\pgfmath@stack@operation=\toks35 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.code. +tex +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.basic +.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.trigo +nometric.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.rando +m.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.compa +rison.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.base. +code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.round +.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.misc. +code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmathfunctions.integ +erarithmetics.code.tex))) (/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/m +ath/pgfmathfloat.code.tex +\c@pgfmathroundto@lastzeros=\count318 +)) (/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfint.code.tex) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorepoints.co +de.tex +File: pgfcorepoints.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgf@picminx=\dimen191 +\pgf@picmaxx=\dimen192 +\pgf@picminy=\dimen193 +\pgf@picmaxy=\dimen194 +\pgf@pathminx=\dimen195 +\pgf@pathmaxx=\dimen196 +\pgf@pathminy=\dimen197 +\pgf@pathmaxy=\dimen198 +\pgf@xx=\dimen199 +\pgf@xy=\dimen256 +\pgf@yx=\dimen257 +\pgf@yy=\dimen258 +\pgf@zx=\dimen259 +\pgf@zy=\dimen260 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorepathconst +ruct.code.tex +File: pgfcorepathconstruct.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgf@path@lastx=\dimen261 +\pgf@path@lasty=\dimen262 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorepathusage +.code.tex +File: pgfcorepathusage.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgf@shorten@end@additional=\dimen263 +\pgf@shorten@start@additional=\dimen264 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorescopes.co +de.tex +File: pgfcorescopes.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgfpic=\box67 +\pgf@hbox=\box68 +\pgf@layerbox@main=\box69 +\pgf@picture@serial@count=\count319 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoregraphicst +ate.code.tex +File: pgfcoregraphicstate.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgflinewidth=\dimen265 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoretransform +ations.code.tex +File: pgfcoretransformations.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgf@pt@x=\dimen266 +\pgf@pt@y=\dimen267 +\pgf@pt@temp=\dimen268 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorequick.cod +e.tex +File: pgfcorequick.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoreobjects.c 
+ode.tex +File: pgfcoreobjects.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorepathproce +ssing.code.tex +File: pgfcorepathprocessing.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorearrows.co +de.tex +File: pgfcorearrows.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgfarrowsep=\dimen269 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoreshade.cod +e.tex +File: pgfcoreshade.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgf@max=\dimen270 +\pgf@sys@shading@range@num=\count320 +\pgf@shadingcount=\count321 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoreimage.cod +e.tex +File: pgfcoreimage.code.tex 2021/05/15 v3.1.9a (3.1.9a) + +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoreexternal. +code.tex +File: pgfcoreexternal.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgfexternal@startupbox=\box70 +)) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorelayers.co +de.tex +File: pgfcorelayers.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcoretranspare +ncy.code.tex +File: pgfcoretransparency.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorepatterns. +code.tex +File: pgfcorepatterns.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/basiclayer/pgfcorerdf.code. +tex +File: pgfcorerdf.code.tex 2021/05/15 v3.1.9a (3.1.9a) +))) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/modules/pgfmoduleshapes.cod +e.tex +File: pgfmoduleshapes.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgfnodeparttextbox=\box71 +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/modules/pgfmoduleplot.code. 
+tex +File: pgfmoduleplot.code.tex 2021/05/15 v3.1.9a (3.1.9a) +) +(/usr/local/texlive/2022/texmf-dist/tex/latex/pgf/compatibility/pgfcomp-version +-0-65.sty +Package: pgfcomp-version-0-65 2021/05/15 v3.1.9a (3.1.9a) +\pgf@nodesepstart=\dimen271 +\pgf@nodesepend=\dimen272 +) +(/usr/local/texlive/2022/texmf-dist/tex/latex/pgf/compatibility/pgfcomp-version +-1-18.sty +Package: pgfcomp-version-1-18 2021/05/15 v3.1.9a (3.1.9a) +)) (/usr/local/texlive/2022/texmf-dist/tex/latex/tools/verbatim.sty +Package: verbatim 2020-07-07 v1.5u LaTeX2e package for verbatim enhancements +\every@verbatim=\toks36 +\verbatim@line=\toks37 +\verbatim@in@stream=\read4 +) (/usr/local/texlive/2022/texmf-dist/tex/latex/environ/environ.sty +Package: environ 2014/05/04 v0.3 A new way to define environments +(/usr/local/texlive/2022/texmf-dist/tex/latex/trimspaces/trimspaces.sty +Package: trimspaces 2009/09/17 v1.1 Trim spaces around a token list +)) +\tcb@titlebox=\box72 +\tcb@upperbox=\box73 +\tcb@lowerbox=\box74 +\tcb@phantombox=\box75 +\c@tcbbreakpart=\count322 +\c@tcblayer=\count323 +\c@tcolorbox@number=\count324 +\tcb@temp=\box76 +\tcb@temp=\box77 +\tcb@temp=\box78 +\tcb@temp=\box79 +(/usr/local/texlive/2022/texmf-dist/tex/latex/tcolorbox/tcbskins.code.tex +Library (tcolorbox): 'tcbskins.code.tex' version '5.0.2' +(/usr/local/texlive/2022/texmf-dist/tex/latex/pgf/frontendlayer/tikz.sty (/usr/ +local/texlive/2022/texmf-dist/tex/latex/pgf/utilities/pgffor.sty (/usr/local/te +xlive/2022/texmf-dist/tex/latex/pgf/utilities/pgfkeys.sty (/usr/local/texlive/2 +022/texmf-dist/tex/generic/pgf/utilities/pgfkeys.code.tex)) (/usr/local/texlive +/2022/texmf-dist/tex/latex/pgf/math/pgfmath.sty (/usr/local/texlive/2022/texmf- +dist/tex/generic/pgf/math/pgfmath.code.tex)) (/usr/local/texlive/2022/texmf-dis +t/tex/generic/pgf/utilities/pgffor.code.tex +Package: pgffor 2021/05/15 v3.1.9a (3.1.9a) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/math/pgfmath.code.tex) +\pgffor@iter=\dimen273 +\pgffor@skip=\dimen274 +\pgffor@stack=\toks38 +\pgffor@toks=\toks39 +)) +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/frontendlayer/tikz/tikz.cod +e.tex +Package: tikz 2021/05/15 v3.1.9a (3.1.9a) + +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/libraries/pgflibraryplothan +dlers.code.tex +File: pgflibraryplothandlers.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgf@plot@mark@count=\count325 +\pgfplotmarksize=\dimen275 +) +\tikz@lastx=\dimen276 +\tikz@lasty=\dimen277 +\tikz@lastxsaved=\dimen278 +\tikz@lastysaved=\dimen279 +\tikz@lastmovetox=\dimen280 +\tikz@lastmovetoy=\dimen281 +\tikzleveldistance=\dimen282 +\tikzsiblingdistance=\dimen283 +\tikz@figbox=\box80 +\tikz@figbox@bg=\box81 +\tikz@tempbox=\box82 +\tikz@tempbox@bg=\box83 +\tikztreelevel=\count326 +\tikznumberofchildren=\count327 +\tikznumberofcurrentchild=\count328 +\tikz@fig@count=\count329 + +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/modules/pgfmodulematrix.cod +e.tex +File: pgfmodulematrix.code.tex 2021/05/15 v3.1.9a (3.1.9a) +\pgfmatrixcurrentrow=\count330 +\pgfmatrixcurrentcolumn=\count331 +\pgf@matrix@numberofcolumns=\count332 +) +\tikz@expandcount=\count333 + +(/usr/local/texlive/2022/texmf-dist/tex/generic/pgf/frontendlayer/tikz/librarie +s/tikzlibrarytopaths.code.tex +File: tikzlibrarytopaths.code.tex 2021/05/15 v3.1.9a (3.1.9a) +))) +\tcb@waterbox=\box84 + +(/usr/local/texlive/2022/texmf-dist/tex/latex/tcolorbox/tcbskinsjigsaw.code.tex +Library (tcolorbox): 'tcbskinsjigsaw.code.tex' version '5.0.2' +)) 
(/usr/local/texlive/2022/texmf-dist/tex/latex/tcolorbox/tcbbreakable.code.te +x +Library (tcolorbox): 'tcbbreakable.code.tex' version '5.0.2' +(/usr/local/texlive/2022/texmf-dist/tex/generic/oberdiek/pdfcol.sty +Package: pdfcol 2019/12/29 v1.6 Handle new color stacks for pdfTeX (HO) +(/usr/local/texlive/2022/texmf-dist/tex/generic/ltxcmds/ltxcmds.sty +Package: ltxcmds 2020-05-10 v1.25 LaTeX kernel commands for general use (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/generic/infwarerr/infwarerr.sty +Package: infwarerr 2019/12/03 v1.5 Providing info/warning/error messages (HO) +) +Package pdfcol Info: Interface disabled because of missing PDF mode of pdfTeX. +) +Package pdfcol Info: pdfTeX's color stacks are not available. +\tcb@testbox=\box85 +\tcb@totalupperbox=\box86 +\tcb@totallowerbox=\box87 +)) (/usr/local/texlive/2022/texmf-dist/tex/latex/fontawesome5/fontawesome5.sty +Package: fontawesome5 2021/06/04 v5.15.3 Font Awesome 5 + +(/usr/local/texlive/2022/texmf-dist/tex/latex/fontawesome5/fontawesome5-utex-he +lper.sty +Package: fontawesome5-utex-helper 2021/06/04 v5.15.3 uTeX helper for fontawesom +e5 +LaTeX Font Info: Trying to load font information for TU+fontawesomefree on i +nput line 69. + +(/usr/local/texlive/2022/texmf-dist/tex/latex/fontawesome5/tufontawesomefree.fd +) +LaTeX Font Info: Trying to load font information for TU+fontawesomebrands on + input line 70. + +(/usr/local/texlive/2022/texmf-dist/tex/latex/fontawesome5/tufontawesomebrands. +fd))) (/usr/local/texlive/2022/texmf-dist/tex/latex/bookmark/bookmark.sty +Package: bookmark 2020-11-06 v1.29 PDF bookmarks (HO) +(/usr/local/texlive/2022/texmf-dist/tex/latex/hyperref/hyperref.sty +Package: hyperref 2022-02-21 v7.00n Hypertext links for LaTeX +(/usr/local/texlive/2022/texmf-dist/tex/generic/pdftexcmds/pdftexcmds.sty +Package: pdftexcmds 2020-06-27 v0.33 Utility functions of pdfTeX for LuaTeX (HO +) +Package pdftexcmds Info: \pdf@primitive is available. +Package pdftexcmds Info: \pdf@ifprimitive is available. +Package pdftexcmds Info: \pdfdraftmode not found. 
+) (/usr/local/texlive/2022/texmf-dist/tex/generic/kvsetkeys/kvsetkeys.sty +Package: kvsetkeys 2019/12/15 v1.18 Key value parser (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/generic/kvdefinekeys/kvdefinekeys.sty +Package: kvdefinekeys 2019-12-19 v1.6 Define keys (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/generic/pdfescape/pdfescape.sty +Package: pdfescape 2019/12/09 v1.15 Implements pdfTeX's escape features (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/latex/hycolor/hycolor.sty +Package: hycolor 2020-01-27 v1.10 Color options for hyperref/bookmark (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/latex/letltxmacro/letltxmacro.sty +Package: letltxmacro 2019/12/03 v1.6 Let assignment for LaTeX macros (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/latex/auxhook/auxhook.sty +Package: auxhook 2019-12-17 v1.6 Hooks for auxiliary files (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/latex/kvoptions/kvoptions.sty +Package: kvoptions 2020-10-07 v3.14 Key value format for package options (HO) +) +\@linkdim=\dimen284 +\Hy@linkcounter=\count334 +\Hy@pagecounter=\count335 +(/usr/local/texlive/2022/texmf-dist/tex/latex/hyperref/pd1enc.def +File: pd1enc.def 2022-02-21 v7.00n Hyperref: PDFDocEncoding definition (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/generic/intcalc/intcalc.sty +Package: intcalc 2019/12/15 v1.3 Expandable calculations with integers (HO) +) (/usr/local/texlive/2022/texmf-dist/tex/generic/etexcmds/etexcmds.sty +Package: etexcmds 2019/12/15 v1.7 Avoid name clashes with e-TeX commands (HO) +) +\Hy@SavedSpaceFactor=\count336 +(/usr/local/texlive/2022/texmf-dist/tex/latex/hyperref/puenc.def +File: puenc.def 2022-02-21 v7.00n Hyperref: PDF Unicode definition (HO) +) +Package hyperref Info: Option `unicode' set `true' on input line 4018. +Package hyperref Info: Hyper figures OFF on input line 4137. +Package hyperref Info: Link nesting OFF on input line 4142. +Package hyperref Info: Hyper index ON on input line 4145. +Package hyperref Info: Plain pages OFF on input line 4152. +Package hyperref Info: Backreferencing OFF on input line 4157. +Package hyperref Info: Implicit mode ON; LaTeX internals redefined. +Package hyperref Info: Bookmarks ON on input line 4390. +\c@Hy@tempcnt=\count337 +(/usr/local/texlive/2022/texmf-dist/tex/latex/url/url.sty +\Urlmuskip=\muskip18 +Package: url 2013/09/16 ver 3.4 Verb mode for urls, etc. +) +LaTeX Info: Redefining \url on input line 4749. +\XeTeXLinkMargin=\dimen285 +(/usr/local/texlive/2022/texmf-dist/tex/generic/bitset/bitset.sty +Package: bitset 2019/12/09 v1.3 Handle bit-vector datatype (HO) +(/usr/local/texlive/2022/texmf-dist/tex/generic/bigintcalc/bigintcalc.sty +Package: bigintcalc 2019/12/15 v1.5 Expandable calculations on big integers (HO +) +)) +\Fld@menulength=\count338 +\Field@Width=\dimen286 +\Fld@charsize=\dimen287 +Package hyperref Info: Hyper figures OFF on input line 6027. +Package hyperref Info: Link nesting OFF on input line 6032. +Package hyperref Info: Hyper index ON on input line 6035. +Package hyperref Info: backreferencing OFF on input line 6042. +Package hyperref Info: Link coloring OFF on input line 6047. +Package hyperref Info: Link coloring with OCG OFF on input line 6052. +Package hyperref Info: PDF/A mode OFF on input line 6057. +LaTeX Info: Redefining \ref on input line 6097. +LaTeX Info: Redefining \pageref on input line 6101. 
+(/usr/local/texlive/2022/texmf-dist/tex/latex/base/atbegshi-ltx.sty +Package: atbegshi-ltx 2021/01/10 v1.0c Emulation of the original atbegshi +package with kernel methods +) +\Hy@abspage=\count339 +\c@Item=\count340 +\c@Hfootnote=\count341 +) +Package hyperref Info: Driver (autodetected): hxetex. +(/usr/local/texlive/2022/texmf-dist/tex/latex/hyperref/hxetex.def +File: hxetex.def 2022-02-21 v7.00n Hyperref driver for XeTeX +(/usr/local/texlive/2022/texmf-dist/tex/generic/stringenc/stringenc.sty +Package: stringenc 2019/11/29 v1.12 Convert strings between diff. encodings (HO +) +) +\pdfm@box=\box88 +\c@Hy@AnnotLevel=\count342 +\HyField@AnnotCount=\count343 +\Fld@listcount=\count344 +\c@bookmark@seq@number=\count345 + +(/usr/local/texlive/2022/texmf-dist/tex/latex/rerunfilecheck/rerunfilecheck.sty +Package: rerunfilecheck 2019/12/05 v1.9 Rerun checks for auxiliary files (HO) +(/usr/local/texlive/2022/texmf-dist/tex/latex/base/atveryend-ltx.sty +Package: atveryend-ltx 2020/08/19 v1.0a Emulation of the original atveryend pac +kage +with kernel methods +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/uniquecounter/uniquecounter.sty +Package: uniquecounter 2019/12/15 v1.4 Provide unlimited unique counter (HO) +) +Package uniquecounter Info: New unique counter `rerunfilecheck' on input line 2 +86. +) +\Hy@SectionHShift=\skip79 +) (/usr/local/texlive/2022/texmf-dist/tex/latex/bookmark/bkm-dvipdfm.def +File: bkm-dvipdfm.def 2020-11-06 v1.29 bookmark driver for dvipdfm (HO) +\BKM@id=\count346 +)) (/usr/local/texlive/2022/texmf-dist/tex/latex/caption/caption.sty +Package: caption 2022/03/01 v3.6b Customizing captions (AR) +(/usr/local/texlive/2022/texmf-dist/tex/latex/caption/caption3.sty +Package: caption3 2022/03/17 v2.3b caption3 kernel (AR) +\caption@tempdima=\dimen288 +\captionmargin=\dimen289 +\caption@leftmargin=\dimen290 +\caption@rightmargin=\dimen291 +\caption@width=\dimen292 +\caption@indent=\dimen293 +\caption@parindent=\dimen294 +\caption@hangindent=\dimen295 +Package caption Info: KOMA-Script document class detected. +(/usr/local/texlive/2022/texmf-dist/tex/latex/caption/caption-koma.sto +File: caption-koma.sto 2020/09/21 v2.0b Adaption of the caption package to the +KOMA-Script document classes (AR) +)) +\c@caption@flags=\count347 +\c@continuedfloat=\count348 +Package caption Info: hyperref package is loaded. +Package caption Info: longtable package is loaded. +(/usr/local/texlive/2022/texmf-dist/tex/latex/caption/ltcaption.sty +Package: ltcaption 2021/01/08 v1.4c longtable captions (AR) +)) (/usr/local/texlive/2022/texmf-dist/tex/latex/float/float.sty +Package: float 2001/11/08 v1.3d Float enhancements (AL) +\c@float@type=\count349 +\float@exts=\toks40 +\float@box=\box89 +\@float@everytoks=\toks41 +\@floatcapt=\box90 +) +\@float@every@codelisting=\toks42 +\c@codelisting=\count350 +(/usr/local/texlive/2022/texmf-dist/tex/latex/caption/subcaption.sty +Package: subcaption 2022/01/07 v1.5 Sub-captions (AR) +\c@subfigure=\count351 +\c@subtable=\count352 +) (/usr/local/texlive/2022/texmf-dist/tex/latex/xurl/xurl.sty +Package: xurl 2022/01/09 v 0.10 modify URL breaks +) +Package hyperref Info: Option `colorlinks' set `true' on input line 213. +No file index.aux. +\openout1 = `index.aux'. + +LaTeX Font Info: Checking defaults for OML/cmm/m/it on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for OMS/cmsy/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. 
+LaTeX Font Info: Checking defaults for OT1/cmr/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for T1/cmr/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for TS1/cmr/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for TU/lmr/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for OMX/cmex/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for U/cmr/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for PD1/pdf/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +LaTeX Font Info: Checking defaults for PU/pdf/m/n on input line 229. +LaTeX Font Info: ... okay on input line 229. +Package scrbase Info: activating english \contentsname on input line 229. +Package scrbase Info: activating english \listfigurename on input line 229. +Package scrbase Info: activating english \listtablename on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathrm' in version `normal' +(Font) OT1/lmr/m/n --> TU/lmr/m/n on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathit' in version `normal' +(Font) OT1/lmr/m/it --> TU/lmr/m/it on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathbf' in version `normal' +(Font) OT1/lmr/bx/n --> TU/lmr/bx/n on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `normal' +(Font) OT1/lmss/m/n --> TU/lmss/m/n on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathsf' in version `bold' +(Font) OT1/lmss/bx/n --> TU/lmss/bx/n on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `normal' +(Font) OT1/lmtt/m/n --> TU/lmtt/m/n on input line 229. +LaTeX Font Info: Overwriting math alphabet `\mathtt' in version `bold' +(Font) OT1/lmtt/m/n --> TU/lmtt/bx/n on input line 229. + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: Font family 'latinmodern-math.otf(0)' created for font +(fontspec) 'latinmodern-math.otf' with options +(fontspec) [Scale=MatchLowercase,BoldItalicFont={},ItalicFont={},Sm +allCapsFont={},Script=Math,BoldFont={latinmodern-math.otf}]. +(fontspec) +(fontspec) This font family consists of the following NFSS +(fontspec) series/shapes: +(fontspec) +(fontspec) - 'normal' (m/n) with NFSS spec.: +(fontspec) <->s*[0.9999967668407183]"[latinmodern-math.otf]/OT:scri +pt=math;language=dflt;" +(fontspec) - 'small caps' (m/sc) with NFSS spec.: +(fontspec) - 'bold' (b/n) with NFSS spec.: +(fontspec) <->s*[0.9999967668407183]"[latinmodern-math.otf]/OT:scri +pt=math;language=dflt;" +(fontspec) - 'bold small caps' (b/sc) with NFSS spec.: + +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(0)/m/n' will be +(Font) scaled to size 10.95pt on input line 229. + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. 
+ + +Package fontspec Info: Font family 'latinmodern-math.otf(1)' created for font +(fontspec) 'latinmodern-math.otf' with options +(fontspec) [Scale=MatchLowercase,BoldItalicFont={},ItalicFont={},Sm +allCapsFont={},Script=Math,SizeFeatures={{Size=9.3075-},{Size=6.57-9.3075,Font= +latinmodern-math.otf,Style=MathScript},{Size=-6.57,Font=latinmodern-math.otf,St +yle=MathScriptScript}},BoldFont={latinmodern-math.otf}]. +(fontspec) +(fontspec) This font family consists of the following NFSS +(fontspec) series/shapes: +(fontspec) +(fontspec) - 'normal' (m/n) with NFSS spec.: +(fontspec) <9.3075->s*[0.9999967668407183]"[latinmodern-math.otf]/O +T:script=math;language=dflt;"<6.57-9.3075>s*[0.9999967668407183]"[latinmodern-m +ath.otf]/OT:script=math;language=dflt;+ssty=0;"<-6.57>s*[0.9999967668407183]"[l +atinmodern-math.otf]/OT:script=math;language=dflt;+ssty=1;" +(fontspec) - 'small caps' (m/sc) with NFSS spec.: +(fontspec) - 'bold' (b/n) with NFSS spec.: +(fontspec) <->s*[0.9999967668407183]"[latinmodern-math.otf]/OT:scri +pt=math;language=dflt;" +(fontspec) - 'bold small caps' (b/sc) with NFSS spec.: + +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be +(Font) scaled to size 10.95pt on input line 229. +LaTeX Font Info: Encoding `OT1' has changed to `TU' for symbol font +(Font) `operators' in the math version `normal' on input line 229. + +LaTeX Font Info: Overwriting symbol font `operators' in version `normal' +(Font) OT1/lmr/m/n --> TU/latinmodern-math.otf(1)/m/n on input + line 229. +LaTeX Font Info: Encoding `OT1' has changed to `TU' for symbol font +(Font) `operators' in the math version `bold' on input line 229. +LaTeX Font Info: Overwriting symbol font `operators' in version `bold' +(Font) OT1/lmr/bx/n --> TU/latinmodern-math.otf(1)/b/n on inpu +t line 229. + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 1.000096766517402. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 1.000096766517402. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 1.000096766517402. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 1.000096766517402. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 1.000096766517402. 
+ + +Package fontspec Info: Font family 'latinmodern-math.otf(2)' created for font +(fontspec) 'latinmodern-math.otf' with options +(fontspec) [Scale=MatchLowercase,BoldItalicFont={},ItalicFont={},Sm +allCapsFont={},Script=Math,SizeFeatures={{Size=9.3075-},{Size=6.57-9.3075,Font= +latinmodern-math.otf,Style=MathScript},{Size=-6.57,Font=latinmodern-math.otf,St +yle=MathScriptScript}},BoldFont={latinmodern-math.otf},ScaleAgain=1.0001,FontAd +justment={\fontdimen +(fontspec) 8\font =7.41315pt\relax \fontdimen 9\font +(fontspec) =4.3143pt\relax \fontdimen 10\font =4.8618pt\relax +(fontspec) \fontdimen 11\font =7.5117pt\relax \fontdimen 12\font +(fontspec) =3.77776pt\relax \fontdimen 13\font =3.97485pt\relax +(fontspec) \fontdimen 14\font =3.97485pt\relax \fontdimen 15\font +(fontspec) =3.16455pt\relax \fontdimen 16\font =2.70465pt\relax +(fontspec) \fontdimen 17\font =2.70465pt\relax \fontdimen 18\font +(fontspec) =2.7375pt\relax \fontdimen 19\font =2.19pt\relax +(fontspec) \fontdimen 22\font =2.7375pt\relax \fontdimen 20\font +(fontspec) =0pt\relax \fontdimen 21\font =0pt\relax }]. +(fontspec) +(fontspec) This font family consists of the following NFSS +(fontspec) series/shapes: +(fontspec) +(fontspec) - 'normal' (m/n) with NFSS spec.: +(fontspec) <9.3075->s*[1.000096766517402]"[latinmodern-math.otf]/OT +:script=math;language=dflt;"<6.57-9.3075>s*[1.000096766517402]"[latinmodern-mat +h.otf]/OT:script=math;language=dflt;+ssty=0;"<-6.57>s*[1.000096766517402]"[lati +nmodern-math.otf]/OT:script=math;language=dflt;+ssty=1;" +(fontspec) - 'small caps' (m/sc) with NFSS spec.: +(fontspec) and font adjustment code: +(fontspec) \fontdimen 8\font =7.41315pt\relax \fontdimen 9\font +(fontspec) =4.3143pt\relax \fontdimen 10\font =4.8618pt\relax +(fontspec) \fontdimen 11\font =7.5117pt\relax \fontdimen 12\font +(fontspec) =3.77776pt\relax \fontdimen 13\font =3.97485pt\relax +(fontspec) \fontdimen 14\font =3.97485pt\relax \fontdimen 15\font +(fontspec) =3.16455pt\relax \fontdimen 16\font =2.70465pt\relax +(fontspec) \fontdimen 17\font =2.70465pt\relax \fontdimen 18\font +(fontspec) =2.7375pt\relax \fontdimen 19\font =2.19pt\relax +(fontspec) \fontdimen 22\font =2.7375pt\relax \fontdimen 20\font +(fontspec) =0pt\relax \fontdimen 21\font =0pt\relax +(fontspec) - 'bold' (b/n) with NFSS spec.: +(fontspec) <->s*[1.000096766517402]"[latinmodern-math.otf]/OT:scrip +t=math;language=dflt;" +(fontspec) - 'bold small caps' (b/sc) with NFSS spec.: +(fontspec) and font adjustment code: +(fontspec) \fontdimen 8\font =7.41315pt\relax \fontdimen 9\font +(fontspec) =4.3143pt\relax \fontdimen 10\font =4.8618pt\relax +(fontspec) \fontdimen 11\font =7.5117pt\relax \fontdimen 12\font +(fontspec) =3.77776pt\relax \fontdimen 13\font =3.97485pt\relax +(fontspec) \fontdimen 14\font =3.97485pt\relax \fontdimen 15\font +(fontspec) =3.16455pt\relax \fontdimen 16\font =2.70465pt\relax +(fontspec) \fontdimen 17\font =2.70465pt\relax \fontdimen 18\font +(fontspec) =2.7375pt\relax \fontdimen 19\font =2.19pt\relax +(fontspec) \fontdimen 22\font =2.7375pt\relax \fontdimen 20\font +(fontspec) =0pt\relax \fontdimen 21\font =0pt\relax + +LaTeX Font Info: Encoding `OMS' has changed to `TU' for symbol font +(Font) `symbols' in the math version `normal' on input line 229. +LaTeX Font Info: Overwriting symbol font `symbols' in version `normal' +(Font) OMS/lmsy/m/n --> TU/latinmodern-math.otf(2)/m/n on inpu +t line 229. 
+LaTeX Font Info: Encoding `OMS' has changed to `TU' for symbol font +(Font) `symbols' in the math version `bold' on input line 229. +LaTeX Font Info: Overwriting symbol font `symbols' in version `bold' +(Font) OMS/lmsy/b/n --> TU/latinmodern-math.otf(2)/b/n on inpu +t line 229. + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9998967671640342. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9998967671640342. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9998967671640342. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9998967671640342. + + +Package fontspec Info: latinmodern-math scale = 0.9999967668407183. + + +Package fontspec Info: latinmodern-math scale = 0.9998967671640342. + + +Package fontspec Info: Font family 'latinmodern-math.otf(3)' created for font +(fontspec) 'latinmodern-math.otf' with options +(fontspec) [Scale=MatchLowercase,BoldItalicFont={},ItalicFont={},Sm +allCapsFont={},Script=Math,SizeFeatures={{Size=9.3075-},{Size=6.57-9.3075,Font= +latinmodern-math.otf,Style=MathScript},{Size=-6.57,Font=latinmodern-math.otf,St +yle=MathScriptScript}},BoldFont={latinmodern-math.otf},ScaleAgain=0.9999,FontAd +justment={\fontdimen +(fontspec) 8\font =0.438pt\relax \fontdimen 9\font =2.19pt\relax +(fontspec) \fontdimen 10\font =1.82864pt\relax \fontdimen 11\font +(fontspec) =1.21545pt\relax \fontdimen 12\font =6.56999pt\relax +(fontspec) \fontdimen 13\font =0pt\relax }]. +(fontspec) +(fontspec) This font family consists of the following NFSS +(fontspec) series/shapes: +(fontspec) +(fontspec) - 'normal' (m/n) with NFSS spec.: +(fontspec) <9.3075->s*[0.9998967671640342]"[latinmodern-math.otf]/O +T:script=math;language=dflt;"<6.57-9.3075>s*[0.9998967671640342]"[latinmodern-m +ath.otf]/OT:script=math;language=dflt;+ssty=0;"<-6.57>s*[0.9998967671640342]"[l +atinmodern-math.otf]/OT:script=math;language=dflt;+ssty=1;" +(fontspec) - 'small caps' (m/sc) with NFSS spec.: +(fontspec) and font adjustment code: +(fontspec) \fontdimen 8\font =0.438pt\relax \fontdimen 9\font +(fontspec) =2.19pt\relax \fontdimen 10\font =1.82864pt\relax +(fontspec) \fontdimen 11\font =1.21545pt\relax \fontdimen 12\font +(fontspec) =6.56999pt\relax \fontdimen 13\font =0pt\relax +(fontspec) - 'bold' (b/n) with NFSS spec.: +(fontspec) <->s*[0.9998967671640342]"[latinmodern-math.otf]/OT:scri +pt=math;language=dflt;" +(fontspec) - 'bold small caps' (b/sc) with NFSS spec.: +(fontspec) and font adjustment code: +(fontspec) \fontdimen 8\font =0.438pt\relax \fontdimen 9\font +(fontspec) =2.19pt\relax \fontdimen 10\font =1.82864pt\relax +(fontspec) \fontdimen 11\font =1.21545pt\relax \fontdimen 12\font +(fontspec) =6.56999pt\relax \fontdimen 13\font =0pt\relax + +LaTeX Font Info: Encoding `OMX' has changed to `TU' for symbol font +(Font) `largesymbols' in the math version `normal' on input line 2 +29. +LaTeX Font Info: Overwriting symbol font `largesymbols' in version `normal' +(Font) OMX/lmex/m/n --> TU/latinmodern-math.otf(3)/m/n on inpu +t line 229. +LaTeX Font Info: Encoding `OMX' has changed to `TU' for symbol font +(Font) `largesymbols' in the math version `bold' on input line 229 +. 
+LaTeX Font Info: Overwriting symbol font `largesymbols' in version `bold' +(Font) OMX/lmex/m/n --> TU/latinmodern-math.otf(3)/b/n on inpu +t line 229. +LaTeX Info: Redefining \microtypecontext on input line 229. +Package microtype Info: Applying patch `item' on input line 229. +Package microtype Info: Applying patch `toc' on input line 229. +Package microtype Info: Applying patch `eqnum' on input line 229. +Package microtype Info: Applying patch `footnote' on input line 229. +Package microtype Info: Character protrusion enabled (level 2). +Package microtype Info: Using protrusion set `basicmath'. +Package microtype Info: No adjustment of tracking. +Package microtype Info: No adjustment of spacing. +Package microtype Info: No adjustment of kerning. + +(/usr/local/texlive/2022/texmf-dist/tex/latex/microtype/mt-LatinModernRoman.cfg +File: mt-LatinModernRoman.cfg 2021/02/21 v1.1 microtype config. file: Latin Mod +ern Roman (RS) +) +Package hyperref Info: Link coloring ON on input line 229. +(/usr/local/texlive/2022/texmf-dist/tex/latex/hyperref/nameref.sty +Package: nameref 2021-04-02 v2.47 Cross-referencing by name of section +(/usr/local/texlive/2022/texmf-dist/tex/latex/refcount/refcount.sty +Package: refcount 2019/12/15 v3.6 Data extraction from label references (HO) +) +(/usr/local/texlive/2022/texmf-dist/tex/generic/gettitlestring/gettitlestring.s +ty +Package: gettitlestring 2019/12/15 v1.6 Cleanup title references (HO) +) +\c@section@level=\count353 +) +LaTeX Info: Redefining \ref on input line 229. +LaTeX Info: Redefining \pageref on input line 229. +LaTeX Info: Redefining \nameref on input line 229. + +Package hyperref Warning: Rerun to get /PageLabels entry. + +Package caption Info: Begin \AtBeginDocument code. +Package caption Info: float package is loaded. +Package caption Info: End \AtBeginDocument code. +Package microtype Info: Loading generic protrusion settings for font family +(microtype) `lmss' (encoding: TU). +(microtype) For optimal results, create family-specific settings. +(microtype) See the microtype manual for details. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be +(Font) scaled to size 14.4pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be +(Font) scaled to size 10.0pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be +(Font) scaled to size 7.0pt on input line 231. +LaTeX Font Info: Trying to load font information for OML+lmm on input line 2 +31. +(/usr/local/texlive/2022/texmf-dist/tex/latex/lm/omllmm.fd +File: omllmm.fd 2015/05/01 v1.6.1 Font defs for Latin Modern +) +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be +(Font) scaled to size 14.4013pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be +(Font) scaled to size 10.00092pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be +(Font) scaled to size 7.00064pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be +(Font) scaled to size 14.39845pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be +(Font) scaled to size 9.99893pt on input line 231. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be +(Font) scaled to size 6.99925pt on input line 231. +LaTeX Font Info: Trying to load font information for U+msa on input line 231 +. 
+(/usr/local/texlive/2022/texmf-dist/tex/latex/amsfonts/umsa.fd +File: umsa.fd 2013/01/14 v3.01 AMS symbols A +) (/usr/local/texlive/2022/texmf-dist/tex/latex/microtype/mt-msa.cfg +File: mt-msa.cfg 2006/02/04 v1.1 microtype config. file: AMS symbols (a) (RS) +) +LaTeX Font Info: Trying to load font information for U+msb on input line 231 +. +(/usr/local/texlive/2022/texmf-dist/tex/latex/amsfonts/umsb.fd +File: umsb.fd 2013/01/14 v3.01 AMS symbols B +) (/usr/local/texlive/2022/texmf-dist/tex/latex/microtype/mt-msb.cfg +File: mt-msb.cfg 2005/06/01 v1.0 microtype config. file: AMS symbols (b) (RS) +) [1 + + + +] +Package tocbasic Info: character protrusion at toc deactivated on input line 23 +6. +\tf@toc=\write5 +\openout5 = `index.toc'. + +[2 + +] +Underfull \hbox (badness 1910) in paragraph at lines 259--261 +[]\TU/lmr/m/n/10.95 If you spot any typos or would like to suggest any changes, + please email us at + [] + +[3 + +] +chapter 1. + +Class scrreprt Warning: \float@addtolists detected! +(scrreprt) Implementation of \float@addtolist became +(scrreprt) deprecated in KOMA-Script v3.01 2008/11/14 and +(scrreprt) has been replaced by several more flexible +(scrreprt) features of package `tocbasic`. +(scrreprt) Since Version 3.12 support for deprecated +(scrreprt) \float@addtolist interface has been +(scrreprt) restricted to only some of the KOMA-Script +(scrreprt) features and been removed from others. +(scrreprt) Loading of package `scrhack' may help to +(scrreprt) avoid this warning, if you are using a +(scrreprt) a package that still implements the +(scrreprt) deprecated \float@addtolist interface. + +(/usr/local/texlive/2022/texmf-dist/tex/latex/microtype/mt-TU-empty.cfg +File: mt-TU-empty.cfg 2021/06/22 v1.1 microtype config. file: fonts with nonsta +ndard glyph set (RS) +) +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be +(Font) scaled to size 7.665pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(1)/m/n' will be +(Font) scaled to size 5.475pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be +(Font) scaled to size 10.95099pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be +(Font) scaled to size 7.66568pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(2)/m/n' will be +(Font) scaled to size 5.4755pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be +(Font) scaled to size 10.94882pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be +(Font) scaled to size 7.66417pt on input line 287. +LaTeX Font Info: Font shape `TU/latinmodern-math.otf(3)/m/n' will be +(Font) scaled to size 5.47441pt on input line 287. +[4 + +] [5] [6] [7] +Underfull \hbox (badness 10000) in paragraph at lines 559--561 + + [] + + +Underfull \hbox (badness 10000) in paragraph at lines 568--570 + + [] + +[8] +chapter 2. +[9 + +] +LaTeX Font Info: Font shape `TU/lmtt/bx/n' in size <14.4> not available +(Font) Font shape `TU/lmtt/b/n' tried instead on input line 673. +LaTeX Font Info: Font shape `TU/lmtt/bx/n' in size <10.95> not available +(Font) Font shape `TU/lmtt/b/n' tried instead on input line 692. +[10] [11] [12] +LaTeX Font Info: Font shape `TU/lmtt/bx/n' in size <12> not available +(Font) Font shape `TU/lmtt/b/n' tried instead on input line 918. +[13] +Missing character: There is no   (U+2003) in font [lmroman10-regular]:mapping=t +ex-text;! 
+ +Overfull \hbox (12.53052pt too wide) in alignment at lines 979--994 + [] [] [] [] [] [] [] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.1 on input line 994. + +[14] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.2 on input line 1029. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.3 on input line 1052. + +[15] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.4 on input line 1083. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.5 on input line 1106. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.6 on input line 1148. + +[16] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.7 on input line 1166. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.8 on input line 1190. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.9 on input line 1229. + +[17] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.10 on input line 1266. + +[18] +Overfull \hbox (36.13486pt too wide) in paragraph at lines 1330--1330 +[]\TU/lmtt/m/n/10.95 Index([[]index[], []Candidate[], []Year[], []Popular vote[ +], []Result[], []%[]], dtype=[]object[])[] + [] + +[19] +Overfull \hbox (1.58052pt too wide) in alignment at lines 1415--1424 + [] [] [] [] [] [] [] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.11 on input line 1424. + +[20] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.12 on input line 1448. + +[21] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.13 on input line 1531. + + +Overfull \hbox (1.58052pt too wide) in alignment at lines 1549--1557 + [] [] [] [] [] [] [] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.14 on input line 1557. + +[22] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.15 on input line 1585. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.16 on input line 1614. + + +Overfull \hbox (1.58052pt too wide) in alignment at lines 1630--1638 + [] [] [] [] [] [] [] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.17 on input line 1638. + +[23] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.18 on input line 1709. + +[24] + +Package longtable Warning: Column widths have changed +(longtable) in table 2.19 on input line 1736. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.20 on input line 1765. + +[25] +Overfull \hbox (1.58052pt too wide) in alignment at lines 1824--1832 + [] [] [] [] [] [] [] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.21 on input line 1832. + + +Package longtable Warning: Column widths have changed +(longtable) in table 2.22 on input line 1862. + +[26] [27] +chapter 3. +[28 + +] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.1 on input line 1984. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.2 on input line 2026. + +[29] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.3 on input line 2048. 
+ + +Package longtable Warning: Column widths have changed +(longtable) in table 3.4 on input line 2080. + +[30] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.5 on input line 2146. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.6 on input line 2163. + +[31] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.7 on input line 2191. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.8 on input line 2225. + +[32] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.9 on input line 2255. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.10 on input line 2300. + +[33] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.11 on input line 2341. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.12 on input line 2379. + +[34] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.13 on input line 2406. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.14 on input line 2431. + +[35] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.15 on input line 2460. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.16 on input line 2494. + +[36] [37] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.17 on input line 2643. + +[38] +Underfull \hbox (badness 1845) in paragraph at lines 2671--2674 +[]\TU/lmr/m/n/10.95 By default, \TU/lmtt/m/n/10.95 .sample() \TU/lmr/m/n/10.95 +selects entries \TU/lmr/m/it/10.95 without \TU/lmr/m/n/10.95 replacement. Pass +in the argument + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.18 on input line 2690. + + +Package longtable Warning: Column widths have changed +(longtable) in table 3.19 on input line 2714. + +[39] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.20 on input line 2734. + +[40] + +Package longtable Warning: Column widths have changed +(longtable) in table 3.21 on input line 2816. + +[41] +chapter 4. +[42 + +] + +Package longtable Warning: Column widths have changed +(longtable) in table 4.1 on input line 2931. + + +Package longtable Warning: Column widths have changed +(longtable) in table 4.2 on input line 2962. + +[43] + +Package longtable Warning: Column widths have changed +(longtable) in table 4.3 on input line 2987. + + +Package longtable Warning: Column widths have changed +(longtable) in table 4.4 on input line 3012. + +[44] + +Package longtable Warning: Column widths have changed +(longtable) in table 4.5 on input line 3039. + + +Package longtable Warning: Column widths have changed +(longtable) in table 4.6 on input line 3076. + +[45] + +Package longtable Warning: Column widths have changed +(longtable) in table 4.7 on input line 3101. + +[46] +Overfull \hbox (105.11981pt too wide) in paragraph at lines 3160--3160 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/2718070104.py:1: FutureWarning:[] + [] + + +Overfull \hbox (806.4672pt too wide) in paragraph at lines 3162--3162 +[]\TU/lmtt/m/n/10.95 The provided callable is currently + using DataFrameGroupBy.sum. In a future version of pandas, the provided callab +le will be used directly. 
To keep current behavior pass the string "sum" instea +d.[] + [] + +File: pandas_3/images/agg.png Graphic file (type bmp) + +[47] +Overfull \hbox (93.62231pt too wide) in paragraph at lines 3209--3209 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/86785752.py:1: FutureWarning:[] + [] + + +Overfull \hbox (806.4672pt too wide) in paragraph at lines 3211--3211 +[]\TU/lmtt/m/n/10.95 The provided callable is currently + using DataFrameGroupBy.min. In a future version of pandas, the provided callab +le will be used directly. To keep current behavior pass the string "min" instea +d.[] + [] + + +Overfull \hbox (105.11981pt too wide) in paragraph at lines 3236--3236 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/3032256904.py:1: FutureWarning:[] + [] + + +Overfull \hbox (806.4672pt too wide) in paragraph at lines 3238--3238 +[]\TU/lmtt/m/n/10.95 The provided callable is currently + using DataFrameGroupBy.max. In a future version of pandas, the provided callab +le will be used directly. To keep current behavior pass the string "max" instea +d.[] + [] + + +Overfull \hbox (105.11981pt too wide) in paragraph at lines 3264--3264 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/1958904241.py:2: FutureWarning:[] + [] + + +Overfull \hbox (806.4672pt too wide) in paragraph at lines 3266--3266 +[]\TU/lmtt/m/n/10.95 The provided callable is currently + using DataFrameGroupBy.sum. In a future version of pandas, the provided callab +le will be used directly. To keep current behavior pass the string "sum" instea +d.[] + [] + +[48] +Overfull \hbox (105.11981pt too wide) in paragraph at lines 3313--3313 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/3244314896.py:2: FutureWarning:[] + [] + + +Overfull \hbox (806.4672pt too wide) in paragraph at lines 3315--3315 +[]\TU/lmtt/m/n/10.95 The provided callable is currently + using DataFrameGroupBy.min. In a future version of pandas, the provided callab +le will be used directly. To keep current behavior pass the string "min" instea +d.[] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 4.12 on input line 3331. + +[49] +Overfull \hbox (105.11981pt too wide) in paragraph at lines 3341--3341 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/3805876622.py:2: FutureWarning:[] + [] + + +Overfull \hbox (806.4672pt too wide) in paragraph at lines 3343--3343 +[]\TU/lmtt/m/n/10.95 The provided callable is currently + using DataFrameGroupBy.max. In a future version of pandas, the provided callab +le will be used directly. To keep current behavior pass the string "max" instea +d.[] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 4.13 on input line 3359. + + +Overfull \hbox (99.37106pt too wide) in paragraph at lines 3373--3373 +[]\TU/lmtt/m/n/10.95 /var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel +_57880/308986604.py:2: FutureWarning:[] + [] + + +Overfull \hbox (858.20592pt too wide) in paragraph at lines 3375--3375 +[]\TU/lmtt/m/n/10.95 The provided callable is cu +rrently using DataFrameGroupBy.mean. In a future version of pandas, the provide +d callable will be used directly. To keep current behavior pass the string "mea +n" instead.[] + [] + + +Package longtable Warning: Column widths have changed +(longtable) in table 4.14 on input line 3391. 
[Remainder of the XeLaTeX build log omitted. The tail of the log consists of repeated "Package longtable" column-width warnings and "Overfull \hbox" warnings for chapters 4 through 6, mostly triggered by wide verbatim output (including pandas FutureWarnings about passing callables such as sum and max to groupby instead of the strings "sum"/"max") and long log/dictionary lines in the EDA and regex chapters, plus graphic-inclusion messages. The run ends with one "Undefined control sequence" error at l.8612 (a literal \d, \w, \s in the regex chapter) and the summary "Output written on index.pdf (131 pages)."]
diff --git a/index.pdf b/index.pdf new file mode 100644 index 000000000..78c45eac5 Binary files /dev/null and b/index.pdf differ diff --git a/index.tex b/index.tex new file mode 100644 index 000000000..adf28f446 --- /dev/null +++ b/index.tex @@ -0,0 +1,25896 @@ +% Options for packages loaded elsewhere +\PassOptionsToPackage{unicode}{hyperref} +\PassOptionsToPackage{hyphens}{url} +\PassOptionsToPackage{dvipsnames,svgnames,x11names}{xcolor} +% +\documentclass[ + letterpaper, + DIV=11, + numbers=noendperiod]{scrreprt} + +\usepackage{amsmath,amssymb} +\usepackage{iftex} +\ifPDFTeX + \usepackage[T1]{fontenc} + \usepackage[utf8]{inputenc} + \usepackage{textcomp} % provide euro and other symbols +\else % if luatex or xetex + \usepackage{unicode-math} + \defaultfontfeatures{Scale=MatchLowercase} + \defaultfontfeatures[\rmfamily]{Ligatures=TeX,Scale=1} +\fi +\usepackage{lmodern} +\ifPDFTeX\else + % xetex/luatex font selection +\fi +% Use upquote if available, for straight quotes in verbatim environments +\IfFileExists{upquote.sty}{\usepackage{upquote}}{} +\IfFileExists{microtype.sty}{% use microtype if available + \usepackage[]{microtype} + \UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts +}{} +\makeatletter +\@ifundefined{KOMAClassName}{% if non-KOMA class + \IfFileExists{parskip.sty}{% + \usepackage{parskip} + }{% else + \setlength{\parindent}{0pt} + \setlength{\parskip}{6pt plus 2pt minus 1pt}} +}{% if KOMA class + \KOMAoptions{parskip=half}} +\makeatother +\usepackage{xcolor} +\setlength{\emergencystretch}{3em} % prevent overfull lines +\setcounter{secnumdepth}{5} +% Make \paragraph and \subparagraph free-standing +\makeatletter +\ifx\paragraph\undefined\else + \let\oldparagraph\paragraph + \renewcommand{\paragraph}{ + \@ifstar + \xxxParagraphStar + \xxxParagraphNoStar + } + \newcommand{\xxxParagraphStar}[1]{\oldparagraph*{#1}\mbox{}} + \newcommand{\xxxParagraphNoStar}[1]{\oldparagraph{#1}\mbox{}} +\fi +\ifx\subparagraph\undefined\else + \let\oldsubparagraph\subparagraph + \renewcommand{\subparagraph}{ + \@ifstar + \xxxSubParagraphStar + \xxxSubParagraphNoStar + } + \newcommand{\xxxSubParagraphStar}[1]{\oldsubparagraph*{#1}\mbox{}} + \newcommand{\xxxSubParagraphNoStar}[1]{\oldsubparagraph{#1}\mbox{}} +\fi +\makeatother + +\usepackage{color} +\usepackage{fancyvrb} +\newcommand{\VerbBar}{|} +\newcommand{\VERB}{\Verb[commandchars=\\\{\}]} +\DefineVerbatimEnvironment{Highlighting}{Verbatim}{commandchars=\\\{\}} +% Add ',fontsize=\small' for more characters per line +\usepackage{framed} +\definecolor{shadecolor}{RGB}{241,243,245} +\newenvironment{Shaded}{\begin{snugshade}}{\end{snugshade}} +\newcommand{\AlertTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\AnnotationTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{#1}} +\newcommand{\AttributeTok}[1]{\textcolor[rgb]{0.40,0.45,0.13}{#1}} +\newcommand{\BaseNTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\BuiltInTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{#1}} +\newcommand{\CharTok}[1]{\textcolor[rgb]{0.13,0.47,0.30}{#1}} +\newcommand{\CommentTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{#1}} +\newcommand{\CommentVarTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{\textit{#1}}} +\newcommand{\ConstantTok}[1]{\textcolor[rgb]{0.56,0.35,0.01}{#1}} +\newcommand{\ControlFlowTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{\textbf{#1}}} +\newcommand{\DataTypeTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\DecValTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\DocumentationTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{\textit{#1}}} 
+\newcommand{\ErrorTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\ExtensionTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{#1}} +\newcommand{\FloatTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\FunctionTok}[1]{\textcolor[rgb]{0.28,0.35,0.67}{#1}} +\newcommand{\ImportTok}[1]{\textcolor[rgb]{0.00,0.46,0.62}{#1}} +\newcommand{\InformationTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{#1}} +\newcommand{\KeywordTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{\textbf{#1}}} +\newcommand{\NormalTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{#1}} +\newcommand{\OperatorTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{#1}} +\newcommand{\OtherTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{#1}} +\newcommand{\PreprocessorTok}[1]{\textcolor[rgb]{0.68,0.00,0.00}{#1}} +\newcommand{\RegionMarkerTok}[1]{\textcolor[rgb]{0.00,0.23,0.31}{#1}} +\newcommand{\SpecialCharTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{#1}} +\newcommand{\SpecialStringTok}[1]{\textcolor[rgb]{0.13,0.47,0.30}{#1}} +\newcommand{\StringTok}[1]{\textcolor[rgb]{0.13,0.47,0.30}{#1}} +\newcommand{\VariableTok}[1]{\textcolor[rgb]{0.07,0.07,0.07}{#1}} +\newcommand{\VerbatimStringTok}[1]{\textcolor[rgb]{0.13,0.47,0.30}{#1}} +\newcommand{\WarningTok}[1]{\textcolor[rgb]{0.37,0.37,0.37}{\textit{#1}}} + +\providecommand{\tightlist}{% + \setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}\usepackage{longtable,booktabs,array} +\usepackage{multirow} +\usepackage{calc} % for calculating minipage widths +% Correct order of tables after \paragraph or \subparagraph +\usepackage{etoolbox} +\makeatletter +\patchcmd\longtable{\par}{\if@noskipsec\mbox{}\fi\par}{}{} +\makeatother +% Allow footnotes in longtable head/foot +\IfFileExists{footnotehyper.sty}{\usepackage{footnotehyper}}{\usepackage{footnote}} +\makesavenoteenv{longtable} +\usepackage{graphicx} +\makeatletter +\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi} +\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi} +\makeatother +% Scale images if necessary, so that they will not overflow the page +% margins by default, and it is still possible to overwrite the defaults +% using explicit options in \includegraphics[width, height, ...]{} +\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio} +% Set default figure placement to htbp +\makeatletter +\def\fps@figure{htbp} +\makeatother + +\KOMAoption{captions}{tableheading} +\makeatletter +\@ifpackageloaded{tcolorbox}{}{\usepackage[skins,breakable]{tcolorbox}} +\@ifpackageloaded{fontawesome5}{}{\usepackage{fontawesome5}} +\definecolor{quarto-callout-color}{HTML}{909090} +\definecolor{quarto-callout-note-color}{HTML}{0758E5} +\definecolor{quarto-callout-important-color}{HTML}{CC1914} +\definecolor{quarto-callout-warning-color}{HTML}{EB9113} +\definecolor{quarto-callout-tip-color}{HTML}{00A047} +\definecolor{quarto-callout-caution-color}{HTML}{FC5300} +\definecolor{quarto-callout-color-frame}{HTML}{acacac} +\definecolor{quarto-callout-note-color-frame}{HTML}{4582ec} +\definecolor{quarto-callout-important-color-frame}{HTML}{d9534f} +\definecolor{quarto-callout-warning-color-frame}{HTML}{f0ad4e} +\definecolor{quarto-callout-tip-color-frame}{HTML}{02b875} +\definecolor{quarto-callout-caution-color-frame}{HTML}{fd7e14} +\makeatother +\makeatletter +\@ifpackageloaded{bookmark}{}{\usepackage{bookmark}} +\makeatother +\makeatletter +\@ifpackageloaded{caption}{}{\usepackage{caption}} +\AtBeginDocument{% +\ifdefined\contentsname + \renewcommand*\contentsname{Table of contents} +\else + \newcommand\contentsname{Table of contents} 
+\fi +\ifdefined\listfigurename + \renewcommand*\listfigurename{List of Figures} +\else + \newcommand\listfigurename{List of Figures} +\fi +\ifdefined\listtablename + \renewcommand*\listtablename{List of Tables} +\else + \newcommand\listtablename{List of Tables} +\fi +\ifdefined\figurename + \renewcommand*\figurename{Figure} +\else + \newcommand\figurename{Figure} +\fi +\ifdefined\tablename + \renewcommand*\tablename{Table} +\else + \newcommand\tablename{Table} +\fi +} +\@ifpackageloaded{float}{}{\usepackage{float}} +\floatstyle{ruled} +\@ifundefined{c@chapter}{\newfloat{codelisting}{h}{lop}}{\newfloat{codelisting}{h}{lop}[chapter]} +\floatname{codelisting}{Listing} +\newcommand*\listoflistings{\listof{codelisting}{List of Listings}} +\makeatother +\makeatletter +\makeatother +\makeatletter +\@ifpackageloaded{caption}{}{\usepackage{caption}} +\@ifpackageloaded{subcaption}{}{\usepackage{subcaption}} +\makeatother + +\ifLuaTeX + \usepackage{selnolig} % disable illegal ligatures +\fi +\usepackage{bookmark} + +\IfFileExists{xurl.sty}{\usepackage{xurl}}{} % add URL line breaks if available +\urlstyle{same} % disable monospaced font for URLs +\hypersetup{ + pdftitle={Principles and Techniques of Data Science}, + pdfauthor={Bella Crouch; Yash Dave; Ian Dong; Kanu Grover; Ishani Gupta; Minh Phan; Nikhil Reddy; Milad Shafaie; Matthew Shen; Lillian Weng}, + colorlinks=true, + linkcolor={blue}, + filecolor={Maroon}, + citecolor={Blue}, + urlcolor={Blue}, + pdfcreator={LaTeX via pandoc}} + + +\title{Principles and Techniques of Data Science} +\usepackage{etoolbox} +\makeatletter +\providecommand{\subtitle}[1]{% add subtitle to \maketitle + \apptocmd{\@title}{\par {\large #1 \par}}{}{} +} +\makeatother +\subtitle{Data 100} +\author{Bella Crouch \and Yash Dave \and Ian Dong \and Kanu +Grover \and Ishani Gupta \and Minh Phan \and Nikhil Reddy \and Milad +Shafaie \and Matthew Shen \and Lillian Weng} +\date{} + +\begin{document} +\maketitle + +\renewcommand*\contentsname{Table of contents} +{ +\hypersetup{linkcolor=} +\setcounter{tocdepth}{2} +\tableofcontents +} + +\bookmarksetup{startatroot} + +\chapter*{Welcome}\label{welcome} +\addcontentsline{toc}{chapter}{Welcome} + +\markboth{Welcome}{Welcome} + +\section*{About the Course Notes}\label{about-the-course-notes} +\addcontentsline{toc}{section}{About the Course Notes} + +\markright{About the Course Notes} + +This text offers supplementary resources to accompany lectures presented +in the Fall 2024 Edition of the UC Berkeley course Data 100: Principles +and Techniques of Data Science. + +New notes will be added each week to accompany live lectures. See the +full calendar of lectures on the \href{https://ds100.org/fa24/}{course +website}. + +If you spot any typos or would like to suggest any changes, please email +us at \textbf{data100.instructors@berkeley.edu}. 
+
+\bookmarksetup{startatroot}
+
+\chapter{Introduction}\label{introduction}
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+\begin{itemize}
+\tightlist
+\item
+  Acquaint yourself with the overarching goals of Data 100
+\item
+  Understand the stages of the data science lifecycle
+\end{itemize}
+
+\end{tcolorbox}
+
+Data science is an interdisciplinary field with a variety of
+applications and offers great potential to address challenging societal
+issues. By building data science skills, you can empower yourself to
+participate in and drive conversations that shape your life and society
+as a whole, whether that be fighting against climate change, launching
+diversity initiatives, or more.
+
+The field of data science is rapidly evolving; many of the key technical
+underpinnings in modern-day data science have been popularized during
+the early 21\textsuperscript{st} century, and you will learn them
+throughout the course. It has a wide range of applications from science
+and medicine to sports.
+
+While data science has immense potential to address challenging problems
+facing society by enhancing our critical thinking, it can also be used
+to obscure complex decisions and reinforce historical trends and biases.
+This course will implore you to consider the ethics of data science
+within its applications.
+
+Data science is fundamentally human-centered and facilitates
+decision-making by quantitatively balancing tradeoffs. To quantify
+things reliably, we must use and analyze data appropriately, apply
+critical thinking and skepticism at every step of the way, and consider
+how our decisions affect others.
+
+Ultimately, data science is the application of data-centric,
+computational, and inferential thinking to:
+
+\begin{itemize}
+\tightlist
+\item
+  Understand the world (science).
+\item
+  Solve problems (engineering).
+\end{itemize}
+
+A true mastery of data science requires a deep theoretical understanding
+and strong grasp of domain expertise. This course will help you build on
+the former -- specifically, the foundation of your technical knowledge,
+allowing you to take data and produce useful insights on the world's
+most challenging and ambiguous problems.
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Course Goals}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+\begin{itemize}
+\tightlist
+\item
+  Prepare you for advanced Berkeley courses in \textbf{data management,
+  machine learning, and statistics.}
+\item
+  Enable you to launch a career as a data scientist by providing
+  experience working with \textbf{real-world data, tools, and
+  techniques}.
+\item
+  Empower you to apply computational and inferential thinking to address
+  \textbf{real-world problems}.
+\end{itemize} + +\end{tcolorbox} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Some Topics We'll Cover}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + \texttt{pandas} and \texttt{NumPy} +\item + Exploratory Data Analysis +\item + Regular Expressions +\item + Visualization +\item + Sampling +\item + Model Design and Loss Formulation +\item + Linear Regression +\item + Gradient Descent +\item + Logistic Regression +\item + Clustering +\item + PCA +\end{itemize} + +\end{tcolorbox} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Prerequisites}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +To ensure that you can get the most out of the course content, please +make sure that you are familiar with: + +\begin{itemize} +\tightlist +\item + Using Python. +\item + Using Jupyter notebooks. +\item + Inference from Data 8. +\item + Linear algebra +\end{itemize} + +\end{tcolorbox} + +To set you up for success, we've organized concepts in Data 100 around +the \textbf{data science lifecycle}: an \emph{iterative} process that +encompasses the various statistical and computational building blocks of +data science. + +\section{Data Science Lifecycle}\label{data-science-lifecycle} + +The data science lifecycle is a \emph{high-level overview} of the data +science workflow. It's a cycle of stages that a data scientist should +explore as they conduct a thorough analysis of a data-driven problem. + +There are many variations of the key ideas present in the data science +lifecycle. In Data 100, we visualize the stages of the lifecycle using a +flow diagram. Notice how there are two entry points. + +\subsection{Ask a Question}\label{ask-a-question} + +Whether by curiosity or necessity, data scientists constantly ask +questions. For example, in the business world, data scientists may be +interested in predicting the profit generated by a certain investment. +In the field of medicine, they may ask whether some patients are more +likely than others to benefit from a treatment. + +Posing questions is one of the primary ways the data science lifecycle +begins. It helps to fully define the question. Here are some things you +should ask yourself before framing a question. + +\begin{itemize} +\tightlist +\item + What do we want to know? + + \begin{itemize} + \tightlist + \item + A question that is too ambiguous may lead to confusion. + \end{itemize} +\item + What problems are we trying to solve? + + \begin{itemize} + \tightlist + \item + The goal of asking a question should be clear in order to justify + your efforts to stakeholders. + \end{itemize} +\item + What are the hypotheses we want to test? + + \begin{itemize} + \tightlist + \item + This gives a clear perspective from which to analyze final results. + \end{itemize} +\item + What are the metrics for our success? 
+ + \begin{itemize} + \tightlist + \item + This establishes a clear point to know when to conclude the project. + \end{itemize} +\end{itemize} + +\subsection{Obtain Data}\label{obtain-data} + +The second entry point to the lifecycle is by obtaining data. A careful +analysis of any problem requires the use of data. Data may be readily +available to us, or we may have to embark on a process to collect it. +When doing so, it is crucial to ask the following: + +\begin{itemize} +\tightlist +\item + What data do we have, and what data do we need? + + \begin{itemize} + \tightlist + \item + Define the units of the data (people, cities, points in time, etc.) + and what features to measure. + \end{itemize} +\item + How will we sample more data? + + \begin{itemize} + \tightlist + \item + Scrape the web, collect manually, run experiments, etc. + \end{itemize} +\item + Is our data representative of the population we want to study? + + \begin{itemize} + \tightlist + \item + If our data is not representative of our population of interest, + then we can come to incorrect conclusions. + \end{itemize} +\end{itemize} + +Key procedures: \emph{data acquisition}, \emph{data cleaning} + +\subsection{Understand the Data}\label{understand-the-data} + +Raw data itself is not inherently useful. It's impossible to discern all +the patterns and relationships between variables without carefully +investigating them. Therefore, translating pure data into actionable +insights is a key job of a data scientist. For example, we may choose to +ask: + +\begin{itemize} +\tightlist +\item + How is our data organized, and what does it contain? + + \begin{itemize} + \tightlist + \item + Knowing what the data says about the world helps us better + understand the world. + \end{itemize} +\item + Do we have relevant data? + + \begin{itemize} + \tightlist + \item + If the data we have collected is not useful to the question at hand, + then we must collect more data. + \end{itemize} +\item + What are the biases, anomalies, or other issues with the data? + + \begin{itemize} + \tightlist + \item + These can lead to many false conclusions if ignored, so data + scientists must always be aware of these issues. + \end{itemize} +\item + How do we transform the data to enable effective analysis? + + \begin{itemize} + \tightlist + \item + Data is not always easy to interpret at first glance, so a data + scientist should strive to reveal the hidden insights. + \end{itemize} +\end{itemize} + +Key procedures: \emph{exploratory data analysis}, \emph{data +visualization}. + +\subsection{Understand the World}\label{understand-the-world} + +After observing the patterns in our data, we can begin answering our +questions. This may require that we predict a quantity (machine +learning) or measure the effect of some treatment (inference). + +From here, we may choose to report our results, or possibly conduct more +analysis. We may not be satisfied with our findings, or our initial +exploration may have brought up new questions that require new data. + +\begin{itemize} +\tightlist +\item + What does the data say about the world? + + \begin{itemize} + \tightlist + \item + Given our models, the data will lead us to certain conclusions about + the real world.\\ + \end{itemize} +\item + Does it answer our questions or accurately solve the problem? 
+ + \begin{itemize} + \tightlist + \item + If our model and data can not accomplish our goals, then we must + reform our question, model, or both.\\ + \end{itemize} +\item + How robust are our conclusions and can we trust the predictions? + + \begin{itemize} + \tightlist + \item + Inaccurate models can lead to false conclusions. + \end{itemize} +\end{itemize} + +Key procedures: \emph{model creation}, \emph{prediction}, +\emph{inference}. + +\section{Conclusion}\label{conclusion} + +The data science lifecycle is meant to be a set of general guidelines +rather than a hard set of requirements. In our journey exploring the +lifecycle, we'll cover both the underlying theory and technologies used +in data science. By the end of the course, we hope that you start to see +yourself as a data scientist. + +With that, we'll begin by introducing one of the most important tools in +exploratory data analysis: \texttt{pandas}. + +\bookmarksetup{startatroot} + +\chapter{Pandas I}\label{pandas-i} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Build familiarity with \texttt{pandas} and \texttt{pandas} syntax. +\item + Learn key data structures: \texttt{DataFrame}, \texttt{Series}, and + \texttt{Index}. +\item + Understand methods for extracting data: \texttt{.loc}, \texttt{.iloc}, + and \texttt{{[}{]}}. +\end{itemize} + +\end{tcolorbox} + +In this sequence of lectures, we will dive right into things by having +you explore and manipulate real-world data. We'll first introduce +\texttt{pandas}, a popular Python library for interacting with +\textbf{tabular data}. + +\section{Tabular Data}\label{tabular-data} + +Data scientists work with data stored in a variety of formats. This +class focuses primarily on \emph{tabular data} --- data that is stored +in a table. + +Tabular data is one of the most common systems that data scientists use +to organize data. This is in large part due to the simplicity and +flexibility of tables. Tables allow us to represent each +\textbf{observation}, or instance of collecting data from an individual, +as its own \emph{row}. We can record each observation's distinct +characteristics, or \textbf{features}, in separate \emph{columns}. + +To see this in action, we'll explore the \texttt{elections} dataset, +which stores information about political candidates who ran for +president of the United States in previous years. + +In the \texttt{elections} dataset, each row (blue box) represents one +instance of a candidate running for president in a particular year. For +example, the first row represents Andrew Jackson running for president +in the year 1824. Each column (yellow box) represents one characteristic +piece of information about each presidential candidate. For example, the +column named ``Result'' stores whether or not the candidate won the +election. + +Your work in Data 8 helped you grow very familiar with using and +interpreting data stored in a tabular format. Back then, you used the +\texttt{Table} class of the \texttt{datascience} library, a special +programming library created specifically for Data 8 students. 
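+
+For a quick point of comparison, here is a minimal sketch of how the
+same CSV file could be read with the Data 8 \texttt{datascience} library
+and with \texttt{pandas}. This sketch is only meant to bridge the two
+courses: the \texttt{Table.read\_table} call and the
+\texttt{elections\_table} name below are purely illustrative and will
+not be used again, while \texttt{pd.read\_csv} is introduced properly
+later in this chapter.
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\CommentTok{\# Data 8 (for comparison only): read the elections CSV into a Table}
+\ImportTok{from}\NormalTok{ datascience }\ImportTok{import}\NormalTok{ Table}
+\NormalTok{elections\_table }\OperatorTok{=}\NormalTok{ Table.read\_table(}\StringTok{"data/elections.csv"}\NormalTok{)}
+
+\CommentTok{\# Data 100: read the same CSV into a pandas DataFrame}
+\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd}
+\NormalTok{elections }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/elections.csv"}\NormalTok{)}
+\end{Highlighting}
+\end{Shaded}
+
+Both calls read \texttt{data/elections.csv} into a tabular object; from
+this point on, the course works exclusively with the \texttt{pandas}
+version.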
+ +In Data 100, we will be working with the programming library +\texttt{pandas}, which is generally accepted in the data science +community as the industry- and academia-standard tool for manipulating +tabular data (as well as the inspiration for Petey, our panda bear +mascot). + +Using \texttt{pandas}, we can + +\begin{itemize} +\tightlist +\item + Arrange data in a tabular format. +\item + Extract useful information filtered by specific conditions. +\item + Operate on data to gain new insights. +\item + Apply \texttt{NumPy} functions to our data (our friends from Data 8). +\item + Perform vectorized computations to speed up our analysis (Lab 1). +\end{itemize} + +\section{\texorpdfstring{\texttt{Series}, \texttt{DataFrame}s, and +Indices}{Series, DataFrames, and Indices}}\label{series-dataframes-and-indices} + +To begin our work in \texttt{pandas}, we must first import the library +into our Python environment. This will allow us to use \texttt{pandas} +data structures and methods in our code. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# \textasciigrave{}pd\textasciigrave{} is the conventional alias for Pandas, as \textasciigrave{}np\textasciigrave{} is for NumPy} +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\end{Highlighting} +\end{Shaded} + +There are three fundamental data structures in \texttt{pandas}: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \textbf{\texttt{Series}}: 1D labeled array data; best thought of as + columnar data. +\item + \textbf{\texttt{DataFrame}}: 2D tabular data with rows and columns. +\item + \textbf{\texttt{Index}}: A sequence of row/column labels. +\end{enumerate} + +\texttt{DataFrame}s, \texttt{Series}, and Indices can be represented +visually in the following diagram, which considers the first few rows of +the \texttt{elections} dataset. + +Notice how the \textbf{DataFrame} is a two-dimensional object --- it +contains both rows and columns. The \textbf{Series} above is a singular +column of this \texttt{DataFrame}, namely the \texttt{Result} column. +Both contain an \textbf{Index}, or a shared list of row labels (the +integers from 0 to 4, inclusive). + +\subsection{Series}\label{series} + +A \texttt{Series} represents a column of a \texttt{DataFrame}; more +generally, it can be any 1-dimensional array-like object. It contains +both: + +\begin{itemize} +\tightlist +\item + A sequence of \textbf{values} of the same type. +\item + A sequence of data labels called the \textbf{index}. +\end{itemize} + +In the cell below, we create a \texttt{Series} named \texttt{s}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{s }\OperatorTok{=}\NormalTok{ pd.Series([}\StringTok{"welcome"}\NormalTok{, }\StringTok{"to"}\NormalTok{, }\StringTok{"data 100"}\NormalTok{])} +\NormalTok{s} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 welcome +1 to +2 data 100 +dtype: object +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] + \CommentTok{\# Accessing data values within the Series} +\NormalTok{ s.values} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +array(['welcome', 'to', 'data 100'], dtype=object) +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] + \CommentTok{\# Accessing the Index of the Series} +\NormalTok{ s.index} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +RangeIndex(start=0, stop=3, step=1) +\end{verbatim} + +By default, the \texttt{index} of a \texttt{Series} is a sequential list +of integers beginning from 0. 
Optionally, a manually specified list of +desired indices can be passed to the \texttt{index} argument. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{s }\OperatorTok{=}\NormalTok{ pd.Series([}\OperatorTok{{-}}\DecValTok{1}\NormalTok{, }\DecValTok{10}\NormalTok{, }\DecValTok{2}\NormalTok{], index }\OperatorTok{=}\NormalTok{ [}\StringTok{"a"}\NormalTok{, }\StringTok{"b"}\NormalTok{, }\StringTok{"c"}\NormalTok{])} +\NormalTok{s} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +a -1 +b 10 +c 2 +dtype: int64 +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{s.index} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Index(['a', 'b', 'c'], dtype='object') +\end{verbatim} + +Indices can also be changed after initialization. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{s.index }\OperatorTok{=}\NormalTok{ [}\StringTok{"first"}\NormalTok{, }\StringTok{"second"}\NormalTok{, }\StringTok{"third"}\NormalTok{]} +\NormalTok{s} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +first -1 +second 10 +third 2 +dtype: int64 +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{s.index} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Index(['first', 'second', 'third'], dtype='object') +\end{verbatim} + +\subsubsection{\texorpdfstring{Selection in +\texttt{Series}}{Selection in Series}}\label{selection-in-series} + +Much like when working with \texttt{NumPy} arrays, we can select a +single value or a set of values from a \texttt{Series}. To do so, there +are three primary methods: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + A single label. +\item + A list of labels. +\item + A filtering condition. +\end{enumerate} + +To demonstrate this, let's define the Series \texttt{ser}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{ser }\OperatorTok{=}\NormalTok{ pd.Series([}\DecValTok{4}\NormalTok{, }\OperatorTok{{-}}\DecValTok{2}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{6}\NormalTok{], index }\OperatorTok{=}\NormalTok{ [}\StringTok{"a"}\NormalTok{, }\StringTok{"b"}\NormalTok{, }\StringTok{"c"}\NormalTok{, }\StringTok{"d"}\NormalTok{])} +\NormalTok{ser} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +a 4 +b -2 +c 0 +d 6 +dtype: int64 +\end{verbatim} + +\paragraph{A Single Label}\label{a-single-label} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# We return the value stored at the index label "a"} +\NormalTok{ser[}\StringTok{"a"}\NormalTok{] } +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.int64(4) +\end{verbatim} + +\paragraph{A List of Labels}\label{a-list-of-labels} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# We return a Series of the values stored at the index labels "a" and "c"} +\NormalTok{ser[[}\StringTok{"a"}\NormalTok{, }\StringTok{"c"}\NormalTok{]] } +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +a 4 +c 0 +dtype: int64 +\end{verbatim} + +\paragraph{A Filtering Condition}\label{a-filtering-condition} + +Perhaps the most interesting (and useful) method of selecting data from +a \texttt{Series} is by using a filtering condition. + +First, we apply a boolean operation to the \texttt{Series}. This creates +\textbf{a new \texttt{Series} of boolean values}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Filter condition: select all elements greater than 0} +\NormalTok{ser }\OperatorTok{\textgreater{}} \DecValTok{0} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +a True +b False +c False +d True +dtype: bool +\end{verbatim} + +We then use this boolean condition to index into our original +\texttt{Series}. \texttt{pandas} will select only the entries in the +original \texttt{Series} that satisfy the condition. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{ser[ser }\OperatorTok{\textgreater{}} \DecValTok{0}\NormalTok{] } +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +a 4 +d 6 +dtype: int64 +\end{verbatim} + +\subsection{\texorpdfstring{\texttt{DataFrames}}{DataFrames}}\label{dataframes} + +Typically, we will work with \texttt{Series} using the perspective that +they are columns in a \texttt{DataFrame}. We can think of a +\textbf{\texttt{DataFrame}} as a collection of \textbf{\texttt{Series}} +that all share the same \textbf{\texttt{Index}}. + +In Data 8, you encountered the \texttt{Table} class of the +\texttt{datascience} library, which represented tabular data. In Data +100, we'll be using the \texttt{DataFrame} class of the \texttt{pandas} +library. + +\subsubsection{\texorpdfstring{Creating a +\texttt{DataFrame}}{Creating a DataFrame}}\label{creating-a-dataframe} + +There are many ways to create a \texttt{DataFrame}. Here, we will cover +the most popular approaches: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + From a CSV file. +\item + Using a list and column name(s). +\item + From a dictionary. +\item + From a \texttt{Series}. +\end{enumerate} + +More generally, the syntax for creating a \texttt{DataFrame} is: + +\begin{verbatim} + pandas.DataFrame(data, index, columns) +\end{verbatim} + +\paragraph{From a CSV file}\label{from-a-csv-file} + +In Data 100, our data are typically stored in a CSV (comma-separated +values) file format. We can import a CSV file into a \texttt{DataFrame} +by passing the data path as an argument to the following \texttt{pandas} +function.  \texttt{pd.read\_csv("filename.csv")} + +With our new understanding of \texttt{pandas} in hand, let's return to +the \texttt{elections} dataset from before. Now, we can recognize that +it is represented as a \texttt{pandas} \texttt{DataFrame}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/elections.csv"}\NormalTok{)} +\NormalTok{elections} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.574789 \\ +... & ... & ... & ... & ... & ... & ... 
\\ +177 & 2016 & Jill Stein & Green & 1457226 & loss & 1.073699 \\ +178 & 2020 & Joseph Biden & Democratic & 81268924 & win & 51.311515 \\ +179 & 2020 & Donald Trump & Republican & 74216154 & loss & 46.858542 \\ +180 & 2020 & Jo Jorgensen & Libertarian & 1865724 & loss & 1.177979 \\ +181 & 2020 & Howard Hawkins & Green & 405035 & loss & 0.255731 \\ +\end{longtable} + +This code stores our \texttt{DataFrame} object in the \texttt{elections} +variable. Upon inspection, our \texttt{elections} \texttt{DataFrame} has +182 rows and 6 columns (\texttt{Year}, \texttt{Candidate}, +\texttt{Party}, \texttt{Popular\ Vote}, \texttt{Result}, \texttt{\%}). +Each row represents a single record --- in our example, a presidential +candidate from some particular year. Each column represents a single +attribute or feature of the record. + +\paragraph{Using a List and Column +Name(s)}\label{using-a-list-and-column-names} + +We'll now explore creating a \texttt{DataFrame} with data of our own. + +Consider the following examples. The first code cell creates a +\texttt{DataFrame} with a single column \texttt{Numbers}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df\_list }\OperatorTok{=}\NormalTok{ pd.DataFrame([}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{], columns}\OperatorTok{=}\NormalTok{[}\StringTok{"Numbers"}\NormalTok{])} +\NormalTok{df\_list} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Numbers \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1 \\ +1 & 2 \\ +2 & 3 \\ +\end{longtable} + +The second creates a \texttt{DataFrame} with the columns +\texttt{Numbers} and \texttt{Description}. Notice how a 2D list of +values is required to initialize the second \texttt{DataFrame} --- each +nested list represents a single row of data. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df\_list }\OperatorTok{=}\NormalTok{ pd.DataFrame([[}\DecValTok{1}\NormalTok{, }\StringTok{"one"}\NormalTok{], [}\DecValTok{2}\NormalTok{, }\StringTok{"two"}\NormalTok{]], columns }\OperatorTok{=}\NormalTok{ [}\StringTok{"Number"}\NormalTok{, }\StringTok{"Description"}\NormalTok{])} +\NormalTok{df\_list} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Number & Description \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1 & one \\ +1 & 2 & two \\ +\end{longtable} + +\paragraph{From a Dictionary}\label{from-a-dictionary} + +A third (and more common) way to create a \texttt{DataFrame} is with a +dictionary. The dictionary keys represent the column names, and the +dictionary values represent the column values. + +Below are two ways of implementing this approach. The first is based on +specifying the columns of the \texttt{DataFrame}, whereas the second is +based on specifying the rows of the \texttt{DataFrame}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df\_dict }\OperatorTok{=}\NormalTok{ pd.DataFrame(\{} + \StringTok{"Fruit"}\NormalTok{: [}\StringTok{"Strawberry"}\NormalTok{, }\StringTok{"Orange"}\NormalTok{], } + \StringTok{"Price"}\NormalTok{: [}\FloatTok{5.49}\NormalTok{, }\FloatTok{3.99}\NormalTok{]} +\NormalTok{\})} +\NormalTok{df\_dict} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Fruit & Price \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Strawberry & 5.49 \\ +1 & Orange & 3.99 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df\_dict }\OperatorTok{=}\NormalTok{ pd.DataFrame(} +\NormalTok{ [} +\NormalTok{ \{}\StringTok{"Fruit"}\NormalTok{:}\StringTok{"Strawberry"}\NormalTok{, }\StringTok{"Price"}\NormalTok{:}\FloatTok{5.49}\NormalTok{\}, } +\NormalTok{ \{}\StringTok{"Fruit"}\NormalTok{: }\StringTok{"Orange"}\NormalTok{, }\StringTok{"Price"}\NormalTok{:}\FloatTok{3.99}\NormalTok{\}} +\NormalTok{ ]} +\NormalTok{)} +\NormalTok{df\_dict} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Fruit & Price \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Strawberry & 5.49 \\ +1 & Orange & 3.99 \\ +\end{longtable} + +\paragraph{\texorpdfstring{From a +\texttt{Series}}{From a Series}}\label{from-a-series} + +Earlier, we explained how a \texttt{Series} was synonymous to a column +in a \texttt{DataFrame}. It follows, then, that a \texttt{DataFrame} is +equivalent to a collection of \texttt{Series}, which all share the same +\texttt{Index}. + +In fact, we can initialize a \texttt{DataFrame} by merging two or more +\texttt{Series}. Consider the \texttt{Series} \texttt{s\_a} and +\texttt{s\_b}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Notice how our indices, or row labels, are the same} + +\NormalTok{s\_a }\OperatorTok{=}\NormalTok{ pd.Series([}\StringTok{"a1"}\NormalTok{, }\StringTok{"a2"}\NormalTok{, }\StringTok{"a3"}\NormalTok{], index }\OperatorTok{=}\NormalTok{ [}\StringTok{"r1"}\NormalTok{, }\StringTok{"r2"}\NormalTok{, }\StringTok{"r3"}\NormalTok{])} +\NormalTok{s\_b }\OperatorTok{=}\NormalTok{ pd.Series([}\StringTok{"b1"}\NormalTok{, }\StringTok{"b2"}\NormalTok{, }\StringTok{"b3"}\NormalTok{], index }\OperatorTok{=}\NormalTok{ [}\StringTok{"r1"}\NormalTok{, }\StringTok{"r2"}\NormalTok{, }\StringTok{"r3"}\NormalTok{])} +\end{Highlighting} +\end{Shaded} + +We can turn individual \texttt{Series} into a \texttt{DataFrame} using +two common methods (shown below): + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.DataFrame(s\_a)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& 0 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +r1 & a1 \\ +r2 & a2 \\ +r3 & a3 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{s\_b.to\_frame()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& 0 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +r1 & b1 \\ +r2 & b2 \\ +r3 & b3 \\ +\end{longtable} + +To merge the two \texttt{Series} and specify their column names, we use +the following syntax: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.DataFrame(\{} + \StringTok{"A{-}column"}\NormalTok{: s\_a, } + \StringTok{"B{-}column"}\NormalTok{: s\_b} +\NormalTok{\})} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& A-column & B-column \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +r1 & a1 & b1 \\ +r2 & a2 & b2 \\ +r3 & a3 & b3 \\ +\end{longtable} + +\subsection{Indices}\label{indices} + +On a more technical note, an index doesn't have to be an integer, nor +does it have to be unique. For example, we can set the index of the +\texttt{elections} \texttt{DataFrame} to be the name of presidential +candidates. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Creating a DataFrame from a CSV file and specifying the index column} +\NormalTok{elections }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/elections.csv"}\NormalTok{, index\_col }\OperatorTok{=} \StringTok{"Candidate"}\NormalTok{)} +\NormalTok{elections} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& Year & Party & Popular vote & Result & \% \\ +Candidate & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Andrew Jackson & 1824 & Democratic-Republican & 151271 & loss & +57.210122 \\ +John Quincy Adams & 1824 & Democratic-Republican & 113142 & win & +42.789878 \\ +Andrew Jackson & 1828 & Democratic & 642806 & win & 56.203927 \\ +John Quincy Adams & 1828 & National Republican & 500897 & loss & +43.796073 \\ +Andrew Jackson & 1832 & Democratic & 702735 & win & 54.574789 \\ +... & ... & ... & ... & ... & ... 
\\
Jill Stein & 2016 & Green & 1457226 & loss & 1.073699 \\
Joseph Biden & 2020 & Democratic & 81268924 & win & 51.311515 \\
Donald Trump & 2020 & Republican & 74216154 & loss & 46.858542 \\
Jo Jorgensen & 2020 & Libertarian & 1865724 & loss & 1.177979 \\
Howard Hawkins & 2020 & Green & 405035 & loss & 0.255731 \\
\end{longtable}

We can also select a different column and set it as the index of the
\texttt{DataFrame}. For example, we can set the index of the
\texttt{elections} \texttt{DataFrame} to represent the candidate's
party.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{elections.reset\_index(inplace }\OperatorTok{=} \VariableTok{True}\NormalTok{) }\CommentTok{\# Resetting the index so we can set it again}
\CommentTok{\# This sets the index to the "Party" column}
\NormalTok{elections.set\_index(}\StringTok{"Party"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& Candidate & Year & Popular vote & Result & \% \\
Party & & & & & \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
Democratic-Republican & Andrew Jackson & 1824 & 151271 & loss &
57.210122 \\
Democratic-Republican & John Quincy Adams & 1824 & 113142 & win &
42.789878 \\
Democratic & Andrew Jackson & 1828 & 642806 & win & 56.203927 \\
National Republican & John Quincy Adams & 1828 & 500897 & loss &
43.796073 \\
Democratic & Andrew Jackson & 1832 & 702735 & win & 54.574789 \\
... & ... & ... & ... & ... & ... \\
Green & Jill Stein & 2016 & 1457226 & loss & 1.073699 \\
Democratic & Joseph Biden & 2020 & 81268924 & win & 51.311515 \\
Republican & Donald Trump & 2020 & 74216154 & loss & 46.858542 \\
Libertarian & Jo Jorgensen & 2020 & 1865724 & loss & 1.177979 \\
Green & Howard Hawkins & 2020 & 405035 & loss & 0.255731 \\
\end{longtable}

And, if we'd like, we can revert the index back to the default list of
integers.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# This resets the index to be the default list of integers}
\NormalTok{elections.reset\_index(inplace}\OperatorTok{=}\VariableTok{True}\NormalTok{) }
\NormalTok{elections.index}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
RangeIndex(start=0, stop=182, step=1)
\end{verbatim}

It is also important to note that the row labels that constitute an
index don't have to be unique. While index values can be unique and
numeric, acting as a row number, they can also be named and non-unique,
as we saw above when we used candidate names and then parties as the
index of the \texttt{elections} \texttt{DataFrame}.

\section{\texorpdfstring{\texttt{DataFrame} Attributes: Index, Columns,
and
Shape}{DataFrame Attributes: Index, Columns, and Shape}}\label{dataframe-attributes-index-columns-and-shape}

On the other hand, column names in a \texttt{DataFrame} are almost
always unique. Looking back to the \texttt{elections} dataset, it
wouldn't make sense to have two columns named \texttt{"Candidate"}.
Sometimes, you'll want to extract these different values, in particular,
the list of row and column labels.
+ +For index/row labels, use \texttt{DataFrame.index}: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.set\_index(}\StringTok{"Party"}\NormalTok{, inplace }\OperatorTok{=} \VariableTok{True}\NormalTok{)} +\NormalTok{elections.index} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Index(['Democratic-Republican', 'Democratic-Republican', 'Democratic', + 'National Republican', 'Democratic', 'National Republican', + 'Anti-Masonic', 'Whig', 'Democratic', 'Whig', + ... + 'Constitution', 'Republican', 'Independent', 'Libertarian', + 'Democratic', 'Green', 'Democratic', 'Republican', 'Libertarian', + 'Green'], + dtype='object', name='Party', length=182) +\end{verbatim} + +For column labels, use \texttt{DataFrame.columns}: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.columns} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Index(['index', 'Candidate', 'Year', 'Popular vote', 'Result', '%'], dtype='object') +\end{verbatim} + +And for the shape of the \texttt{DataFrame}, we can use +\texttt{DataFrame.shape} to get the number of rows followed by the +number of columns: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.shape} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(182, 6) +\end{verbatim} + +\section{\texorpdfstring{Slicing in +\texttt{DataFrame}s}{Slicing in DataFrames}}\label{slicing-in-dataframes} + +Now that we've learned more about \texttt{DataFrame}s, let's dive deeper +into their capabilities. + +The API (Application Programming Interface) for the \texttt{DataFrame} +class is enormous. In this section, we'll discuss several methods of the +\texttt{DataFrame} API that allow us to extract subsets of data. + +The simplest way to manipulate a \texttt{DataFrame} is to extract a +subset of rows and columns, known as \textbf{slicing}. + +Common ways we may want to extract data are grabbing: + +\begin{itemize} +\tightlist +\item + The first or last \texttt{n} rows in the \texttt{DataFrame}. +\item + Data with a certain label. +\item + Data at a certain position. +\end{itemize} + +We will do so with four primary methods of the \texttt{DataFrame} class: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \texttt{.head} and \texttt{.tail} +\item + \texttt{.loc} +\item + \texttt{.iloc} +\item + \texttt{{[}{]}} +\end{enumerate} + +\subsection{\texorpdfstring{Extracting data with \texttt{.head} and +\texttt{.tail}}{Extracting data with .head and .tail}}\label{extracting-data-with-.head-and-.tail} + +The simplest scenario in which we want to extract data is when we simply +want to select the first or last few rows of the \texttt{DataFrame}. + +To extract the first \texttt{n} rows of a \texttt{DataFrame} +\texttt{df}, we use the syntax \texttt{df.head(n)}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/elections.csv"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Extract the first 5 rows of the DataFrame} +\NormalTok{elections.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.574789 \\ +\end{longtable} + +Similarly, calling \texttt{df.tail(n)} allows us to extract the last +\texttt{n} rows of the \texttt{DataFrame}. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Extract the last 5 rows of the DataFrame} +\NormalTok{elections.tail(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +177 & 2016 & Jill Stein & Green & 1457226 & loss & 1.073699 \\ +178 & 2020 & Joseph Biden & Democratic & 81268924 & win & 51.311515 \\ +179 & 2020 & Donald Trump & Republican & 74216154 & loss & 46.858542 \\ +180 & 2020 & Jo Jorgensen & Libertarian & 1865724 & loss & 1.177979 \\ +181 & 2020 & Howard Hawkins & Green & 405035 & loss & 0.255731 \\ +\end{longtable} + +\subsection{\texorpdfstring{Label-based Extraction: Indexing with +\texttt{.loc}}{Label-based Extraction: Indexing with .loc}}\label{label-based-extraction-indexing-with-.loc} + +For the more complex task of extracting data with specific column or +index labels, we can use \texttt{.loc}. The \texttt{.loc} accessor +allows us to specify the \textbf{\emph{labels}} of rows and columns we +wish to extract. The \textbf{labels} (commonly referred to as the +\textbf{indices}) are the bold text on the far \emph{left} of a +\texttt{DataFrame}, while the \textbf{column labels} are the column +names found at the \emph{top} of a \texttt{DataFrame}. + +To grab data with \texttt{.loc}, we must specify the row and column +label(s) where the data exists. The row labels are the first argument to +the \texttt{.loc} function; the column labels are the second. + +Arguments to \texttt{.loc} can be: + +\begin{itemize} +\tightlist +\item + A single value. +\item + A slice. +\item + A list. +\end{itemize} + +For example, to select a single value, we can select the row labeled +\texttt{0} and the column labeled \texttt{Candidate} from the +\texttt{elections} \texttt{DataFrame}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.loc[}\DecValTok{0}\NormalTok{, }\StringTok{\textquotesingle{}Candidate\textquotesingle{}}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'Andrew Jackson' +\end{verbatim} + +Keep in mind that passing in just one argument as a single value will +produce a \texttt{Series}. Below, we've extracted a subset of the +\texttt{"Popular\ vote"} column as a \texttt{Series}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.loc[[}\DecValTok{87}\NormalTok{, }\DecValTok{25}\NormalTok{, }\DecValTok{179}\NormalTok{], }\StringTok{"Popular vote"}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +87 15761254 +25 848019 +179 74216154 +Name: Popular vote, dtype: int64 +\end{verbatim} + +To select \emph{multiple} rows and columns, we can use Python slice +notation. Here, we select the rows from labels \texttt{0} to \texttt{3} +and the columns from labels \texttt{"Year"} to \texttt{"Popular\ vote"}. +Notice that unlike Python slicing, \texttt{.loc} is \emph{inclusive} of +the right upper bound. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.loc[}\DecValTok{0}\NormalTok{:}\DecValTok{3}\NormalTok{, }\StringTok{\textquotesingle{}Year\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}Popular vote\textquotesingle{}}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 \\ +\end{longtable} + +Suppose that instead, we want to extract \emph{all} column values for +the first four rows in the \texttt{elections} \texttt{DataFrame}. The +shorthand \texttt{:} is useful for this. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.loc[}\DecValTok{0}\NormalTok{:}\DecValTok{3}\NormalTok{, :]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 \\ +\end{longtable} + +We can use the same shorthand to extract all rows. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.loc[:, [}\StringTok{"Year"}\NormalTok{, }\StringTok{"Candidate"}\NormalTok{, }\StringTok{"Result"}\NormalTok{]]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& Year & Candidate & Result \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & loss \\ +1 & 1824 & John Quincy Adams & win \\ +2 & 1828 & Andrew Jackson & win \\ +3 & 1828 & John Quincy Adams & loss \\ +4 & 1832 & Andrew Jackson & win \\ +... & ... & ... & ... \\ +177 & 2016 & Jill Stein & loss \\ +178 & 2020 & Joseph Biden & win \\ +179 & 2020 & Donald Trump & loss \\ +180 & 2020 & Jo Jorgensen & loss \\ +181 & 2020 & Howard Hawkins & loss \\ +\end{longtable} + +There are a couple of things we should note. Firstly, unlike +conventional Python, \texttt{pandas} allows us to slice string values +(in our example, the column labels). Secondly, slicing with +\texttt{.loc} is \emph{inclusive}. Notice how our resulting +\texttt{DataFrame} includes every row and column between and including +the slice labels we specified. 
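
To make the contrast with standard Python slicing concrete, here is a
small illustrative sketch of ours (not from the lecture) that reuses the
\texttt{elections} \texttt{DataFrame} loaded above.

\begin{verbatim}
lst = [10, 20, 30, 40]

# Standard Python slicing excludes the right endpoint
print(lst[0:3])                    # [10, 20, 30]

# .loc slicing includes both endpoint labels
print(elections.loc[0:3, "Year"])  # rows labeled 0, 1, 2, AND 3
\end{verbatim}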

Equivalently, we can use a list to obtain multiple rows and columns in
our \texttt{elections} \texttt{DataFrame}.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{elections.loc[[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{], [}\StringTok{\textquotesingle{}Year\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Candidate\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Party\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Popular vote\textquotesingle{}}\NormalTok{]]}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllll@{}}
\toprule\noalign{}
& Year & Candidate & Party & Popular vote \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 \\
1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 \\
2 & 1828 & Andrew Jackson & Democratic & 642806 \\
3 & 1828 & John Quincy Adams & National Republican & 500897 \\
\end{longtable}

Lastly, we can interchange list and slicing notation.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{elections.loc[[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{], :]}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllllll@{}}
\toprule\noalign{}
& Year & Candidate & Party & Popular vote & Result & \% \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss &
57.210122 \\
1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win &
42.789878 \\
2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\
3 & 1828 & John Quincy Adams & National Republican & 500897 & loss &
43.796073 \\
\end{longtable}

\subsection{\texorpdfstring{Integer-based Extraction: Indexing with
\texttt{.iloc}}{Integer-based Extraction: Indexing with .iloc}}\label{integer-based-extraction-indexing-with-.iloc}

Slicing with \texttt{.iloc} works similarly to \texttt{.loc}. However,
\texttt{.iloc} uses the \emph{index positions} of rows and columns
rather than the labels (think to yourself: \textbf{l}oc uses
\textbf{l}abels; \textbf{i}loc uses \textbf{i}ndices). The arguments to
the \texttt{.iloc} function also behave similarly --- single values,
lists, slices, and any combination of these are permitted.

Let's begin reproducing our results from above. We'll begin by selecting
the first presidential candidate in our \texttt{elections}
\texttt{DataFrame}:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# elections.loc[0, "Candidate"] {-} Previous approach}
\NormalTok{elections.iloc[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
'Andrew Jackson'
\end{verbatim}

Notice how the first argument to both \texttt{.loc} and \texttt{.iloc}
is the same. This is because the row with a label of \texttt{0} is
conveniently in the \(0^{\text{th}}\) position (equivalently, the first
position) of the \texttt{elections} \texttt{DataFrame}. Generally, this
is true of any \texttt{DataFrame} where the row labels are incremented
in ascending order from 0.

And, as before, if we pass in only a single value for the column
argument, our result will be a \texttt{Series}.
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.iloc[[}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{],}\DecValTok{1}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +1 John Quincy Adams +2 Andrew Jackson +3 John Quincy Adams +Name: Candidate, dtype: object +\end{verbatim} + +However, when we select the first four rows and columns using +\texttt{.iloc}, we notice something. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# elections.loc[0:3, \textquotesingle{}Year\textquotesingle{}:\textquotesingle{}Popular vote\textquotesingle{}] {-} Previous approach} +\NormalTok{elections.iloc[}\DecValTok{0}\NormalTok{:}\DecValTok{4}\NormalTok{, }\DecValTok{0}\NormalTok{:}\DecValTok{4}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 \\ +\end{longtable} + +Slicing is no longer inclusive in \texttt{.iloc} --- it's +\emph{exclusive}. In other words, the right end of a slice is not +included when using \texttt{.iloc}. This is one of the subtleties of +\texttt{pandas} syntax; you will get used to it with practice. + +List behavior works just as expected. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\#elections.loc[[0, 1, 2, 3], [\textquotesingle{}Year\textquotesingle{}, \textquotesingle{}Candidate\textquotesingle{}, \textquotesingle{}Party\textquotesingle{}, \textquotesingle{}Popular vote\textquotesingle{}]] {-} Previous Approach} +\NormalTok{elections.iloc[[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{], [}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{3}\NormalTok{]]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 \\ +\end{longtable} + +And just like with \texttt{.loc}, we can use a colon with \texttt{.iloc} +to extract all rows or columns. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.iloc[:, }\DecValTok{0}\NormalTok{:}\DecValTok{3}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& Year & Candidate & Party \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican \\ +1 & 1824 & John Quincy Adams & Democratic-Republican \\ +2 & 1828 & Andrew Jackson & Democratic \\ +3 & 1828 & John Quincy Adams & National Republican \\ +4 & 1832 & Andrew Jackson & Democratic \\ +... & ... & ... & ... \\ +177 & 2016 & Jill Stein & Green \\ +178 & 2020 & Joseph Biden & Democratic \\ +179 & 2020 & Donald Trump & Republican \\ +180 & 2020 & Jo Jorgensen & Libertarian \\ +181 & 2020 & Howard Hawkins & Green \\ +\end{longtable} + +This discussion begs the question: when should we use \texttt{.loc} +vs.~\texttt{.iloc}? 
In most cases, \texttt{.loc} is the safer choice. You can imagine
\texttt{.iloc} may return incorrect values when applied to a dataset
where the ordering of data can change. However, \texttt{.iloc} can still
be useful --- for example, if you are looking at a \texttt{DataFrame} of
sorted movie earnings and want to get the median earnings for a given
year, you can use \texttt{.iloc} to index into the middle.

Overall, it is important to remember that:

\begin{itemize}
\tightlist
\item
  \texttt{.loc} performs \textbf{l}abel-based extraction.
\item
  \texttt{.iloc} performs \textbf{i}nteger-based extraction.
\end{itemize}

\subsection{\texorpdfstring{Context-dependent Extraction: Indexing with
\texttt{{[}{]}}}{Context-dependent Extraction: Indexing with {[}{]}}}\label{context-dependent-extraction-indexing-with}

The \texttt{{[}{]}} selection operator is the most baffling of all, yet
the most commonly used. It only takes a single argument, which may be
one of the following:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  A slice of row numbers.
\item
  A list of column labels.
\item
  A single-column label.
\end{enumerate}

That is, \texttt{{[}{]}} is \emph{context-dependent}. Let's see some
examples.

\subsubsection{A slice of row numbers}\label{a-slice-of-row-numbers}

Say we wanted the first four rows of our \texttt{elections}
\texttt{DataFrame}.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{elections[}\DecValTok{0}\NormalTok{:}\DecValTok{4}\NormalTok{]}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllllll@{}}
\toprule\noalign{}
& Year & Candidate & Party & Popular vote & Result & \% \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss &
57.210122 \\
1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win &
42.789878 \\
2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\
3 & 1828 & John Quincy Adams & National Republican & 500897 & loss &
43.796073 \\
\end{longtable}

\subsubsection{A list of column labels}\label{a-list-of-column-labels}

Suppose we now want the first four columns.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{elections[[}\StringTok{"Year"}\NormalTok{, }\StringTok{"Candidate"}\NormalTok{, }\StringTok{"Party"}\NormalTok{, }\StringTok{"Popular vote"}\NormalTok{]]}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllll@{}}
\toprule\noalign{}
& Year & Candidate & Party & Popular vote \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 \\
1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 \\
2 & 1828 & Andrew Jackson & Democratic & 642806 \\
3 & 1828 & John Quincy Adams & National Republican & 500897 \\
4 & 1832 & Andrew Jackson & Democratic & 702735 \\
... & ... & ... & ... & ... \\
177 & 2016 & Jill Stein & Green & 1457226 \\
178 & 2020 & Joseph Biden & Democratic & 81268924 \\
179 & 2020 & Donald Trump & Republican & 74216154 \\
180 & 2020 & Jo Jorgensen & Libertarian & 1865724 \\
181 & 2020 & Howard Hawkins & Green & 405035 \\
\end{longtable}

\subsubsection{A single-column label}\label{a-single-column-label}

Lastly, \texttt{{[}{]}} allows us to extract only the
\texttt{"Candidate"} column.
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections[}\StringTok{"Candidate"}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 Andrew Jackson +1 John Quincy Adams +2 Andrew Jackson +3 John Quincy Adams +4 Andrew Jackson + ... +177 Jill Stein +178 Joseph Biden +179 Donald Trump +180 Jo Jorgensen +181 Howard Hawkins +Name: Candidate, Length: 182, dtype: object +\end{verbatim} + +The output is a \texttt{Series}! In this course, we'll become very +comfortable with \texttt{{[}{]}}, especially for selecting columns. In +practice, \texttt{{[}{]}} is much more common than \texttt{.loc}, +especially since it is far more concise. + +\section{Parting Note}\label{parting-note} + +The \texttt{pandas} library is enormous and contains many useful +functions. Here is a link to its +\href{https://pandas.pydata.org/docs/}{documentation}. We certainly +don't expect you to memorize each and every method of the library, and +we will give you a reference sheet for exams. + +The introductory Data 100 \texttt{pandas} lectures will provide a +high-level view of the key data structures and methods that will form +the foundation of your \texttt{pandas} knowledge. A goal of this course +is to help you build your familiarity with the real-world programming +practice of \ldots{} Googling! Answers to your questions can be found in +documentation, Stack Overflow, etc. Being able to search for, read, and +implement documentation is an important life skill for any data +scientist. + +With that, we will move on to Pandas II! + +\bookmarksetup{startatroot} + +\chapter{Pandas II}\label{pandas-ii} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Continue building familiarity with \texttt{pandas} syntax. +\item + Extract data from a \texttt{DataFrame} using conditional selection. +\item + Recognize situations where aggregation is useful and identify the + correct technique for performing an aggregation. +\end{itemize} + +\end{tcolorbox} + +Last time, we introduced the \texttt{pandas} library as a toolkit for +processing data. We learned the \texttt{DataFrame} and \texttt{Series} +data structures, familiarized ourselves with the basic syntax for +manipulating tabular data, and began writing our first lines of +\texttt{pandas} code. + +In this lecture, we'll start to dive into some advanced \texttt{pandas} +syntax. You may find it helpful to follow along with a notebook of your +own as we walk through these new pieces of code. + +We'll start by loading the \texttt{babynames} dataset. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# This code pulls census data and loads it into a DataFrame} +\CommentTok{\# We won\textquotesingle{}t cover it explicitly in this class, but you are welcome to explore it on your own} +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ urllib.request} +\ImportTok{import}\NormalTok{ os.path} +\ImportTok{import}\NormalTok{ zipfile} + +\NormalTok{data\_url }\OperatorTok{=} \StringTok{"https://www.ssa.gov/oact/babynames/state/namesbystate.zip"} +\NormalTok{local\_filename }\OperatorTok{=} \StringTok{"data/babynamesbystate.zip"} +\ControlFlowTok{if} \KeywordTok{not}\NormalTok{ os.path.exists(local\_filename): }\CommentTok{\# If the data exists don\textquotesingle{}t download again} + \ControlFlowTok{with}\NormalTok{ urllib.request.urlopen(data\_url) }\ImportTok{as}\NormalTok{ resp, }\BuiltInTok{open}\NormalTok{(local\_filename, }\StringTok{\textquotesingle{}wb\textquotesingle{}}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ f.write(resp.read())} + +\NormalTok{zf }\OperatorTok{=}\NormalTok{ zipfile.ZipFile(local\_filename, }\StringTok{\textquotesingle{}r\textquotesingle{}}\NormalTok{)} + +\NormalTok{ca\_name }\OperatorTok{=} \StringTok{\textquotesingle{}STATE.CA.TXT\textquotesingle{}} +\NormalTok{field\_names }\OperatorTok{=}\NormalTok{ [}\StringTok{\textquotesingle{}State\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Sex\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Year\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Name\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Count\textquotesingle{}}\NormalTok{]} +\ControlFlowTok{with}\NormalTok{ zf.}\BuiltInTok{open}\NormalTok{(ca\_name) }\ImportTok{as}\NormalTok{ fh:} +\NormalTok{ babynames }\OperatorTok{=}\NormalTok{ pd.read\_csv(fh, header}\OperatorTok{=}\VariableTok{None}\NormalTok{, names}\OperatorTok{=}\NormalTok{field\_names)} + +\NormalTok{babynames.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 \\ +1 & CA & F & 1910 & Helen & 239 \\ +2 & CA & F & 1910 & Dorothy & 220 \\ +3 & CA & F & 1910 & Margaret & 163 \\ +4 & CA & F & 1910 & Frances & 134 \\ +\end{longtable} + +\section{Conditional Selection}\label{conditional-selection} + +Conditional selection allows us to select a subset of rows in a +\texttt{DataFrame} that satisfy some specified condition. + +To understand how to use conditional selection, we must look at another +possible input of the \texttt{.loc} and \texttt{{[}{]}} methods -- a +boolean array, which is simply an array or \texttt{Series} where each +element is either \texttt{True} or \texttt{False}. This boolean array +must have a length equal to the number of rows in the +\texttt{DataFrame}. It will return all rows that correspond to a value +of \texttt{True} in the array. We used a very similar technique when +performing conditional extraction from a \texttt{Series} in the last +lecture. + +To see this in action, let's select all even-indexed rows in the first +10 rows of our \texttt{DataFrame}. 

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Ask yourself: why is :9 the correct slice to select the first 10 rows?}
\NormalTok{babynames\_first\_10\_rows }\OperatorTok{=}\NormalTok{ babynames.loc[:}\DecValTok{9}\NormalTok{, :]}

\CommentTok{\# Notice how we have exactly 10 elements in our boolean array argument}
\NormalTok{babynames\_first\_10\_rows[[}\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{]]}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & CA & F & 1910 & Mary & 295 \\
2 & CA & F & 1910 & Dorothy & 220 \\
4 & CA & F & 1910 & Frances & 134 \\
6 & CA & F & 1910 & Evelyn & 126 \\
8 & CA & F & 1910 & Virginia & 101 \\
\end{longtable}

We can perform a similar operation using \texttt{.loc}.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames\_first\_10\_rows.loc[[}\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{, }\VariableTok{True}\NormalTok{, }\VariableTok{False}\NormalTok{], :]}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & CA & F & 1910 & Mary & 295 \\
2 & CA & F & 1910 & Dorothy & 220 \\
4 & CA & F & 1910 & Frances & 134 \\
6 & CA & F & 1910 & Evelyn & 126 \\
8 & CA & F & 1910 & Virginia & 101 \\
\end{longtable}

These techniques worked well in this example, but you can imagine how
tedious it might be to list out \texttt{True} and \texttt{False} for
every row in a larger \texttt{DataFrame}. To make things easier, we can
instead provide a logical condition as an input to \texttt{.loc} or
\texttt{{[}{]}} that returns a boolean array with the necessary length.

For example, to return all names associated with sex \texttt{"F"}:

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# First, use a logical condition to generate a boolean array}
\NormalTok{logical\_operator }\OperatorTok{=}\NormalTok{ (babynames[}\StringTok{"Sex"}\NormalTok{] }\OperatorTok{==} \StringTok{"F"}\NormalTok{)}

\CommentTok{\# Then, use this boolean array to filter the DataFrame}
\NormalTok{babynames[logical\_operator].head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & CA & F & 1910 & Mary & 295 \\
1 & CA & F & 1910 & Helen & 239 \\
2 & CA & F & 1910 & Dorothy & 220 \\
3 & CA & F & 1910 & Margaret & 163 \\
4 & CA & F & 1910 & Frances & 134 \\
\end{longtable}

Recall from the previous lecture that \texttt{.head()} will return only
the first few rows in the \texttt{DataFrame}. In reality,
\texttt{babynames{[}logical\_operator{]}} contains as many rows as there
are entries in the original \texttt{babynames} \texttt{DataFrame} with
sex \texttt{"F"}.
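
As a quick check (our addition, not part of the original lecture notes),
we can count the \texttt{True} entries in \texttt{logical\_operator}
directly: summing a boolean \texttt{Series} treats \texttt{True} as 1
and \texttt{False} as 0, so the sum equals the number of rows that
survive the filter.

\begin{verbatim}
# The number of True values in the boolean Series...
print(logical_operator.sum())

# ...matches the number of rows in the filtered DataFrame
print(len(babynames[logical_operator]))
\end{verbatim}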

Here, \texttt{logical\_operator} evaluates to a \texttt{Series} of
boolean values with length 407428.

\begin{Shaded}
\begin{Highlighting}[]
\BuiltInTok{print}\NormalTok{(}\StringTok{"There are a total of }\SpecialCharTok{\{\}}\StringTok{ values in \textquotesingle{}logical\_operator\textquotesingle{}"}\NormalTok{.}\BuiltInTok{format}\NormalTok{(}\BuiltInTok{len}\NormalTok{(logical\_operator)))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
There are a total of 407428 values in 'logical_operator'
\end{verbatim}

Rows starting at row 0 and ending at row 239536 evaluate to
\texttt{True} and are thus returned in the \texttt{DataFrame}. Rows from
239537 onwards evaluate to \texttt{False} and are omitted from the
output.

\begin{Shaded}
\begin{Highlighting}[]
\BuiltInTok{print}\NormalTok{(}\StringTok{"The 0th item in this \textquotesingle{}logical\_operator\textquotesingle{} is: }\SpecialCharTok{\{\}}\StringTok{"}\NormalTok{.}\BuiltInTok{format}\NormalTok{(logical\_operator.iloc[}\DecValTok{0}\NormalTok{]))}
\BuiltInTok{print}\NormalTok{(}\StringTok{"The 239536th item in this \textquotesingle{}logical\_operator\textquotesingle{} is: }\SpecialCharTok{\{\}}\StringTok{"}\NormalTok{.}\BuiltInTok{format}\NormalTok{(logical\_operator.iloc[}\DecValTok{239536}\NormalTok{]))}
\BuiltInTok{print}\NormalTok{(}\StringTok{"The 239537th item in this \textquotesingle{}logical\_operator\textquotesingle{} is: }\SpecialCharTok{\{\}}\StringTok{"}\NormalTok{.}\BuiltInTok{format}\NormalTok{(logical\_operator.iloc[}\DecValTok{239537}\NormalTok{]))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
The 0th item in this 'logical_operator' is: True
The 239536th item in this 'logical_operator' is: True
The 239537th item in this 'logical_operator' is: False
\end{verbatim}

Passing a \texttt{Series} as an argument to \texttt{babynames{[}{]}} has
the same effect as using a boolean array. In fact, the \texttt{{[}{]}}
selection operator can take a boolean \texttt{Series}, array, or list as
its argument. These three are used interchangeably throughout the
course.

We can also use \texttt{.loc} to achieve similar results.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames.loc[babynames[}\StringTok{"Sex"}\NormalTok{] }\OperatorTok{==} \StringTok{"F"}\NormalTok{].head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & CA & F & 1910 & Mary & 295 \\
1 & CA & F & 1910 & Helen & 239 \\
2 & CA & F & 1910 & Dorothy & 220 \\
3 & CA & F & 1910 & Margaret & 163 \\
4 & CA & F & 1910 & Frances & 134 \\
\end{longtable}

Boolean conditions can be combined using various bitwise operators,
allowing us to filter results by multiple conditions. In the table
below, p and q are boolean arrays or \texttt{Series}.

\begin{longtable}[]{@{}lll@{}}
\toprule\noalign{}
Symbol & Usage & Meaning \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
\textasciitilde{} & \textasciitilde p & Returns negation of p \\
\textbar{} & p \textbar{} q & p OR q \\
\& & p \& q & p AND q \\
\^{} & p \^{} q & p XOR q (exclusive or) \\
\end{longtable}

When combining multiple conditions with logical operators, we surround
each individual condition with its own set of parentheses \texttt{()}.
This makes the order of operations explicit to \texttt{pandas} and helps
avoid errors.
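
To see why the parentheses matter, note that \texttt{\&} and
\texttt{\textbar{}} bind more tightly than comparison operators such as
\texttt{==} and \texttt{\textless{}}. The sketch below is our own
illustration (not from the lecture) of how Python parses a condition
without them.

\begin{verbatim}
# Without parentheses, & is applied before the comparisons, so Python
# reads the condition as
#     babynames["Sex"] == ("F" & babynames["Year"]) < 2000
# Evaluating "F" & babynames["Year"] mixes a string with integers,
# which typically raises a TypeError before any filtering happens:

# babynames[babynames["Sex"] == "F" & babynames["Year"] < 2000]
\end{verbatim}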

For example, if we want to return data on all names with sex
\texttt{"F"} born before the year 2000, we can write:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames[(babynames[}\StringTok{"Sex"}\NormalTok{] }\OperatorTok{==} \StringTok{"F"}\NormalTok{) }\OperatorTok{\&}\NormalTok{ (babynames[}\StringTok{"Year"}\NormalTok{] }\OperatorTok{\textless{}} \DecValTok{2000}\NormalTok{)].head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & CA & F & 1910 & Mary & 295 \\
1 & CA & F & 1910 & Helen & 239 \\
2 & CA & F & 1910 & Dorothy & 220 \\
3 & CA & F & 1910 & Margaret & 163 \\
4 & CA & F & 1910 & Frances & 134 \\
\end{longtable}

Note that we're working with \texttt{Series}, so using \texttt{and} in
place of \texttt{\&}, or \texttt{or} in place of \texttt{\textbar{}},
will raise an error.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# This line of code will raise a ValueError}
\CommentTok{\# babynames[(babynames["Sex"] == "F") and (babynames["Year"] \textless{} 2000)].head()}
\end{Highlighting}
\end{Shaded}

If we want to return data on all names with sex \texttt{"F"} \emph{or}
born before the year 2000, we can write:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames[(babynames[}\StringTok{"Sex"}\NormalTok{] }\OperatorTok{==} \StringTok{"F"}\NormalTok{) }\OperatorTok{|}\NormalTok{ (babynames[}\StringTok{"Year"}\NormalTok{] }\OperatorTok{\textless{}} \DecValTok{2000}\NormalTok{)].head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & CA & F & 1910 & Mary & 295 \\
1 & CA & F & 1910 & Helen & 239 \\
2 & CA & F & 1910 & Dorothy & 220 \\
3 & CA & F & 1910 & Margaret & 163 \\
4 & CA & F & 1910 & Frances & 134 \\
\end{longtable}

Boolean array selection is a useful tool, but it can lead to overly
verbose code for complex conditions. In the example below, our boolean
condition is long enough to extend for several lines of code.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Note: The parentheses surrounding the code make it possible to break the code onto multiple lines for readability}
\NormalTok{(}
\NormalTok{    babynames[(babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Bella"}\NormalTok{) }\OperatorTok{|}
\NormalTok{              (babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Alex"}\NormalTok{) }\OperatorTok{|}
\NormalTok{              (babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Ani"}\NormalTok{) }\OperatorTok{|}
\NormalTok{              (babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Lisa"}\NormalTok{)]}
\NormalTok{).head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
6289 & CA & F & 1923 & Bella & 5 \\
7512 & CA & F & 1925 & Bella & 8 \\
12368 & CA & F & 1932 & Lisa & 5 \\
14741 & CA & F & 1936 & Lisa & 8 \\
17084 & CA & F & 1939 & Lisa & 5 \\
\end{longtable}

Fortunately, \texttt{pandas} provides many alternative methods for
constructing boolean filters.

The \texttt{.isin} function is one such example.
This method evaluates +if the values in a \texttt{Series} are contained in a different sequence +(list, array, or \texttt{Series}) of values. In the cell below, we +achieve equivalent results to the \texttt{DataFrame} above with far more +concise code. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{names }\OperatorTok{=}\NormalTok{ [}\StringTok{"Bella"}\NormalTok{, }\StringTok{"Alex"}\NormalTok{, }\StringTok{"Narges"}\NormalTok{, }\StringTok{"Lisa"}\NormalTok{]} +\NormalTok{babynames[}\StringTok{"Name"}\NormalTok{].isin(names).head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 False +1 False +2 False +3 False +4 False +Name: Name, dtype: bool +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames[babynames[}\StringTok{"Name"}\NormalTok{].isin(names)].head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +6289 & CA & F & 1923 & Bella & 5 \\ +7512 & CA & F & 1925 & Bella & 8 \\ +12368 & CA & F & 1932 & Lisa & 5 \\ +14741 & CA & F & 1936 & Lisa & 8 \\ +17084 & CA & F & 1939 & Lisa & 5 \\ +\end{longtable} + +The function \texttt{str.startswith} can be used to define a filter +based on string values in a \texttt{Series} object. It checks to see if +string values in a \texttt{Series} start with a particular character. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Identify whether names begin with the letter "N"} +\NormalTok{babynames[}\StringTok{"Name"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.startswith(}\StringTok{"N"}\NormalTok{).head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 False +1 False +2 False +3 False +4 False +Name: Name, dtype: bool +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Extracting names that begin with the letter "N"} +\NormalTok{babynames[babynames[}\StringTok{"Name"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.startswith(}\StringTok{"N"}\NormalTok{)].head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +76 & CA & F & 1910 & Norma & 23 \\ +83 & CA & F & 1910 & Nellie & 20 \\ +127 & CA & F & 1910 & Nina & 11 \\ +198 & CA & F & 1910 & Nora & 6 \\ +310 & CA & F & 1911 & Nellie & 23 \\ +\end{longtable} + +\section{Adding, Removing, and Modifying +Columns}\label{adding-removing-and-modifying-columns} + +In many data science tasks, we may need to change the columns contained +in our \texttt{DataFrame} in some way. Fortunately, the syntax to do so +is fairly straightforward. + +To add a new column to a \texttt{DataFrame}, we use a syntax similar to +that used when accessing an existing column. Specify the name of the new +column by writing \texttt{df{[}"column"{]}}, then assign this to a +\texttt{Series} or array containing the values that will populate this +column. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Create a Series of the length of each name. 
} +\NormalTok{babyname\_lengths }\OperatorTok{=}\NormalTok{ babynames[}\StringTok{"Name"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.}\BuiltInTok{len}\NormalTok{()} + +\CommentTok{\# Add a column named "name\_lengths" that includes the length of each name} +\NormalTok{babynames[}\StringTok{"name\_lengths"}\NormalTok{] }\OperatorTok{=}\NormalTok{ babyname\_lengths} +\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & name\_lengths \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 & 4 \\ +1 & CA & F & 1910 & Helen & 239 & 5 \\ +2 & CA & F & 1910 & Dorothy & 220 & 7 \\ +3 & CA & F & 1910 & Margaret & 163 & 8 \\ +4 & CA & F & 1910 & Frances & 134 & 7 \\ +\end{longtable} + +If we need to later modify an existing column, we can do so by +referencing this column again with the syntax \texttt{df{[}"column"{]}}, +then re-assigning it to a new \texttt{Series} or array of the +appropriate length. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Modify the “name\_lengths” column to be one less than its original value} +\NormalTok{babynames[}\StringTok{"name\_lengths"}\NormalTok{] }\OperatorTok{=}\NormalTok{ babynames[}\StringTok{"name\_lengths"}\NormalTok{] }\OperatorTok{{-}} \DecValTok{1} +\NormalTok{babynames.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & name\_lengths \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 & 3 \\ +1 & CA & F & 1910 & Helen & 239 & 4 \\ +2 & CA & F & 1910 & Dorothy & 220 & 6 \\ +3 & CA & F & 1910 & Margaret & 163 & 7 \\ +4 & CA & F & 1910 & Frances & 134 & 6 \\ +\end{longtable} + +We can rename a column using the \texttt{.rename()} method. It takes in +a dictionary that maps old column names to their new ones. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Rename “name\_lengths” to “Length”} +\NormalTok{babynames }\OperatorTok{=}\NormalTok{ babynames.rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{"name\_lengths"}\NormalTok{:}\StringTok{"Length"}\NormalTok{\})} +\NormalTok{babynames.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & Length \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 & 3 \\ +1 & CA & F & 1910 & Helen & 239 & 4 \\ +2 & CA & F & 1910 & Dorothy & 220 & 6 \\ +3 & CA & F & 1910 & Margaret & 163 & 7 \\ +4 & CA & F & 1910 & Frances & 134 & 6 \\ +\end{longtable} + +If we want to remove a column or row of a \texttt{DataFrame}, we can +call the \texttt{.drop} +\href{https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html}{(documentation)} +method. Use the \texttt{axis} parameter to specify whether a column or +row should be dropped. Unless otherwise specified, \texttt{pandas} will +assume that we are dropping a row by default. 
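
For instance, because rows are the default, passing an index label with
no \texttt{axis} argument drops that row. The cell below is an
illustrative sketch of ours; the lecture itself moves straight to
dropping a column.

\begin{verbatim}
# With no axis specified, .drop() treats its argument as a row label,
# so this returns a copy of babynames without the row labeled 0
babynames.drop(0).head()

# Equivalent, with the axis spelled out explicitly:
# babynames.drop(0, axis="index").head()
\end{verbatim}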
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Drop our new "Length" column from the DataFrame} +\NormalTok{babynames }\OperatorTok{=}\NormalTok{ babynames.drop(}\StringTok{"Length"}\NormalTok{, axis}\OperatorTok{=}\StringTok{"columns"}\NormalTok{)} +\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 \\ +1 & CA & F & 1910 & Helen & 239 \\ +2 & CA & F & 1910 & Dorothy & 220 \\ +3 & CA & F & 1910 & Margaret & 163 \\ +4 & CA & F & 1910 & Frances & 134 \\ +\end{longtable} + +Notice that we \emph{re-assigned} \texttt{babynames} to the result of +\texttt{babynames.drop(...)}. This is a subtle but important point: +\texttt{pandas} table operations \textbf{do not occur in-place}. Calling +\texttt{df.drop(...)} will output a \emph{copy} of \texttt{df} with the +row/column of interest removed without modifying the original +\texttt{df} table. + +In other words, if we simply call: + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# This creates a copy of \textasciigrave{}babynames\textasciigrave{} and removes the column "Name"...} +\NormalTok{babynames.drop(}\StringTok{"Name"}\NormalTok{, axis}\OperatorTok{=}\StringTok{"columns"}\NormalTok{)} + +\CommentTok{\# ...but the original \textasciigrave{}babynames\textasciigrave{} is unchanged! } +\CommentTok{\# Notice that the "Name" column is still present} +\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 \\ +1 & CA & F & 1910 & Helen & 239 \\ +2 & CA & F & 1910 & Dorothy & 220 \\ +3 & CA & F & 1910 & Margaret & 163 \\ +4 & CA & F & 1910 & Frances & 134 \\ +\end{longtable} + +\section{Useful Utility Functions}\label{useful-utility-functions} + +\texttt{pandas} contains an extensive library of functions that can help +shorten the process of setting and getting information from its data +structures. In the following section, we will give overviews of each of +the main utility functions that will help us in Data 100. + +Discussing all functionality offered by \texttt{pandas} could take an +entire semester! We will walk you through the most commonly-used +functions and encourage you to explore and experiment on your own. + +\begin{itemize} +\tightlist +\item + \texttt{NumPy} and built-in function support +\item + \texttt{.shape} +\item + \texttt{.size} +\item + \texttt{.describe()} +\item + \texttt{.sample()} +\item + \texttt{.value\_counts()} +\item + \texttt{.unique()} +\item + \texttt{.sort\_values()} +\end{itemize} + +The \texttt{pandas} +\href{https://pandas.pydata.org/docs/reference/index.html}{documentation} +will be a valuable resource in Data 100 and beyond. + +\subsection{\texorpdfstring{\texttt{NumPy}}{NumPy}}\label{numpy} + +\texttt{pandas} is designed to work well with \texttt{NumPy}, the +framework for array computations you encountered in +\href{https://www.data8.org/su23/reference/\#array-functions-and-methods}{Data +8}. Just about any \texttt{NumPy} function can be applied to +\texttt{pandas} \texttt{DataFrame}s and \texttt{Series}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Pull out the number of babies named Yash each year} +\NormalTok{yash\_count }\OperatorTok{=}\NormalTok{ babynames[babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Yash"}\NormalTok{][}\StringTok{"Count"}\NormalTok{]} +\NormalTok{yash\_count.head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +331824 8 +334114 9 +336390 11 +338773 12 +341387 10 +Name: Count, dtype: int64 +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Average number of babies named Yash each year} +\NormalTok{np.mean(yash\_count)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(17.142857142857142) +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Max number of babies named Yash born in any one year} +\NormalTok{np.}\BuiltInTok{max}\NormalTok{(yash\_count)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.int64(29) +\end{verbatim} + +\subsection{\texorpdfstring{\texttt{.shape} and +\texttt{.size}}{.shape and .size}}\label{shape-and-.size} + +\texttt{.shape} and \texttt{.size} are attributes of \texttt{Series} and +\texttt{DataFrame}s that measure the ``amount'' of data stored in the +structure. Calling \texttt{.shape} returns a tuple containing the number +of rows and columns present in the \texttt{DataFrame} or +\texttt{Series}. \texttt{.size} is used to find the total number of +elements in a structure, equivalent to the number of rows times the +number of columns. + +Many functions strictly require the dimensions of the arguments along +certain axes to match. Calling these dimension-finding functions is much +faster than counting all of the items by hand. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Return the shape of the DataFrame, in the format (num\_rows, num\_columns)} +\NormalTok{babynames.shape} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(407428, 5) +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Return the size of the DataFrame, equal to num\_rows * num\_columns} +\NormalTok{babynames.size} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +2037140 +\end{verbatim} + +\subsection{\texorpdfstring{\texttt{.describe()}}{.describe()}}\label{describe} + +If many statistics are required from a \texttt{DataFrame} (minimum +value, maximum value, mean value, etc.), then \texttt{.describe()} +\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.describe.html}{(documentation)} +can be used to compute all of them at once. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames.describe()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Year & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +count & 407428.000000 & 407428.000000 \\ +mean & 1985.733609 & 79.543456 \\ +std & 27.007660 & 293.698654 \\ +min & 1910.000000 & 5.000000 \\ +25\% & 1969.000000 & 7.000000 \\ +50\% & 1992.000000 & 13.000000 \\ +75\% & 2008.000000 & 38.000000 \\ +max & 2022.000000 & 8260.000000 \\ +\end{longtable} + +A different set of statistics will be reported if \texttt{.describe()} +is called on a \texttt{Series}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames[}\StringTok{"Sex"}\NormalTok{].describe()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +count 407428 +unique 2 +top F +freq 239537 +Name: Sex, dtype: object +\end{verbatim} + +\subsection{\texorpdfstring{\texttt{.sample()}}{.sample()}}\label{sample} + +As we will see later in the semester, random processes are at the heart +of many data science techniques (for example, train-test splits, +bootstrapping, and cross-validation). \texttt{.sample()} +\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sample.html}{(documentation)} +lets us quickly select random entries (a row if called from a +\texttt{DataFrame}, or a value if called from a \texttt{Series}). + +By default, \texttt{.sample()} selects entries \emph{without} +replacement. Pass in the argument \texttt{replace=True} to sample with +replacement. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Sample a single row} +\NormalTok{babynames.sample()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +186466 & CA & F & 2009 & Karol & 26 \\ +\end{longtable} + +Naturally, this can be chained with other methods and operators +(\texttt{iloc}, etc.). + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Sample 5 random rows, and select all columns after column 2} +\NormalTok{babynames.sample(}\DecValTok{5}\NormalTok{).iloc[:, }\DecValTok{2}\NormalTok{:]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +293565 & 1977 & Dominique & 17 \\ +22880 & 1946 & Trudy & 70 \\ +285883 & 1972 & William & 1930 \\ +144771 & 1998 & Hayleigh & 6 \\ +184846 & 2008 & Graci & 5 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Randomly sample 4 names from the year 2000, with replacement, and select all columns after column 2} +\NormalTok{babynames[babynames[}\StringTok{"Year"}\NormalTok{] }\OperatorTok{==} \DecValTok{2000}\NormalTok{].sample(}\DecValTok{4}\NormalTok{, replace }\OperatorTok{=} \VariableTok{True}\NormalTok{).iloc[:, }\DecValTok{2}\NormalTok{:]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +343283 & 2000 & Arian & 21 \\ +343478 & 2000 & Terence & 16 \\ +344052 & 2000 & Gaven & 8 \\ +150023 & 2000 & Kiersten & 31 \\ +\end{longtable} + +\subsection{\texorpdfstring{\texttt{.value\_counts()}}{.value\_counts()}}\label{value_counts} + +The \texttt{Series.value\_counts()} +\href{https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html}{(documentation)} +method counts the number of occurrence of each unique value in a +\texttt{Series}. In other words, it \emph{counts} the number of times +each unique \emph{value} appears. This is often useful for determining +the most or least common entries in a \texttt{Series}. + +In the example below, we can determine the name with the most years in +which at least one person has taken that name by counting the number of +times each name appears in the \texttt{"Name"} column of +\texttt{babynames}. Note that the return value is also a +\texttt{Series}. 

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames[}\StringTok{"Name"}\NormalTok{].value\_counts().head()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Name
Jean 223
Francis 221
Guadalupe 218
Jessie 217
Marion 214
Name: count, dtype: int64
\end{verbatim}

\subsection{\texorpdfstring{\texttt{.unique()}}{.unique()}}\label{unique}

If we have a \texttt{Series} with many repeated values, then
\texttt{.unique()}
\href{https://pandas.pydata.org/docs/reference/api/pandas.unique.html}{(documentation)}
can be used to identify only the \emph{unique} values. Here we return an
array of all the names in \texttt{babynames}.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames[}\StringTok{"Name"}\NormalTok{].unique()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
array(['Mary', 'Helen', 'Dorothy', ..., 'Zae', 'Zai', 'Zayvier'],
      dtype=object)
\end{verbatim}

\subsection{\texorpdfstring{\texttt{.sort\_values()}}{.sort\_values()}}\label{sort_values}

Ordering a \texttt{DataFrame} can be useful for isolating extreme
values. For example, the first 5 entries of a column sorted in
descending order (that is, from highest to lowest) are the largest 5
values. \texttt{.sort\_values}
\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html}{(documentation)}
allows us to order a \texttt{DataFrame} or \texttt{Series} by a
specified column. We can choose to either receive the rows in
\texttt{ascending} order (default) or \texttt{descending} order.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sort the "Count" column from highest to lowest}
\NormalTok{babynames.sort\_values(by}\OperatorTok{=}\StringTok{"Count"}\NormalTok{, ascending}\OperatorTok{=}\VariableTok{False}\NormalTok{).head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
268041 & CA & M & 1957 & Michael & 8260 \\
267017 & CA & M & 1956 & Michael & 8258 \\
317387 & CA & M & 1990 & Michael & 8246 \\
281850 & CA & M & 1969 & Michael & 8245 \\
283146 & CA & M & 1970 & Michael & 8196 \\
\end{longtable}

Unlike when calling \texttt{.sort\_values()} on a \texttt{DataFrame},
we do not need to explicitly specify the column used for sorting when
calling \texttt{.sort\_values()} on a \texttt{Series}. We can still
specify the ordering paradigm -- that is, whether values are sorted in
ascending or descending order.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Sort the "Name" Series alphabetically}
\NormalTok{babynames[}\StringTok{"Name"}\NormalTok{].sort\_values(ascending}\OperatorTok{=}\VariableTok{True}\NormalTok{).head()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
366001 Aadan
384005 Aadan
369120 Aadan
398211 Aadarsh
370306 Aaden
Name: Name, dtype: object
\end{verbatim}

\section{Parting Note}\label{parting-note-1}

Manipulating \texttt{DataFrame}s is not a skill that is mastered in just
one day. Due to the flexibility of \texttt{pandas}, there are many
different ways to get from point A to point B. We recommend trying
multiple different ways to solve the same problem to gain even more
practice and reach that point of mastery sooner.

Next, we will start digging deeper into the mechanics behind grouping
data. 

\bookmarksetup{startatroot}

\chapter{Pandas III}\label{pandas-iii}

\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]

\begin{itemize}
\tightlist
\item
  Perform advanced aggregation using \texttt{.groupby()}
\item
  Use the \texttt{pd.pivot\_table} method to construct a pivot table
\item
  Perform simple merges between DataFrames using \texttt{pd.merge()}
\end{itemize}

\end{tcolorbox}

We will introduce the concept of aggregating data -- we will familiarize
ourselves with \texttt{GroupBy} objects and use them as tools to
consolidate and summarize a \texttt{DataFrame}. In this lecture, we will
explore working with the different aggregation functions and dive into
some advanced \texttt{.groupby} methods to show just how powerful of a
resource they can be for understanding our data. We will also introduce
other techniques for data aggregation to provide flexibility in how we
manipulate our tables.

\section{Custom Sorts}\label{custom-sorts}

First, let's finish our discussion about sorting. Let's try to solve a
sorting problem using different approaches. Assume we want to find the
longest baby names and sort our data accordingly.

We'll start by loading the \texttt{babynames} dataset. Note that this
dataset is filtered to only contain data from California.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# This code pulls census data and loads it into a DataFrame}
\CommentTok{\# We won\textquotesingle{}t cover it explicitly in this class, but you are welcome to explore it on your own}
\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd}
\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np}
\ImportTok{import}\NormalTok{ urllib.request}
\ImportTok{import}\NormalTok{ os.path}
\ImportTok{import}\NormalTok{ zipfile}

\NormalTok{data\_url }\OperatorTok{=} \StringTok{"https://www.ssa.gov/oact/babynames/state/namesbystate.zip"}
\NormalTok{local\_filename }\OperatorTok{=} \StringTok{"data/babynamesbystate.zip"}
\ControlFlowTok{if} \KeywordTok{not}\NormalTok{ os.path.exists(local\_filename): }\CommentTok{\# If the data exists don\textquotesingle{}t download again}
    \ControlFlowTok{with}\NormalTok{ urllib.request.urlopen(data\_url) }\ImportTok{as}\NormalTok{ resp, }\BuiltInTok{open}\NormalTok{(local\_filename, }\StringTok{\textquotesingle{}wb\textquotesingle{}}\NormalTok{) }\ImportTok{as}\NormalTok{ f:}
\NormalTok{        f.write(resp.read())}

\NormalTok{zf }\OperatorTok{=}\NormalTok{ zipfile.ZipFile(local\_filename, }\StringTok{\textquotesingle{}r\textquotesingle{}}\NormalTok{)}

\NormalTok{ca\_name }\OperatorTok{=} \StringTok{\textquotesingle{}STATE.CA.TXT\textquotesingle{}}
\NormalTok{field\_names }\OperatorTok{=}\NormalTok{ [}\StringTok{\textquotesingle{}State\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Sex\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Year\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Name\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Count\textquotesingle{}}\NormalTok{]}
\ControlFlowTok{with}\NormalTok{ zf.}\BuiltInTok{open}\NormalTok{(ca\_name) 
}\ImportTok{as}\NormalTok{ fh:} +\NormalTok{ babynames }\OperatorTok{=}\NormalTok{ pd.read\_csv(fh, header}\OperatorTok{=}\VariableTok{None}\NormalTok{, names}\OperatorTok{=}\NormalTok{field\_names)} + +\NormalTok{babynames.tail(}\DecValTok{10}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +407418 & CA & M & 2022 & Zach & 5 \\ +407419 & CA & M & 2022 & Zadkiel & 5 \\ +407420 & CA & M & 2022 & Zae & 5 \\ +407421 & CA & M & 2022 & Zai & 5 \\ +407422 & CA & M & 2022 & Zay & 5 \\ +407423 & CA & M & 2022 & Zayvier & 5 \\ +407424 & CA & M & 2022 & Zia & 5 \\ +407425 & CA & M & 2022 & Zora & 5 \\ +407426 & CA & M & 2022 & Zuriel & 5 \\ +407427 & CA & M & 2022 & Zylo & 5 \\ +\end{longtable} + +\subsection{Approach 1: Create a Temporary +Column}\label{approach-1-create-a-temporary-column} + +One method to do this is to first start by creating a column that +contains the lengths of the names. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Create a Series of the length of each name} +\NormalTok{babyname\_lengths }\OperatorTok{=}\NormalTok{ babynames[}\StringTok{"Name"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.}\BuiltInTok{len}\NormalTok{()} + +\CommentTok{\# Add a column named "name\_lengths" that includes the length of each name} +\NormalTok{babynames[}\StringTok{"name\_lengths"}\NormalTok{] }\OperatorTok{=}\NormalTok{ babyname\_lengths} +\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & name\_lengths \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & CA & F & 1910 & Mary & 295 & 4 \\ +1 & CA & F & 1910 & Helen & 239 & 5 \\ +2 & CA & F & 1910 & Dorothy & 220 & 7 \\ +3 & CA & F & 1910 & Margaret & 163 & 8 \\ +4 & CA & F & 1910 & Frances & 134 & 7 \\ +\end{longtable} + +We can then sort the \texttt{DataFrame} by that column using +\texttt{.sort\_values()}: + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Sort by the temporary column} +\NormalTok{babynames }\OperatorTok{=}\NormalTok{ babynames.sort\_values(by}\OperatorTok{=}\StringTok{"name\_lengths"}\NormalTok{, ascending}\OperatorTok{=}\VariableTok{False}\NormalTok{)} +\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & name\_lengths \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +334166 & CA & M & 1996 & Franciscojavier & 8 & 15 \\ +337301 & CA & M & 1997 & Franciscojavier & 5 & 15 \\ +339472 & CA & M & 1998 & Franciscojavier & 6 & 15 \\ +321792 & CA & M & 1991 & Ryanchristopher & 7 & 15 \\ +327358 & CA & M & 1993 & Johnchristopher & 5 & 15 \\ +\end{longtable} + +Finally, we can drop the \texttt{name\_length} column from +\texttt{babynames} to prevent our table from getting cluttered. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Drop the \textquotesingle{}name\_length\textquotesingle{} column} +\NormalTok{babynames }\OperatorTok{=}\NormalTok{ babynames.drop(}\StringTok{"name\_lengths"}\NormalTok{, axis}\OperatorTok{=}\StringTok{\textquotesingle{}columns\textquotesingle{}}\NormalTok{)} +\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +334166 & CA & M & 1996 & Franciscojavier & 8 \\ +337301 & CA & M & 1997 & Franciscojavier & 5 \\ +339472 & CA & M & 1998 & Franciscojavier & 6 \\ +321792 & CA & M & 1991 & Ryanchristopher & 7 \\ +327358 & CA & M & 1993 & Johnchristopher & 5 \\ +\end{longtable} + +\subsection{\texorpdfstring{Approach 2: Sorting using the \texttt{key} +Argument}{Approach 2: Sorting using the key Argument}}\label{approach-2-sorting-using-the-key-argument} + +Another way to approach this is to use the \texttt{key} argument of +\texttt{.sort\_values()}. Here we can specify that we want to sort +\texttt{"Name"} values by their length. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames.sort\_values(}\StringTok{"Name"}\NormalTok{, key}\OperatorTok{=}\KeywordTok{lambda}\NormalTok{ x: x.}\BuiltInTok{str}\NormalTok{.}\BuiltInTok{len}\NormalTok{(), ascending}\OperatorTok{=}\VariableTok{False}\NormalTok{).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +334166 & CA & M & 1996 & Franciscojavier & 8 \\ +327472 & CA & M & 1993 & Ryanchristopher & 5 \\ +337301 & CA & M & 1997 & Franciscojavier & 5 \\ +337477 & CA & M & 1997 & Ryanchristopher & 5 \\ +312543 & CA & M & 1987 & Franciscojavier & 5 \\ +\end{longtable} + +\subsection{\texorpdfstring{Approach 3: Sorting using the \texttt{map} +Function}{Approach 3: Sorting using the map Function}}\label{approach-3-sorting-using-the-map-function} + +We can also use the \texttt{map} function on a \texttt{Series} to solve +this. Say we want to sort the \texttt{babynames} table by the number of +\texttt{"dr"}'s and \texttt{"ea"}'s in each \texttt{"Name"}. We'll +define the function \texttt{dr\_ea\_count} to help us out. 

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# First, define a function to count the number of times "dr" or "ea" appear in each name}
\KeywordTok{def}\NormalTok{ dr\_ea\_count(string):}
    \ControlFlowTok{return}\NormalTok{ string.count(}\StringTok{\textquotesingle{}dr\textquotesingle{}}\NormalTok{) }\OperatorTok{+}\NormalTok{ string.count(}\StringTok{\textquotesingle{}ea\textquotesingle{}}\NormalTok{)}

\CommentTok{\# Then, use \textasciigrave{}map\textasciigrave{} to apply \textasciigrave{}dr\_ea\_count\textasciigrave{} to each name in the "Name" column}
\NormalTok{babynames[}\StringTok{"dr\_ea\_count"}\NormalTok{] }\OperatorTok{=}\NormalTok{ babynames[}\StringTok{"Name"}\NormalTok{].}\BuiltInTok{map}\NormalTok{(dr\_ea\_count)}

\CommentTok{\# Sort the DataFrame by the new "dr\_ea\_count" column so we can see our handiwork}
\NormalTok{babynames }\OperatorTok{=}\NormalTok{ babynames.sort\_values(by}\OperatorTok{=}\StringTok{"dr\_ea\_count"}\NormalTok{, ascending}\OperatorTok{=}\VariableTok{False}\NormalTok{)}
\NormalTok{babynames.head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count & dr\_ea\_count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
115957 & CA & F & 1990 & Deandrea & 5 & 3 \\
101976 & CA & F & 1986 & Deandrea & 6 & 3 \\
131029 & CA & F & 1994 & Leandrea & 5 & 3 \\
108731 & CA & F & 1988 & Deandrea & 5 & 3 \\
308131 & CA & M & 1985 & Deandrea & 6 & 3 \\
\end{longtable}

We can drop the \texttt{dr\_ea\_count} column once we're done using it to
maintain a neat table.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Drop the \textasciigrave{}dr\_ea\_count\textasciigrave{} column}
\NormalTok{babynames }\OperatorTok{=}\NormalTok{ babynames.drop(}\StringTok{"dr\_ea\_count"}\NormalTok{, axis }\OperatorTok{=} \StringTok{\textquotesingle{}columns\textquotesingle{}}\NormalTok{)}
\NormalTok{babynames.head(}\DecValTok{5}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& State & Sex & Year & Name & Count \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
115957 & CA & F & 1990 & Deandrea & 5 \\
101976 & CA & F & 1986 & Deandrea & 6 \\
131029 & CA & F & 1994 & Leandrea & 5 \\
108731 & CA & F & 1988 & Deandrea & 5 \\
308131 & CA & M & 1985 & Deandrea & 6 \\
\end{longtable}

\section{\texorpdfstring{Aggregating Data with
\texttt{.groupby}}{Aggregating Data with .groupby}}\label{aggregating-data-with-.groupby}

Up until this point, we have been working with individual rows of
\texttt{DataFrame}s. As data scientists, we often wish to investigate
trends across a larger \emph{subset} of our data. For example, we may
want to compute some summary statistic (the mean, median, sum, etc.) for
a group of rows in our \texttt{DataFrame}. To do this, we'll use
\texttt{pandas} \texttt{GroupBy} objects. Our goal is to group together
rows that fall under the same category and perform an operation that
aggregates across all rows in the category.

Let's say we wanted to aggregate all rows in \texttt{babynames} for a
given year.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{babynames.groupby(}\StringTok{"Year"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
<pandas.core.groupby.generic.DataFrameGroupBy object at 0x...>
\end{verbatim}

What does this strange output mean? 
Calling \texttt{.groupby} +\href{https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html}{(documentation)} +has generated a \texttt{GroupBy} object. You can imagine this as a set +of ``mini'' sub-\texttt{DataFrame}s, where each subframe contains all of +the rows from \texttt{babynames} that correspond to a particular year. + +The diagram below shows a simplified view of \texttt{babynames} to help +illustrate this idea. + +We can't work with a \texttt{GroupBy} object directly -- that is why you +saw that strange output earlier rather than a standard view of a +\texttt{DataFrame}. To actually manipulate values within these ``mini'' +\texttt{DataFrame}s, we'll need to call an \emph{aggregation method}. +This is a method that tells \texttt{pandas} how to aggregate the values +within the \texttt{GroupBy} object. Once the aggregation is applied, +\texttt{pandas} will return a normal (now grouped) \texttt{DataFrame}. + +The first aggregation method we'll consider is \texttt{.agg}. The +\texttt{.agg} method takes in a function as its argument; this function +is then applied to each column of a ``mini'' grouped DataFrame. We end +up with a new \texttt{DataFrame} with one aggregated row per subframe. +Let's see this in action by finding the \texttt{sum} of all counts for +each year in \texttt{babynames} -- this is equivalent to finding the +number of babies born in each year. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames[[}\StringTok{"Year"}\NormalTok{, }\StringTok{"Count"}\NormalTok{]].groupby(}\StringTok{"Year"}\NormalTok{).agg(}\BuiltInTok{sum}\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/2718070104.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Year & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 9163 \\ +1911 & 9983 \\ +1912 & 17946 \\ +1913 & 22094 \\ +1914 & 26926 \\ +\end{longtable} + +We can relate this back to the diagram we used above. Remember that the +diagram uses a simplified version of \texttt{babynames}, which is why we +see smaller values for the summed counts. + +\begin{figure}[H] + +{\centering \includegraphics{pandas_3/images/agg.png} + +} + +\caption{Performing an aggregation} + +\end{figure}% + +Calling \texttt{.agg} has condensed each subframe back into a single +row. This gives us our final output: a \texttt{DataFrame} that is now +indexed by \texttt{"Year"}, with a single row for each unique year in +the original \texttt{babynames} DataFrame. + +There are many different aggregation functions we can use, all of which +are useful in different applications. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames[[}\StringTok{"Year"}\NormalTok{, }\StringTok{"Count"}\NormalTok{]].groupby(}\StringTok{"Year"}\NormalTok{).agg(}\BuiltInTok{min}\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/86785752.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.min. In a future version of pandas, the provided callable will be used directly. 
To keep current behavior pass the string "min" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Year & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 5 \\ +1911 & 5 \\ +1912 & 5 \\ +1913 & 5 \\ +1914 & 5 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames[[}\StringTok{"Year"}\NormalTok{, }\StringTok{"Count"}\NormalTok{]].groupby(}\StringTok{"Year"}\NormalTok{).agg(}\BuiltInTok{max}\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/3032256904.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Year & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 295 \\ +1911 & 390 \\ +1912 & 534 \\ +1913 & 614 \\ +1914 & 773 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Same result, but now we explicitly tell pandas to only consider the "Count" column when summing} +\NormalTok{babynames.groupby(}\StringTok{"Year"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{sum}\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/1958904241.py:2: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Year & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 9163 \\ +1911 & 9983 \\ +1912 & 17946 \\ +1913 & 22094 \\ +1914 & 26926 \\ +\end{longtable} + +There are many different aggregations that can be applied to the grouped +data. The primary requirement is that an aggregation function must: + +\begin{itemize} +\tightlist +\item + Take in a \texttt{Series} of data (a single column of the grouped + subframe). +\item + Return a single value that aggregates this \texttt{Series}. +\end{itemize} + +\subsection{Aggregation Functions}\label{aggregation-functions} + +Because of this fairly broad requirement, \texttt{pandas} offers many +ways of computing an aggregation. + +\textbf{In-built} Python operations -- such as \texttt{sum}, +\texttt{max}, and \texttt{min} -- are automatically recognized by +\texttt{pandas}. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# What is the minimum count for each name in any year?} +\NormalTok{babynames.groupby(}\StringTok{"Name"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{min}\NormalTok{).head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/3244314896.py:2: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.min. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "min" instead. 
+\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Name & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadan & 5 \\ +Aadarsh & 6 \\ +Aaden & 10 \\ +Aadhav & 6 \\ +Aadhini & 6 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# What is the largest single{-}year count of each name?} +\NormalTok{babynames.groupby(}\StringTok{"Name"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{max}\NormalTok{).head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/3805876622.py:2: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Name & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadan & 7 \\ +Aadarsh & 6 \\ +Aaden & 158 \\ +Aadhav & 8 \\ +Aadhini & 6 \\ +\end{longtable} + +As mentioned previously, functions from the \texttt{NumPy} library, such +as \texttt{np.mean}, \texttt{np.max}, \texttt{np.min}, and +\texttt{np.sum}, are also fair game in \texttt{pandas}. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# What is the average count for each name across all years?} +\NormalTok{babynames.groupby(}\StringTok{"Name"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(np.mean).head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/308986604.py:2: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.mean. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "mean" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Name & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadan & 6.000000 \\ +Aadarsh & 6.000000 \\ +Aaden & 46.214286 \\ +Aadhav & 6.750000 \\ +Aadhini & 6.000000 \\ +\end{longtable} + +\texttt{pandas} also offers a number of in-built functions. Functions +that are native to \texttt{pandas} can be referenced using their string +name within a call to \texttt{.agg}. Some examples include: + +\begin{itemize} +\tightlist +\item + \texttt{.agg("sum")} +\item + \texttt{.agg("max")} +\item + \texttt{.agg("min")} +\item + \texttt{.agg("mean")} +\item + \texttt{.agg("first")} +\item + \texttt{.agg("last")} +\end{itemize} + +The latter two entries in this list -- \texttt{"first"} and +\texttt{"last"} -- are unique to \texttt{pandas}. They return the first +or last entry in a subframe column. Why might this be useful? Consider a +case where \emph{multiple} columns in a group share identical +information. To represent this information in the grouped output, we can +simply grab the first or last entry, which we know will be identical to +all other entries. + +Let's illustrate this with an example. Say we add a new column to +\texttt{babynames} that contains the first letter of each name. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Imagine we had an additional column, "First Letter". 
We\textquotesingle{}ll explain this code next week} +\NormalTok{babynames[}\StringTok{"First Letter"}\NormalTok{] }\OperatorTok{=}\NormalTok{ babynames[}\StringTok{"Name"}\NormalTok{].}\BuiltInTok{str}\NormalTok{[}\DecValTok{0}\NormalTok{]} + +\CommentTok{\# We construct a simplified DataFrame containing just a subset of columns} +\NormalTok{babynames\_new }\OperatorTok{=}\NormalTok{ babynames[[}\StringTok{"Name"}\NormalTok{, }\StringTok{"First Letter"}\NormalTok{, }\StringTok{"Year"}\NormalTok{]]} +\NormalTok{babynames\_new.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& Name & First Letter & Year \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +115957 & Deandrea & D & 1990 \\ +101976 & Deandrea & D & 1986 \\ +131029 & Leandrea & L & 1994 \\ +108731 & Deandrea & D & 1988 \\ +308131 & Deandrea & D & 1985 \\ +\end{longtable} + +If we form groups for each name in the dataset, \texttt{"First\ Letter"} +will be the same for all members of the group. This means that if we +simply select the first entry for \texttt{"First\ Letter"} in the group, +we'll represent all data in that group. + +We can use a dictionary to apply different aggregation functions to each +column during grouping. + +\begin{figure}[H] + +{\centering \includegraphics{pandas_3/images/first.png} + +} + +\caption{Aggregating using ``first''} + +\end{figure}% + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames\_new.groupby(}\StringTok{"Name"}\NormalTok{).agg(\{}\StringTok{"First Letter"}\NormalTok{:}\StringTok{"first"}\NormalTok{, }\StringTok{"Year"}\NormalTok{:}\StringTok{"max"}\NormalTok{\}).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& First Letter & Year \\ +Name & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadan & A & 2014 \\ +Aadarsh & A & 2019 \\ +Aaden & A & 2020 \\ +Aadhav & A & 2019 \\ +Aadhini & A & 2022 \\ +\end{longtable} + +\subsection{Plotting Birth Counts}\label{plotting-birth-counts} + +Let's use \texttt{.agg} to find the total number of babies born in each +year. Recall that using \texttt{.agg} with \texttt{.groupby()} follows +the format: +\texttt{df.groupby(column\_name).agg(aggregation\_function)}. The line +of code below gives us the total number of babies born in each year. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames.groupby(}\StringTok{"Year"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{sum}\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\CommentTok{\# Alternative 1} +\CommentTok{\# babynames.groupby("Year")[["Count"]].sum()} +\CommentTok{\# Alternative 2} +\CommentTok{\# babynames.groupby("Year").sum(numeric\_only=True)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/390646742.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Year & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 9163 \\ +1911 & 9983 \\ +1912 & 17946 \\ +1913 & 22094 \\ +1914 & 26926 \\ +\end{longtable} + +Here's an illustration of the process: + +Plotting the \texttt{Dataframe} we obtain tells an interesting story. 

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{import}\NormalTok{ plotly.express }\ImportTok{as}\NormalTok{ px}
\NormalTok{puzzle2 }\OperatorTok{=}\NormalTok{ babynames.groupby(}\StringTok{"Year"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{sum}\NormalTok{)}
\NormalTok{px.line(puzzle2, y }\OperatorTok{=} \StringTok{"Count"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/4066413905.py:2: FutureWarning:

The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
\end{verbatim}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

\textbf{A word of warning}: we made an enormous assumption when we
decided to use this dataset to estimate birth rate. According to
\href{https://lao.ca.gov/LAOEconTax/Article/Detail/691}{this article
from the Legislative Analyst Office}, the true number of babies born in
California in 2020 was 421,275. However, our plot shows 362,882 babies
-- what happened?

\subsection{\texorpdfstring{Summary of the \texttt{.groupby()}
Function}{Summary of the .groupby() Function}}\label{summary-of-the-.groupby-function}

A \texttt{groupby} operation involves some combination of
\textbf{splitting a \texttt{DataFrame} into grouped subframes},
\textbf{applying a function}, and \textbf{combining the results}.

For some arbitrary \texttt{DataFrame} \texttt{df} below, the code
\texttt{df.groupby("year").agg(sum)} does the following:

\begin{itemize}
\tightlist
\item
  \textbf{Splits} the \texttt{DataFrame} into sub-\texttt{DataFrame}s
  with rows belonging to the same year.
\item
  \textbf{Applies} the \texttt{sum} function to each column of each
  sub-\texttt{DataFrame}.
\item
  \textbf{Combines} the results of \texttt{sum} into a single
  \texttt{DataFrame}, indexed by \texttt{year}.
\end{itemize}

\subsection{\texorpdfstring{Revisiting the \texttt{.agg()}
Function}{Revisiting the .agg() Function}}\label{revisiting-the-.agg-function}

\texttt{.agg()} can take in any function that aggregates several values
into one summary value. Some commonly-used aggregation functions can
even be called directly, without explicit use of \texttt{.agg()}. For
example, we can call \texttt{.mean()} on \texttt{.groupby()}:

\begin{verbatim}
babynames.groupby("Year").mean().head()
\end{verbatim}

We can now put this all into practice. Say we want to find the baby name
with sex ``F'' that has fallen in popularity the most in California. To
calculate this, we can first create a metric: ``Ratio to Peak'' (RTP).
The RTP is the ratio of babies born with a given name in 2022 to the
\emph{maximum} number of babies born with the name in \emph{any} year.

Let's start with calculating this for one baby, ``Jennifer''. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# We filter by babies with sex "F" and sort by "Year"} +\NormalTok{f\_babynames }\OperatorTok{=}\NormalTok{ babynames[babynames[}\StringTok{"Sex"}\NormalTok{] }\OperatorTok{==} \StringTok{"F"}\NormalTok{]} +\NormalTok{f\_babynames }\OperatorTok{=}\NormalTok{ f\_babynames.sort\_values([}\StringTok{"Year"}\NormalTok{])} + +\CommentTok{\# Determine how many Jennifers were born in CA per year} +\NormalTok{jenn\_counts\_series }\OperatorTok{=}\NormalTok{ f\_babynames[f\_babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Jennifer"}\NormalTok{][}\StringTok{"Count"}\NormalTok{]} + +\CommentTok{\# Determine the max number of Jennifers born in a year and the number born in 2022 } +\CommentTok{\# to calculate RTP} +\NormalTok{max\_jenn }\OperatorTok{=} \BuiltInTok{max}\NormalTok{(f\_babynames[f\_babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Jennifer"}\NormalTok{][}\StringTok{"Count"}\NormalTok{])} +\NormalTok{curr\_jenn }\OperatorTok{=}\NormalTok{ f\_babynames[f\_babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Jennifer"}\NormalTok{][}\StringTok{"Count"}\NormalTok{].iloc[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{]} +\NormalTok{rtp }\OperatorTok{=}\NormalTok{ curr\_jenn }\OperatorTok{/}\NormalTok{ max\_jenn} +\NormalTok{rtp} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(0.018796372629843364) +\end{verbatim} + +By creating a function to calculate RTP and applying it to our +\texttt{DataFrame} by using \texttt{.groupby()}, we can easily compute +the RTP for all names at once! + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ ratio\_to\_peak(series):} + \ControlFlowTok{return}\NormalTok{ series.iloc[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{] }\OperatorTok{/} \BuiltInTok{max}\NormalTok{(series)} + +\CommentTok{\#Using .groupby() to apply the function} +\NormalTok{rtp\_table }\OperatorTok{=}\NormalTok{ f\_babynames.groupby(}\StringTok{"Name"}\NormalTok{)[[}\StringTok{"Year"}\NormalTok{, }\StringTok{"Count"}\NormalTok{]].agg(ratio\_to\_peak)} +\NormalTok{rtp\_table.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Year & Count \\ +Name & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadhini & 1.0 & 1.000000 \\ +Aadhira & 1.0 & 0.500000 \\ +Aadhya & 1.0 & 0.660000 \\ +Aadya & 1.0 & 0.586207 \\ +Aahana & 1.0 & 0.269231 \\ +\end{longtable} + +In the rows shown above, we can see that every row shown has a +\texttt{Year} value of \texttt{1.0}. + +This is the ``\textbf{\texttt{pandas}}-ification'' of logic you saw in +Data 8. Much of the logic you've learned in Data 8 will serve you well +in Data 100. + +\subsection{Nuisance Columns}\label{nuisance-columns} + +Note that you must be careful with which columns you apply the +\texttt{.agg()} function to. If we were to apply our function to the +table as a whole by doing +\texttt{f\_babynames.groupby("Name").agg(ratio\_to\_peak)}, executing +our \texttt{.agg()} call would result in a \texttt{TypeError}. + +We can avoid this issue (and prevent unintentional loss of data) by +explicitly selecting column(s) we want to apply our aggregation function +to \textbf{BEFORE} calling \texttt{.agg()}, + +\subsection{Renaming Columns After +Grouping}\label{renaming-columns-after-grouping} + +By default, \texttt{.groupby} will not rename any aggregated columns. 
As +we can see in the table above, the aggregated column is still named +\texttt{Count} even though it now represents the RTP. For better +readability, we can rename \texttt{Count} to \texttt{Count\ RTP} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{rtp\_table }\OperatorTok{=}\NormalTok{ rtp\_table.rename(columns }\OperatorTok{=}\NormalTok{ \{}\StringTok{"Count"}\NormalTok{: }\StringTok{"Count RTP"}\NormalTok{\})} +\NormalTok{rtp\_table} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Year & Count RTP \\ +Name & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadhini & 1.0 & 1.000000 \\ +Aadhira & 1.0 & 0.500000 \\ +Aadhya & 1.0 & 0.660000 \\ +Aadya & 1.0 & 0.586207 \\ +Aahana & 1.0 & 0.269231 \\ +... & ... & ... \\ +Zyanya & 1.0 & 0.466667 \\ +Zyla & 1.0 & 1.000000 \\ +Zylah & 1.0 & 1.000000 \\ +Zyra & 1.0 & 1.000000 \\ +Zyrah & 1.0 & 0.833333 \\ +\end{longtable} + +\subsection{Some Data Science Payoff}\label{some-data-science-payoff} + +By sorting \texttt{rtp\_table}, we can see the names whose popularity +has decreased the most. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{rtp\_table }\OperatorTok{=}\NormalTok{ rtp\_table.rename(columns }\OperatorTok{=}\NormalTok{ \{}\StringTok{"Count"}\NormalTok{: }\StringTok{"Count RTP"}\NormalTok{\})} +\NormalTok{rtp\_table.sort\_values(}\StringTok{"Count RTP"}\NormalTok{).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Year & Count RTP \\ +Name & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Debra & 1.0 & 0.001260 \\ +Debbie & 1.0 & 0.002815 \\ +Carol & 1.0 & 0.003180 \\ +Tammy & 1.0 & 0.003249 \\ +Susan & 1.0 & 0.003305 \\ +\end{longtable} + +To visualize the above \texttt{DataFrame}, let's look at the line plot +below: + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ plotly.express }\ImportTok{as}\NormalTok{ px} +\NormalTok{px.line(f\_babynames[f\_babynames[}\StringTok{"Name"}\NormalTok{] }\OperatorTok{==} \StringTok{"Debra"}\NormalTok{], x }\OperatorTok{=} \StringTok{"Year"}\NormalTok{, y }\OperatorTok{=} \StringTok{"Count"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +We can get the list of the top 10 names and then plot popularity with +the following code: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{top10 }\OperatorTok{=}\NormalTok{ rtp\_table.sort\_values(}\StringTok{"Count RTP"}\NormalTok{).head(}\DecValTok{10}\NormalTok{).index} +\NormalTok{px.line(} +\NormalTok{ f\_babynames[f\_babynames[}\StringTok{"Name"}\NormalTok{].isin(top10)], } +\NormalTok{ x }\OperatorTok{=} \StringTok{"Year"}\NormalTok{, } +\NormalTok{ y }\OperatorTok{=} \StringTok{"Count"}\NormalTok{, } +\NormalTok{ color }\OperatorTok{=} \StringTok{"Name"} +\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +As a quick exercise, consider what code would compute the total number +of babies with each name. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames.groupby(}\StringTok{"Name"}\NormalTok{)[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{sum}\NormalTok{).head()} +\CommentTok{\# alternative solution: } +\CommentTok{\# babynames.groupby("Name")[["Count"]].sum()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/1912269730.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& Count \\ +Name & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Aadan & 18 \\ +Aadarsh & 6 \\ +Aaden & 647 \\ +Aadhav & 27 \\ +Aadhini & 6 \\ +\end{longtable} + +\section{\texorpdfstring{\texttt{.groupby()}, +Continued}{.groupby(), Continued}}\label{groupby-continued} + +We'll work with the \texttt{elections} \texttt{DataFrame} again. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} + +\NormalTok{elections }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/elections.csv"}\NormalTok{)} +\NormalTok{elections.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.574789 \\ +\end{longtable} + +\subsection{\texorpdfstring{Raw \texttt{GroupBy} +Objects}{Raw GroupBy Objects}}\label{raw-groupby-objects} + +The result of \texttt{groupby} applied to a \texttt{DataFrame} is a +\texttt{DataFrameGroupBy} object, \textbf{not} a \texttt{DataFrame}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{grouped\_by\_year }\OperatorTok{=}\NormalTok{ elections.groupby(}\StringTok{"Year"}\NormalTok{)} +\BuiltInTok{type}\NormalTok{(grouped\_by\_year)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +pandas.core.groupby.generic.DataFrameGroupBy +\end{verbatim} + +There are several ways to look into \texttt{DataFrameGroupBy} objects: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{grouped\_by\_party }\OperatorTok{=}\NormalTok{ elections.groupby(}\StringTok{"Party"}\NormalTok{)} +\NormalTok{grouped\_by\_party.groups} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +{'American': [22, 126], 'American Independent': [115, 119, 124], 'Anti-Masonic': [6], 'Anti-Monopoly': [38], 'Citizens': [127], 'Communist': [89], 'Constitution': [160, 164, 172], 'Constitutional Union': [24], 'Democratic': [2, 4, 8, 10, 13, 14, 17, 20, 28, 29, 34, 37, 39, 45, 47, 52, 55, 57, 64, 70, 74, 77, 81, 83, 86, 91, 94, 97, 100, 105, 108, 111, 114, 116, 118, 123, 129, 134, 137, 140, 144, 151, 158, 162, 168, 176, 178], 'Democratic-Republican': [0, 1], 'Dixiecrat': [103], 'Farmer–Labor': [78], 'Free Soil': [15, 18], 'Green': [149, 155, 156, 165, 170, 177, 181], 'Greenback': [35], 'Independent': [121, 130, 143, 161, 167, 174], 'Liberal Republican': [31], 'Libertarian': [125, 128, 132, 138, 139, 146, 153, 159, 163, 169, 175, 180], 'National Democratic': [50], 'National Republican': [3, 5], 'National Union': [27], 'Natural Law': [148], 'New Alliance': [136], 'Northern Democratic': [26], 'Populist': [48, 61, 141], 'Progressive': [68, 82, 101, 107], 'Prohibition': [41, 44, 49, 51, 54, 59, 63, 67, 73, 75, 99], 'Reform': [150, 154], 'Republican': [21, 23, 30, 32, 33, 36, 40, 43, 46, 53, 56, 60, 65, 69, 72, 79, 80, 84, 87, 90, 96, 98, 104, 106, 109, 112, 113, 117, 120, 122, 131, 133, 135, 142, 145, 152, 157, 166, 171, 173, 179], 'Socialist': [58, 62, 66, 71, 76, 85, 88, 92, 95, 102], 'Southern Democratic': [25], 'States' Rights': [110], 'Taxpayers': [147], 'Union': [93], 'Union Labor': [42], 'Whig': [7, 9, 11, 12, 16, 19]} +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{grouped\_by\_party.get\_group(}\StringTok{"Socialist"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +58 & 1904 & Eugene V. Debs & Socialist & 402810 & loss & 2.985897 \\ +62 & 1908 & Eugene V. Debs & Socialist & 420852 & loss & 2.850866 \\ +66 & 1912 & Eugene V. Debs & Socialist & 901551 & loss & 6.004354 \\ +71 & 1916 & Allan L. Benson & Socialist & 590524 & loss & 3.194193 \\ +76 & 1920 & Eugene V. Debs & Socialist & 913693 & loss & 3.428282 \\ +85 & 1928 & Norman Thomas & Socialist & 267478 & loss & 0.728623 \\ +88 & 1932 & Norman Thomas & Socialist & 884885 & loss & 2.236211 \\ +92 & 1936 & Norman Thomas & Socialist & 187910 & loss & 0.412876 \\ +95 & 1940 & Norman Thomas & Socialist & 116599 & loss & 0.234237 \\ +102 & 1948 & Norman Thomas & Socialist & 139569 & loss & 0.286312 \\ +\end{longtable} + +\subsection{\texorpdfstring{Other \texttt{GroupBy} +Methods}{Other GroupBy Methods}}\label{other-groupby-methods} + +There are many aggregation methods we can use with \texttt{.agg}. 
Some +useful options are: + +\begin{itemize} +\tightlist +\item + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.mean.html\#pandas.core.groupby.DataFrameGroupBy.mean}{\texttt{.mean}}: + creates a new \texttt{DataFrame} with the mean value of each group +\item + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.sum.html\#pandas.core.groupby.DataFrameGroupBy.sum}{\texttt{.sum}}: + creates a new \texttt{DataFrame} with the sum of each group +\item + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.max.html\#pandas.core.groupby.DataFrameGroupBy.max}{\texttt{.max}} + and + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.min.html\#pandas.core.groupby.DataFrameGroupBy.min}{\texttt{.min}}: + creates a new \texttt{DataFrame} with the maximum/minimum value of + each group +\item + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.first.html\#pandas.core.groupby.DataFrameGroupBy.first}{\texttt{.first}} + and + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.last.html\#pandas.core.groupby.DataFrameGroupBy.last}{\texttt{.last}}: + creates a new \texttt{DataFrame} with the first/last row in each group +\item + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.size.html\#pandas.core.groupby.DataFrameGroupBy.size}{\texttt{.size}}: + creates a new \textbf{\texttt{Series}} with the number of entries in + each group +\item + \href{https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.count.html\#pandas.core.groupby.DataFrameGroupBy.count}{\texttt{.count}}: + creates a new \textbf{\texttt{DataFrame}} with the number of entries, + excluding missing values. +\end{itemize} + +Let's illustrate some examples by creating a \texttt{DataFrame} called +\texttt{df}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df }\OperatorTok{=}\NormalTok{ pd.DataFrame(\{}\StringTok{\textquotesingle{}letter\textquotesingle{}}\NormalTok{:[}\StringTok{\textquotesingle{}A\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}A\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}B\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}C\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}C\textquotesingle{}}\NormalTok{,}\StringTok{\textquotesingle{}C\textquotesingle{}}\NormalTok{], } + \StringTok{\textquotesingle{}num\textquotesingle{}}\NormalTok{:[}\DecValTok{1}\NormalTok{,}\DecValTok{2}\NormalTok{,}\DecValTok{3}\NormalTok{,}\DecValTok{4}\NormalTok{,np.nan,}\DecValTok{4}\NormalTok{], } + \StringTok{\textquotesingle{}state\textquotesingle{}}\NormalTok{:[np.nan, }\StringTok{\textquotesingle{}tx\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}fl\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}hi\textquotesingle{}}\NormalTok{, np.nan, }\StringTok{\textquotesingle{}ak\textquotesingle{}}\NormalTok{]\})} +\NormalTok{df} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& letter & num & state \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & A & 1.0 & NaN \\ +1 & A & 2.0 & tx \\ +2 & B & 3.0 & fl \\ +3 & C & 4.0 & hi \\ +4 & C & NaN & NaN \\ +5 & C & 4.0 & ak \\ +\end{longtable} + +Note the slight difference between \texttt{.size()} and +\texttt{.count()}: while \texttt{.size()} returns a \texttt{Series} and +counts the number of entries including the missing values, +\texttt{.count()} returns a \texttt{DataFrame} and counts the number of +entries in each column \emph{excluding missing values}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df.groupby(}\StringTok{"letter"}\NormalTok{).size()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +letter +A 2 +B 1 +C 3 +dtype: int64 +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df.groupby(}\StringTok{"letter"}\NormalTok{).count()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& num & state \\ +letter & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +A & 2 & 1 \\ +B & 1 & 1 \\ +C & 2 & 2 \\ +\end{longtable} + +You might recall that the \texttt{value\_counts()} function in the +previous note does something similar. It turns out +\texttt{value\_counts()} and \texttt{groupby.size()} are the same, +except \texttt{value\_counts()} sorts the resulting \texttt{Series} in +descending order automatically. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df[}\StringTok{"letter"}\NormalTok{].value\_counts()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +letter +C 3 +A 2 +B 1 +Name: count, dtype: int64 +\end{verbatim} + +These (and other) aggregation functions are so common that +\texttt{pandas} allows for writing shorthand. Instead of explicitly +stating the use of \texttt{.agg}, we can call the function directly on +the \texttt{GroupBy} object. + +For example, the following are equivalent: + +\begin{itemize} +\tightlist +\item + \texttt{elections.groupby("Candidate").agg(mean)} +\item + \texttt{elections.groupby("Candidate").mean()} +\end{itemize} + +There are many other methods that \texttt{pandas} supports. You can +check them out on the +\href{https://pandas.pydata.org/docs/reference/groupby.html}{\texttt{pandas} +documentation}. 
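To make the shorthand concrete, here is a small added sketch using the toy
\texttt{df} defined above (no new data is assumed): calling an aggregation
directly on the \texttt{GroupBy} object gives the same result as passing
the aggregation's name to \texttt{.agg()}.

\begin{verbatim}
# Shorthand: call the aggregation directly on the GroupBy object
shorthand = df.groupby("letter")["num"].mean()

# Explicit: pass the aggregation's name as a string to .agg
explicit = df.groupby("letter")["num"].agg("mean")

# Both produce the same Series of per-letter means (NaNs are skipped)
shorthand.equals(explicit)    # expected: True
\end{verbatim}

Passing the string name (\texttt{"mean"}, \texttt{"sum"}, and so on) also
sidesteps the \texttt{FutureWarning} shown in the earlier cells, which
asks for the string form instead of a bare Python function.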
+ +\subsection{Filtering by Group}\label{filtering-by-group} + +Another common use for \texttt{GroupBy} objects is to filter data by +group. + +\texttt{groupby.filter} takes an argument \texttt{func}, where +\texttt{func} is a function that: + +\begin{itemize} +\tightlist +\item + Takes a \texttt{DataFrame} object as input +\item + Returns a single \texttt{True} or \texttt{False}. +\end{itemize} + +\texttt{groupby.filter} applies \texttt{func} to each +group/sub-\texttt{DataFrame}: + +\begin{itemize} +\tightlist +\item + If \texttt{func} returns \texttt{True} for a group, then all rows + belonging to the group are preserved. +\item + If \texttt{func} returns \texttt{False} for a group, then all rows + belonging to that group are filtered out. +\end{itemize} + +In other words, sub-\texttt{DataFrame}s that correspond to \texttt{True} +are returned in the final result, whereas those with a \texttt{False} +value are not. Importantly, \texttt{groupby.filter} is different from +\texttt{groupby.agg} in that an \emph{entire} sub-\texttt{DataFrame} is +returned in the final \texttt{DataFrame}, not just a single row. As a +result, \texttt{groupby.filter} preserves the original indices and the +column we grouped on does \textbf{NOT} become the index! + +To illustrate how this happens, let's go back to the \texttt{elections} +dataset. Say we want to identify ``tight'' election years -- that is, we +want to find all rows that correspond to election years where all +candidates in that year won a similar portion of the total vote. +Specifically, let's find all rows corresponding to a year where no +candidate won more than 45\% of the total vote. + +In other words, we want to: + +\begin{itemize} +\tightlist +\item + Find the years where the maximum \texttt{\%} in that year is less than + 45\% +\item + Return all \texttt{DataFrame} rows that correspond to these years +\end{itemize} + +For each year, we need to find the maximum \texttt{\%} among \emph{all} +rows for that year. If this maximum \texttt{\%} is lower than 45\%, we +will tell \texttt{pandas} to keep all rows corresponding to that year. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.groupby(}\StringTok{"Year"}\NormalTok{).}\BuiltInTok{filter}\NormalTok{(}\KeywordTok{lambda}\NormalTok{ sf: sf[}\StringTok{"\%"}\NormalTok{].}\BuiltInTok{max}\NormalTok{() }\OperatorTok{\textless{}} \DecValTok{45}\NormalTok{).head(}\DecValTok{9}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +23 & 1860 & Abraham Lincoln & Republican & 1855993 & win & 39.699408 \\ +24 & 1860 & John Bell & Constitutional Union & 590901 & loss & +12.639283 \\ +25 & 1860 & John C. Breckinridge & Southern Democratic & 848019 & loss & +18.138998 \\ +26 & 1860 & Stephen A. Douglas & Northern Democratic & 1380202 & loss & +29.522311 \\ +66 & 1912 & Eugene V. Debs & Socialist & 901551 & loss & 6.004354 \\ +67 & 1912 & Eugene W. Chafin & Prohibition & 208156 & loss & 1.386325 \\ +68 & 1912 & Theodore Roosevelt & Progressive & 4122721 & loss & +27.457433 \\ +69 & 1912 & William Taft & Republican & 3486242 & loss & 23.218466 \\ +70 & 1912 & Woodrow Wilson & Democratic & 6296284 & win & 41.933422 \\ +\end{longtable} + +What's going on here? In this example, we've defined our filtering +function, \texttt{func}, to be +\texttt{lambda\ sf:\ sf{[}"\%"{]}.max()\ \textless{}\ 45}. 
This +filtering function will find the maximum \texttt{"\%"} value among all +entries in the grouped sub-\texttt{DataFrame}, which we call +\texttt{sf}. If the maximum value is less than 45, then the filter +function will return \texttt{True} and all rows in that grouped +sub-\texttt{DataFrame} will appear in the final output +\texttt{DataFrame}. + +Examine the \texttt{DataFrame} above. Notice how, in this preview of the +first 9 rows, all entries from the years 1860 and 1912 appear. This +means that in 1860 and 1912, no candidate in that year won more than +45\% of the total vote. + +You may ask: how is the \texttt{groupby.filter} procedure different to +the boolean filtering we've seen previously? Boolean filtering considers +\emph{individual} rows when applying a boolean condition. For example, +the code \texttt{elections{[}elections{[}"\%"{]}\ \textless{}\ 45{]}} +will check the \texttt{"\%"} value of every single row in +\texttt{elections}; if it is less than 45, then that row will be kept in +the output. \texttt{groupby.filter}, in contrast, applies a boolean +condition \emph{across} all rows in a group. If not all rows in that +group satisfy the condition specified by the filter, the entire group +will be discarded in the output. + +\subsection{\texorpdfstring{Aggregation with \texttt{lambda} +Functions}{Aggregation with lambda Functions}}\label{aggregation-with-lambda-functions} + +What if we wish to aggregate our \texttt{DataFrame} using a non-standard +function -- for example, a function of our own design? We can do so by +combining \texttt{.agg} with \texttt{lambda} expressions. + +Let's first consider a puzzle to jog our memory. We will attempt to find +the \texttt{Candidate} from each \texttt{Party} with the highest +\texttt{\%} of votes. + +A naive approach may be to group by the \texttt{Party} column and +aggregate by the maximum. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.groupby(}\StringTok{"Party"}\NormalTok{).agg(}\BuiltInTok{max}\NormalTok{).head(}\DecValTok{10}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/4278286395.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead. +\end{verbatim} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& Year & Candidate & Popular vote & Result & \% \\ +Party & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +American & 1976 & Thomas J. Anderson & 873053 & loss & 21.554001 \\ +American Independent & 1976 & Lester Maddox & 9901118 & loss & +13.571218 \\ +Anti-Masonic & 1832 & William Wirt & 100715 & loss & 7.821583 \\ +Anti-Monopoly & 1884 & Benjamin Butler & 134294 & loss & 1.335838 \\ +Citizens & 1980 & Barry Commoner & 233052 & loss & 0.270182 \\ +Communist & 1932 & William Z. Foster & 103307 & loss & 0.261069 \\ +Constitution & 2016 & Michael Peroutka & 203091 & loss & 0.152398 \\ +Constitutional Union & 1860 & John Bell & 590901 & loss & 12.639283 \\ +Democratic & 2020 & Woodrow Wilson & 81268924 & win & 61.344703 \\ +Democratic-Republican & 1824 & John Quincy Adams & 151271 & win & +57.210122 \\ +\end{longtable} + +This approach is clearly wrong -- the \texttt{DataFrame} claims that +Woodrow Wilson won the presidency in 2020. + +Why is this happening? 
Here, the \texttt{max} aggregation function is +taken over every column \emph{independently}. Among Democrats, +\texttt{max} is computing: + +\begin{itemize} +\tightlist +\item + The most recent \texttt{Year} a Democratic candidate ran for president + (2020) +\item + The \texttt{Candidate} with the alphabetically ``largest'' name + (``Woodrow Wilson'') +\item + The \texttt{Result} with the alphabetically ``largest'' outcome + (``win'') +\end{itemize} + +Instead, let's try a different approach. We will: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Sort the \texttt{DataFrame} so that rows are in descending order of + \texttt{\%} +\item + Group by \texttt{Party} and select the first row of each + sub-\texttt{DataFrame} +\end{enumerate} + +While it may seem unintuitive, sorting \texttt{elections} by descending +order of \texttt{\%} is extremely helpful. If we then group by +\texttt{Party}, the first row of each \texttt{GroupBy} object will +contain information about the \texttt{Candidate} with the highest voter +\texttt{\%}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections\_sorted\_by\_percent }\OperatorTok{=}\NormalTok{ elections.sort\_values(}\StringTok{"\%"}\NormalTok{, ascending}\OperatorTok{=}\VariableTok{False}\NormalTok{)} +\NormalTok{elections\_sorted\_by\_percent.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +114 & 1964 & Lyndon Johnson & Democratic & 43127041 & win & 61.344703 \\ +91 & 1936 & Franklin Roosevelt & Democratic & 27752648 & win & +60.978107 \\ +120 & 1972 & Richard Nixon & Republican & 47168710 & win & 60.907806 \\ +79 & 1920 & Warren Harding & Republican & 16144093 & win & 60.574501 \\ +133 & 1984 & Ronald Reagan & Republican & 54455472 & win & 59.023326 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections\_sorted\_by\_percent.groupby(}\StringTok{"Party"}\NormalTok{).agg(}\KeywordTok{lambda}\NormalTok{ x : x.iloc[}\DecValTok{0}\NormalTok{]).head(}\DecValTok{10}\NormalTok{)} + +\CommentTok{\# Equivalent to the below code} +\CommentTok{\# elections\_sorted\_by\_percent.groupby("Party").agg(\textquotesingle{}first\textquotesingle{}).head(10)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllll@{}} +\toprule\noalign{} +& Year & Candidate & Popular vote & Result & \% \\ +Party & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +American & 1856 & Millard Fillmore & 873053 & loss & 21.554001 \\ +American Independent & 1968 & George Wallace & 9901118 & loss & +13.571218 \\ +Anti-Masonic & 1832 & William Wirt & 100715 & loss & 7.821583 \\ +Anti-Monopoly & 1884 & Benjamin Butler & 134294 & loss & 1.335838 \\ +Citizens & 1980 & Barry Commoner & 233052 & loss & 0.270182 \\ +Communist & 1932 & William Z. Foster & 103307 & loss & 0.261069 \\ +Constitution & 2008 & Chuck Baldwin & 199750 & loss & 0.152398 \\ +Constitutional Union & 1860 & John Bell & 590901 & loss & 12.639283 \\ +Democratic & 1964 & Lyndon Johnson & 43127041 & win & 61.344703 \\ +Democratic-Republican & 1824 & Andrew Jackson & 151271 & loss & +57.210122 \\ +\end{longtable} + +Here's an illustration of the process: + +Notice how our code correctly determines that Lyndon Johnson from the +Democratic Party has the highest voter \texttt{\%}. 
+ +More generally, \texttt{lambda} functions are used to design custom +aggregation functions that aren't pre-defined by Python. The input +parameter \texttt{x} to the \texttt{lambda} function is a +\texttt{GroupBy} object. Therefore, it should make sense why +\texttt{lambda\ x\ :\ x.iloc{[}0{]}} selects the first row in each +groupby object. + +In fact, there's a few different ways to approach this problem. Each +approach has different tradeoffs in terms of readability, performance, +memory consumption, complexity, etc. We've given a few examples below. + +\textbf{Note}: Understanding these alternative solutions is not +required. They are given to demonstrate the vast number of +problem-solving approaches in \texttt{pandas}. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Using the idxmax function} +\NormalTok{best\_per\_party }\OperatorTok{=}\NormalTok{ elections.loc[elections.groupby(}\StringTok{\textquotesingle{}Party\textquotesingle{}}\NormalTok{)[}\StringTok{\textquotesingle{}\%\textquotesingle{}}\NormalTok{].idxmax()]} +\NormalTok{best\_per\_party.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +22 & 1856 & Millard Fillmore & American & 873053 & loss & 21.554001 \\ +115 & 1968 & George Wallace & American Independent & 9901118 & loss & +13.571218 \\ +6 & 1832 & William Wirt & Anti-Masonic & 100715 & loss & 7.821583 \\ +38 & 1884 & Benjamin Butler & Anti-Monopoly & 134294 & loss & +1.335838 \\ +127 & 1980 & Barry Commoner & Citizens & 233052 & loss & 0.270182 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Using the .drop\_duplicates function} +\NormalTok{best\_per\_party2 }\OperatorTok{=}\NormalTok{ elections.sort\_values(}\StringTok{\textquotesingle{}\%\textquotesingle{}}\NormalTok{).drop\_duplicates([}\StringTok{\textquotesingle{}Party\textquotesingle{}}\NormalTok{], keep}\OperatorTok{=}\StringTok{\textquotesingle{}last\textquotesingle{}}\NormalTok{)} +\NormalTok{best\_per\_party2.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +148 & 1996 & John Hagelin & Natural Law & 113670 & loss & 0.118219 \\ +164 & 2008 & Chuck Baldwin & Constitution & 199750 & loss & 0.152398 \\ +110 & 1956 & T. Coleman Andrews & States\textquotesingle{} Rights & +107929 & loss & 0.174883 \\ +147 & 1996 & Howard Phillips & Taxpayers & 184656 & loss & 0.192045 \\ +136 & 1988 & Lenora Fulani & New Alliance & 217221 & loss & 0.237804 \\ +\end{longtable} + +\section{Aggregating Data with Pivot +Tables}\label{aggregating-data-with-pivot-tables} + +We know now that \texttt{.groupby} gives us the ability to group and +aggregate data across our \texttt{DataFrame}. The examples above formed +groups using just one column in the \texttt{DataFrame}. It's possible to +group by multiple columns at once by passing in a list of column names +to \texttt{.groupby}. + +Let's consider the \texttt{babynames} dataset again. In this problem, we +will find the total number of baby names associated with each sex for +each year. To do this, we'll group by \emph{both} the \texttt{"Year"} +and \texttt{"Sex"} columns. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & First Letter \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +115957 & CA & F & 1990 & Deandrea & 5 & D \\ +101976 & CA & F & 1986 & Deandrea & 6 & D \\ +131029 & CA & F & 1994 & Leandrea & 5 & L \\ +108731 & CA & F & 1988 & Deandrea & 5 & D \\ +308131 & CA & M & 1985 & Deandrea & 6 & D \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Find the total number of baby names associated with each sex for each } +\CommentTok{\# year in the data} +\NormalTok{babynames.groupby([}\StringTok{"Year"}\NormalTok{, }\StringTok{"Sex"}\NormalTok{])[[}\StringTok{"Count"}\NormalTok{]].agg(}\BuiltInTok{sum}\NormalTok{).head(}\DecValTok{6}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/3186035650.py:3: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& & Count \\ +Year & Sex & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\multirow{2}{=}{1910} & F & 5950 \\ +& M & 3213 \\ +\multirow{2}{=}{1911} & F & 6602 \\ +& M & 3381 \\ +\multirow{2}{=}{1912} & F & 9804 \\ +& M & 8142 \\ +\end{longtable} + +Notice that both \texttt{"Year"} and \texttt{"Sex"} serve as the index +of the \texttt{DataFrame} (they are both rendered in bold). We've +created a \emph{multi-index} \texttt{DataFrame} where two different +index values, the year and sex, are used to uniquely identify each row. + +This isn't the most intuitive way of representing this data -- and, +because multi-indexed DataFrames have multiple dimensions in their +index, they can often be difficult to use. + +Another strategy to aggregate across two columns is to create a pivot +table. You saw these back in +\href{https://inferentialthinking.com/chapters/08/3/Cross-Classifying_by_More_than_One_Variable.html\#pivot-tables-rearranging-the-output-of-group}{Data +8}. One set of values is used to create the index of the pivot table; +another set is used to define the column names. The values contained in +each cell of the table correspond to the aggregated data for each +index-column pair. + +Here's an illustration of the process: + +The best way to understand pivot tables is to see one in action. Let's +return to our original goal of summing the total number of names +associated with each combination of year and sex. We'll call the +\texttt{pandas} +\href{https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.pivot_table.html}{\texttt{.pivot\_table}} +method to create a new table. 
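
Before we do, it may help to see that a pivot table is essentially the
grouped result from above with the inner index level ``unstacked'' into
the columns. A minimal sketch of that equivalence, assuming
\texttt{babynames} is loaded as in the earlier cells:

\begin{verbatim}
# Grouping by two columns produces a multi-indexed result...
grouped = babynames.groupby(["Year", "Sex"])["Count"].sum()

# ...and unstacking the "Sex" level moves it into the columns,
# which is exactly the shape a pivot table produces.
grouped.unstack("Sex").head(5)
\end{verbatim}

With that picture in mind, here is the same table built directly with
\texttt{.pivot\_table}.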
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# The \textasciigrave{}pivot\_table\textasciigrave{} method is used to generate a Pandas pivot table} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\NormalTok{babynames.pivot\_table(} +\NormalTok{ index }\OperatorTok{=} \StringTok{"Year"}\NormalTok{,} +\NormalTok{ columns }\OperatorTok{=} \StringTok{"Sex"}\NormalTok{, } +\NormalTok{ values }\OperatorTok{=} \StringTok{"Count"}\NormalTok{, } +\NormalTok{ aggfunc }\OperatorTok{=}\NormalTok{ np.}\BuiltInTok{sum}\NormalTok{, } +\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/2548053048.py:3: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +Sex & F & M \\ +Year & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 5950 & 3213 \\ +1911 & 6602 & 3381 \\ +1912 & 9804 & 8142 \\ +1913 & 11860 & 10234 \\ +1914 & 13815 & 13111 \\ +\end{longtable} + +Looks a lot better! Now, our \texttt{DataFrame} is structured with clear +index-column combinations. Each entry in the pivot table represents the +summed count of names for a given combination of \texttt{"Year"} and +\texttt{"Sex"}. + +Let's take a closer look at the code implemented above. + +\begin{itemize} +\tightlist +\item + \texttt{index\ =\ "Year"} specifies the column name in the original + \texttt{DataFrame} that should be used as the index of the pivot table +\item + \texttt{columns\ =\ "Sex"} specifies the column name in the original + \texttt{DataFrame} that should be used to generate the columns of the + pivot table +\item + \texttt{values\ =\ "Count"} indicates what values from the original + \texttt{DataFrame} should be used to populate the entry for each + index-column combination +\item + \texttt{aggfunc\ =\ np.sum} tells \texttt{pandas} what function to use + when aggregating the data specified by \texttt{values}. Here, we are + summing the name counts for each pair of \texttt{"Year"} and + \texttt{"Sex"} +\end{itemize} + +We can even include multiple values in the index or columns of our pivot +tables. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{babynames\_pivot }\OperatorTok{=}\NormalTok{ babynames.pivot\_table(} +\NormalTok{ index}\OperatorTok{=}\StringTok{"Year"}\NormalTok{, }\CommentTok{\# the rows (turned into index)} +\NormalTok{ columns}\OperatorTok{=}\StringTok{"Sex"}\NormalTok{, }\CommentTok{\# the column values} +\NormalTok{ values}\OperatorTok{=}\NormalTok{[}\StringTok{"Count"}\NormalTok{, }\StringTok{"Name"}\NormalTok{], } +\NormalTok{ aggfunc}\OperatorTok{=}\BuiltInTok{max}\NormalTok{, }\CommentTok{\# group operation} +\NormalTok{)} +\NormalTok{babynames\_pivot.head(}\DecValTok{6}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57880/970182367.py:1: FutureWarning: + +The provided callable is currently using DataFrameGroupBy.max. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "max" instead. 
+\end{verbatim} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& \multicolumn{2}{l}{% +Count} & \multicolumn{2}{l@{}}{% +Name} \\ +Sex & F & M & F & M \\ +Year & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1910 & 295 & 237 & Yvonne & William \\ +1911 & 390 & 214 & Zelma & Willis \\ +1912 & 534 & 501 & Yvonne & Woodrow \\ +1913 & 584 & 614 & Zelma & Yoshio \\ +1914 & 773 & 769 & Zelma & Yoshio \\ +1915 & 998 & 1033 & Zita & Yukio \\ +\end{longtable} + +Note that each row provides the number of girls and number of boys +having that year's most common name, and also lists the alphabetically +largest girl name and boy name. The counts for number of girls/boys in +the resulting \texttt{DataFrame} do not correspond to the names listed. +For example, in 1910, the most popular girl name is given to 295 girls, +but that name was likely not Yvonne. + +\section{Joining Tables}\label{joining-tables} + +When working on data science projects, we're unlikely to have absolutely +all the data we want contained in a single \texttt{DataFrame} -- a +real-world data scientist needs to grapple with data coming from +multiple sources. If we have access to multiple datasets with related +information, we can join two or more tables into a single +\texttt{DataFrame}. + +To put this into practice, we'll revisit the \texttt{elections} dataset. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{elections.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.574789 \\ +\end{longtable} + +Say we want to understand the popularity of the names of each +presidential candidate in 2022. To do this, we'll need the combined data +of \texttt{babynames} \emph{and} \texttt{elections}. + +We'll start by creating a new column containing the first name of each +presidential candidate. This will help us join each name in +\texttt{elections} to the corresponding name data in \texttt{babynames}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# This \textasciigrave{}str\textasciigrave{} operation splits each candidate\textquotesingle{}s full name at each } +\CommentTok{\# blank space, then takes just the candidate\textquotesingle{}s first name} +\NormalTok{elections[}\StringTok{"First Name"}\NormalTok{] }\OperatorTok{=}\NormalTok{ elections[}\StringTok{"Candidate"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.split().}\BuiltInTok{str}\NormalTok{[}\DecValTok{0}\NormalTok{]} +\NormalTok{elections.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% & First Name \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 & Andrew \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 & John \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 & +Andrew \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 & John \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.574789 & +Andrew \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Here, we\textquotesingle{}ll only consider \textasciigrave{}babynames\textasciigrave{} data from 2022} +\NormalTok{babynames\_2022 }\OperatorTok{=}\NormalTok{ babynames[babynames[}\StringTok{"Year"}\NormalTok{]}\OperatorTok{==}\DecValTok{2022}\NormalTok{]} +\NormalTok{babynames\_2022.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& State & Sex & Year & Name & Count & First Letter \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +237964 & CA & F & 2022 & Leandra & 10 & L \\ +404916 & CA & M & 2022 & Leandro & 99 & L \\ +405892 & CA & M & 2022 & Andreas & 14 & A \\ +235927 & CA & F & 2022 & Andrea & 322 & A \\ +405695 & CA & M & 2022 & Deandre & 18 & D \\ +\end{longtable} + +Now, we're ready to join the two tables. +\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html}{\texttt{pd.merge}} +is the \texttt{pandas} method used to join \texttt{DataFrame}s together. 
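
Before applying it to our full datasets, here is a minimal sketch of
how \texttt{pd.merge} pairs rows, using two tiny tables invented purely
for illustration:

\begin{verbatim}
import pandas as pd

# Hypothetical toy tables (not part of our datasets)
people = pd.DataFrame({"First Name": ["Andrew", "John"],
                       "Result": ["loss", "win"]})
counts = pd.DataFrame({"Name": ["John", "Andrew", "Susan"],
                       "Count": [490, 741, 17]})

# Rows are paired wherever the key columns match: "Andrew" with "Andrew",
# "John" with "John". "Susan" has no match on the left, so it is dropped
# (pd.merge performs an inner join by default).
pd.merge(left=people, right=counts, left_on="First Name", right_on="Name")
\end{verbatim}

The same pairing logic drives the merge of \texttt{elections} and
\texttt{babynames\_2022} below.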
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{merged }\OperatorTok{=}\NormalTok{ pd.merge(left }\OperatorTok{=}\NormalTok{ elections, right }\OperatorTok{=}\NormalTok{ babynames\_2022, }\OperatorTok{\textbackslash{}} +\NormalTok{ left\_on }\OperatorTok{=} \StringTok{"First Name"}\NormalTok{, right\_on }\OperatorTok{=} \StringTok{"Name"}\NormalTok{)} +\NormalTok{merged.head()} +\CommentTok{\# Notice that pandas automatically specifies \textasciigrave{}Year\_x\textasciigrave{} and \textasciigrave{}Year\_y\textasciigrave{} } +\CommentTok{\# when both merged DataFrames have the same column name to avoid confusion} + +\CommentTok{\# Second option} +\CommentTok{\# merged = elections.merge(right = babynames\_2022, \textbackslash{}} + \CommentTok{\# left\_on = "First Name", right\_on = "Name")} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllllll@{}} +\toprule\noalign{} +& Year\_x & Candidate & Party & Popular vote & Result & \% & First Name +& State & Sex & Year\_y & Name & Count & First Letter \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.210122 & Andrew & CA & M & 2022 & Andrew & 741 & A \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.789878 & John & CA & M & 2022 & John & 490 & J \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.203927 & +Andrew & CA & M & 2022 & Andrew & 741 & A \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.796073 & John & CA & M & 2022 & John & 490 & J \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.574789 & +Andrew & CA & M & 2022 & Andrew & 741 & A \\ +\end{longtable} + +Let's take a closer look at the parameters: + +\begin{itemize} +\tightlist +\item + \texttt{left} and \texttt{right} parameters are used to specify the + \texttt{DataFrame}s to be joined. +\item + \texttt{left\_on} and \texttt{right\_on} parameters are assigned to + the string names of the columns to be used when performing the join. + These two \texttt{on} parameters tell \texttt{pandas} what values + should act as pairing keys to determine which rows to merge across the + \texttt{DataFrame}s. We'll talk more about this idea of a pairing key + next lecture. +\end{itemize} + +\section{Parting Note}\label{parting-note-2} + +Congratulations! We finally tackled \texttt{pandas}. Don't worry if you +are still not feeling very comfortable with it---you will have plenty of +chances to practice over the next few weeks. + +Next, we will get our hands dirty with some real-world datasets and use +our \texttt{pandas} knowledge to conduct some exploratory data analysis. 
+ +\bookmarksetup{startatroot} + +\chapter{Data Cleaning and EDA}\label{data-cleaning-and-eda} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} + +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\CommentTok{\#\%matplotlib inline} +\NormalTok{plt.rcParams[}\StringTok{\textquotesingle{}figure.figsize\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ (}\DecValTok{12}\NormalTok{, }\DecValTok{9}\NormalTok{)} + +\NormalTok{sns.}\BuiltInTok{set}\NormalTok{()} +\NormalTok{sns.set\_context(}\StringTok{\textquotesingle{}talk\textquotesingle{}}\NormalTok{)} +\NormalTok{np.set\_printoptions(threshold}\OperatorTok{=}\DecValTok{20}\NormalTok{, precision}\OperatorTok{=}\DecValTok{2}\NormalTok{, suppress}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{pd.set\_option(}\StringTok{\textquotesingle{}display.max\_rows\textquotesingle{}}\NormalTok{, }\DecValTok{30}\NormalTok{)} +\NormalTok{pd.set\_option(}\StringTok{\textquotesingle{}display.max\_columns\textquotesingle{}}\NormalTok{, }\VariableTok{None}\NormalTok{)} +\NormalTok{pd.set\_option(}\StringTok{\textquotesingle{}display.precision\textquotesingle{}}\NormalTok{, }\DecValTok{2}\NormalTok{)} +\CommentTok{\# This option stops scientific notation for pandas} +\NormalTok{pd.set\_option(}\StringTok{\textquotesingle{}display.float\_format\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}}\SpecialCharTok{\{:.2f\}}\StringTok{\textquotesingle{}}\NormalTok{.}\BuiltInTok{format}\NormalTok{)} + +\CommentTok{\# Silence some spurious seaborn warnings} +\ImportTok{import}\NormalTok{ warnings} +\NormalTok{warnings.filterwarnings(}\StringTok{"ignore"}\NormalTok{, category}\OperatorTok{=}\PreprocessorTok{FutureWarning}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Recognize common file formats +\item + Categorize data by its variable type +\item + Build awareness of issues with data faithfulness and develop targeted + solutions +\end{itemize} + +\end{tcolorbox} + +In the past few lectures, we've learned that \texttt{pandas} is a +toolkit to restructure, modify, and explore a dataset. What we haven't +yet touched on is \emph{how} to make these data transformation +decisions. When we receive a new set of data from the ``real world,'' +how do we know what processing we should do to convert this data into a +usable form? + +\textbf{Data cleaning}, also called \textbf{data wrangling}, is the +process of transforming raw data to facilitate subsequent analysis. It +is often used to address issues like: + +\begin{itemize} +\tightlist +\item + Unclear structure or formatting +\item + Missing or corrupted values +\item + Unit conversions +\item + \ldots and so on +\end{itemize} + +\textbf{Exploratory Data Analysis (EDA)} is the process of understanding +a new dataset. 
It is an open-ended, informal analysis that involves +familiarizing ourselves with the variables present in the data, +discovering potential hypotheses, and identifying possible issues with +the data. This last point can often motivate further data cleaning to +address any problems with the dataset's format; because of this, EDA and +data cleaning are often thought of as an ``infinite loop,'' with each +process driving the other. + +In this lecture, we will consider the key properties of data to consider +when performing data cleaning and EDA. In doing so, we'll develop a +``checklist'' of sorts for you to consider when approaching a new +dataset. Throughout this process, we'll build a deeper understanding of +this early (but very important!) stage of the data science lifecycle. + +\section{Structure}\label{structure} + +We often prefer rectangular data for data analysis. Rectangular +structures are easy to manipulate and analyze. A key element of data +cleaning is about transforming data to be more rectangular. + +There are two kinds of rectangular data: tables and matrices. Tables +have named columns with different data types and are manipulated using +data transformation languages. Matrices contain numeric data of the same +type and are manipulated using linear algebra. + +\subsection{File Formats}\label{file-formats} + +There are many file types for storing structured data: TSV, JSON, XML, +ASCII, SAS, etc. We'll only cover CSV, TSV, and JSON in lecture, but +you'll likely encounter other formats as you work with different +datasets. Reading documentation is your best bet for understanding how +to process the multitude of different file types. + +\subsubsection{CSV}\label{csv} + +CSVs, which stand for \textbf{Comma-Separated Values}, are a common +tabular data format. In the past two \texttt{pandas} lectures, we +briefly touched on the idea of file format: the way data is encoded in a +file for storage. Specifically, our \texttt{elections} and +\texttt{babynames} datasets were stored and loaded as CSVs: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.read\_csv(}\StringTok{"data/elections.csv"}\NormalTok{).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& Year & Candidate & Party & Popular vote & Result & \% \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss & +57.21 \\ +1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win & +42.79 \\ +2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.20 \\ +3 & 1828 & John Quincy Adams & National Republican & 500897 & loss & +43.80 \\ +4 & 1832 & Andrew Jackson & Democratic & 702735 & win & 54.57 \\ +\end{longtable} + +To better understand the properties of a CSV, let's take a look at the +first few rows of the raw data file to see what it looks like before +being loaded into a \texttt{DataFrame}. 
We'll use the \texttt{repr()} +function to return the raw string with its special characters: + +\begin{Shaded} +\begin{Highlighting}[] +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{"data/elections.csv"}\NormalTok{, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ table:} +\NormalTok{ i }\OperatorTok{=} \DecValTok{0} + \ControlFlowTok{for}\NormalTok{ row }\KeywordTok{in}\NormalTok{ table:} + \BuiltInTok{print}\NormalTok{(}\BuiltInTok{repr}\NormalTok{(row))} +\NormalTok{ i }\OperatorTok{+=} \DecValTok{1} + \ControlFlowTok{if}\NormalTok{ i }\OperatorTok{\textgreater{}} \DecValTok{3}\NormalTok{:} + \ControlFlowTok{break} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'Year,Candidate,Party,Popular vote,Result,%\n' +'1824,Andrew Jackson,Democratic-Republican,151271,loss,57.21012204\n' +'1824,John Quincy Adams,Democratic-Republican,113142,win,42.78987796\n' +'1828,Andrew Jackson,Democratic,642806,win,56.20392707\n' +\end{verbatim} + +Each row, or \textbf{record}, in the data is delimited by a newline +\texttt{\textbackslash{}n}. Each column, or \textbf{field}, in the data +is delimited by a comma \texttt{,} (hence, comma-separated!). + +\subsubsection{TSV}\label{tsv} + +Another common file type is \textbf{TSV (Tab-Separated Values)}. In a +TSV, records are still delimited by a newline +\texttt{\textbackslash{}n}, while fields are delimited by +\texttt{\textbackslash{}t} tab character. + +Let's check out the first few rows of the raw TSV file. Again, we'll use +the \texttt{repr()} function so that \texttt{print} shows the special +characters. + +\begin{Shaded} +\begin{Highlighting}[] +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{"data/elections.txt"}\NormalTok{, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ table:} +\NormalTok{ i }\OperatorTok{=} \DecValTok{0} + \ControlFlowTok{for}\NormalTok{ row }\KeywordTok{in}\NormalTok{ table:} + \BuiltInTok{print}\NormalTok{(}\BuiltInTok{repr}\NormalTok{(row))} +\NormalTok{ i }\OperatorTok{+=} \DecValTok{1} + \ControlFlowTok{if}\NormalTok{ i }\OperatorTok{\textgreater{}} \DecValTok{3}\NormalTok{:} + \ControlFlowTok{break} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'\ufeffYear\tCandidate\tParty\tPopular vote\tResult\t%\n' +'1824\tAndrew Jackson\tDemocratic-Republican\t151271\tloss\t57.21012204\n' +'1824\tJohn Quincy Adams\tDemocratic-Republican\t113142\twin\t42.78987796\n' +'1828\tAndrew Jackson\tDemocratic\t642806\twin\t56.20392707\n' +\end{verbatim} + +TSVs can be loaded into \texttt{pandas} using \texttt{pd.read\_csv}. +We'll need to specify the \textbf{delimiter} with +parameter\texttt{sep=\textquotesingle{}\textbackslash{}t\textquotesingle{}} +\href{https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html}{(documentation)}. 

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pd.read\_csv(}\StringTok{"data/elections.txt"}\NormalTok{, sep}\OperatorTok{=}\StringTok{\textquotesingle{}}\CharTok{\textbackslash{}t}\StringTok{\textquotesingle{}}\NormalTok{).head(}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllllll@{}}
\toprule\noalign{}
& Year & Candidate & Party & Popular vote & Result & \% \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss &
57.21 \\
1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win &
42.79 \\
2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.20 \\
\end{longtable}

An issue with CSVs and TSVs comes up whenever there are commas or tabs
within the records. How does \texttt{pandas} differentiate between a
comma delimiter vs.~a comma within the field itself, for example
\texttt{8,900}? To remedy this, check out the
\href{https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html}{\texttt{quotechar}
parameter}.

\subsubsection{JSON}\label{json}

\textbf{JSON (JavaScript Object Notation)} files behave similarly to
Python dictionaries. A raw JSON is shown below.

\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{"data/elections.json"}\NormalTok{, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ table:}
\NormalTok{    i }\OperatorTok{=} \DecValTok{0}
    \ControlFlowTok{for}\NormalTok{ row }\KeywordTok{in}\NormalTok{ table:}
        \BuiltInTok{print}\NormalTok{(row)}
\NormalTok{        i }\OperatorTok{+=} \DecValTok{1}
        \ControlFlowTok{if}\NormalTok{ i }\OperatorTok{\textgreater{}} \DecValTok{8}\NormalTok{:}
            \ControlFlowTok{break}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
[

  {

    "Year": 1824,

    "Candidate": "Andrew Jackson",

    "Party": "Democratic-Republican",

    "Popular vote": 151271,

    "Result": "loss",

    "%": 57.21012204

  },
\end{verbatim}

JSON files can be loaded into \texttt{pandas} using
\texttt{pd.read\_json}.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{pd.read\_json(}\StringTok{\textquotesingle{}data/elections.json\textquotesingle{}}\NormalTok{).head(}\DecValTok{3}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllllll@{}}
\toprule\noalign{}
& Year & Candidate & Party & Popular vote & Result & \% \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1824 & Andrew Jackson & Democratic-Republican & 151271 & loss &
57.21 \\
1 & 1824 & John Quincy Adams & Democratic-Republican & 113142 & win &
42.79 \\
2 & 1828 & Andrew Jackson & Democratic & 642806 & win & 56.20 \\
\end{longtable}

\paragraph{EDA with JSON: Berkeley COVID-19
Data}\label{eda-with-json-berkeley-covid-19-data}

The City of Berkeley Open Data
\href{https://data.cityofberkeley.info/Health/COVID-19-Confirmed-Cases/xn6j-b766}{website}
has a dataset with COVID-19 Confirmed Cases among Berkeley residents by
date. Let's download the file and save it as a JSON (note that the
source URL file type is also a JSON). In the interest of reproducible
data science, we will download the data programmatically. We have
defined some helper functions in the
\href{https://ds100.org/fa23/resources/assets/lectures/lec05/lec05-eda.html}{\texttt{ds100\_utils.py}}
file so that we can reuse them in many different notebooks.

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{from}\NormalTok{ ds100\_utils }\ImportTok{import}\NormalTok{ fetch\_and\_cache}

\NormalTok{covid\_file }\OperatorTok{=}\NormalTok{ fetch\_and\_cache(}
    \StringTok{"https://data.cityofberkeley.info/api/views/xn6j{-}b766/rows.json?accessType=DOWNLOAD"}\NormalTok{,}
    \StringTok{"confirmed{-}cases.json"}\NormalTok{,}
\NormalTok{    force}\OperatorTok{=}\VariableTok{False}\NormalTok{)}
\NormalTok{covid\_file }\CommentTok{\# a file path wrapper object}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Using cached version that was downloaded (UTC): Tue Aug 27 03:33:01 2024
\end{verbatim}

\begin{verbatim}
PosixPath('data/confirmed-cases.json')
\end{verbatim}

\subparagraph{File Size}\label{file-size}

Let's start our analysis by getting a rough estimate of the size of the
dataset to inform the tools we use to view the data. For relatively
small datasets, we can use a text editor or spreadsheet. For larger
datasets, more programmatic exploration or distributed computing tools
may be more fitting. Here we will use \texttt{Python} tools to probe the
file.

Since this seems to be a text file, let's investigate the number of
lines, which often corresponds to the number of records.

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{import}\NormalTok{ os}

\BuiltInTok{print}\NormalTok{(covid\_file, }\StringTok{"is"}\NormalTok{, os.path.getsize(covid\_file) }\OperatorTok{/} \FloatTok{1e6}\NormalTok{, }\StringTok{"MB"}\NormalTok{)}

\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(covid\_file, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:}
    \BuiltInTok{print}\NormalTok{(covid\_file, }\StringTok{"is"}\NormalTok{, }\BuiltInTok{sum}\NormalTok{(}\DecValTok{1} \ControlFlowTok{for}\NormalTok{ l }\KeywordTok{in}\NormalTok{ f), }\StringTok{"lines."}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
data/confirmed-cases.json is 0.116367 MB
data/confirmed-cases.json is 1110 lines.
\end{verbatim}

\subparagraph{Unix Commands}\label{unix-commands}

As part of the EDA workflow, Unix commands can come in very handy. In
fact, there's an entire book called
\href{https://datascienceatthecommandline.com/}{``Data Science at the
Command Line''} that explores this idea in depth! In Jupyter/IPython,
you can prefix lines with \texttt{!} to execute arbitrary Unix commands,
and within those lines, you can refer to Python variables and
expressions with the syntax \texttt{\{expr\}}.

Here, we use the \texttt{ls} command to list files, using the
\texttt{-lh} flags, which request ``long format with information in
human-readable form.'' We also use the \texttt{wc} command for ``word
count,'' but with the \texttt{-l} flag, which asks for line counts
instead of words.

These two give us the same information as the code above, albeit in a
slightly different form:

\begin{Shaded}
\begin{Highlighting}[]
\OperatorTok{!}\NormalTok{ls }\OperatorTok{{-}}\NormalTok{lh \{covid\_file\}}
\OperatorTok{!}\NormalTok{wc }\OperatorTok{{-}}\NormalTok{l \{covid\_file\}}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
-rw-r--r--  1 jianingding21  staff   114K Aug 27 03:33 data/confirmed-cases.json
    1109 data/confirmed-cases.json
\end{verbatim}

\subparagraph{File Contents}\label{file-contents}

Let's explore the data format using \texttt{Python}.

\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(covid\_file, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:}
    \ControlFlowTok{for}\NormalTok{ i, row }\KeywordTok{in} \BuiltInTok{enumerate}\NormalTok{(f):}
        \BuiltInTok{print}\NormalTok{(}\BuiltInTok{repr}\NormalTok{(row)) }\CommentTok{\# print raw strings}
        \ControlFlowTok{if}\NormalTok{ i }\OperatorTok{\textgreater{}=} \DecValTok{4}\NormalTok{: }\ControlFlowTok{break}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
'{\n'
'  "meta" : {\n'
'    "view" : {\n'
'      "id" : "xn6j-b766",\n'
'      "name" : "COVID-19 Confirmed Cases",\n'
\end{verbatim}

We can use the \texttt{head} Unix command (which is where
\texttt{pandas}' \texttt{head} method comes from!) to see the first few
lines of the file:

\begin{Shaded}
\begin{Highlighting}[]
\OperatorTok{!}\NormalTok{head }\OperatorTok{{-}}\DecValTok{5}\NormalTok{ \{covid\_file\}}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
{
  "meta" : {
    "view" : {
      "id" : "xn6j-b766",
      "name" : "COVID-19 Confirmed Cases",
\end{verbatim}

To load the JSON file into \texttt{pandas}, let's first do some EDA with
Python's \texttt{json} package to understand the particular structure of
this JSON file so that we can decide what (if anything) to load into
\texttt{pandas}. Python has relatively good support for JSON data since
it closely matches the internal Python object model. In the following
cell, we import the entire JSON data file into a Python dictionary using
the \texttt{json} package.

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{import}\NormalTok{ json}

\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(covid\_file, }\StringTok{"rb"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:}
\NormalTok{    covid\_json }\OperatorTok{=}\NormalTok{ json.load(f)}
\end{Highlighting}
\end{Shaded}

The \texttt{covid\_json} variable is now a dictionary encoding the data
in the file:

\begin{Shaded}
\begin{Highlighting}[]
\BuiltInTok{type}\NormalTok{(covid\_json)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
dict
\end{verbatim}

We can examine what keys are in the top-level JSON object by listing out
the keys.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{covid\_json.keys()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
dict_keys(['meta', 'data'])
\end{verbatim}

\textbf{Observation}: The JSON dictionary contains a \texttt{meta} key
which likely refers to metadata (data about the data). Metadata is often
maintained with the data and can be a good source of additional
information.

We can investigate the metadata further by examining the keys associated
with the metadata.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{covid\_json[}\StringTok{\textquotesingle{}meta\textquotesingle{}}\NormalTok{].keys()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
dict_keys(['view'])
\end{verbatim}

The \texttt{meta} key contains another dictionary called \texttt{view}.
This likely refers to metadata about a particular ``view'' of some
underlying database. We will learn more about views when we study SQL
later in the class.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{covid\_json[}\StringTok{\textquotesingle{}meta\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}view\textquotesingle{}}\NormalTok{].keys()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
dict_keys(['id', 'name', 'assetType', 'attribution', 'averageRating', 'category', 'createdAt', 'description', 'displayType', 'downloadCount', 'hideFromCatalog', 'hideFromDataJson', 'newBackend', 'numberOfComments', 'oid', 'provenance', 'publicationAppendEnabled', 'publicationDate', 'publicationGroup', 'publicationStage', 'rowsUpdatedAt', 'rowsUpdatedBy', 'tableId', 'totalTimesRated', 'viewCount', 'viewLastModified', 'viewType', 'approvals', 'columns', 'grants', 'metadata', 'owner', 'query', 'rights', 'tableAuthor', 'tags', 'flags'])
\end{verbatim}

Notice that this is a nested/recursive data structure. As we dig deeper,
we reveal more and more keys and the corresponding data:

\begin{verbatim}
covid_json
|-> data
|     | ... (haven't explored yet)
|-> meta
      |-> view
            | -> id
            | -> name
            | -> attribution
            ...
            | -> description
            ...
            | -> columns
            ...
\end{verbatim}

There is a key called \texttt{description} in the \texttt{view}
sub-dictionary. This likely contains a description of the data:

\begin{Shaded}
\begin{Highlighting}[]
\BuiltInTok{print}\NormalTok{(covid\_json[}\StringTok{\textquotesingle{}meta\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}view\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}description\textquotesingle{}}\NormalTok{])}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Counts of confirmed COVID-19 cases among Berkeley residents by date.
\end{verbatim}

\subparagraph{Examining the Data Field for
Records}\label{examining-the-data-field-for-records}

We can look at a few entries in the \texttt{data} field. This is what
we'll load into \texttt{pandas}.

\begin{Shaded}
\begin{Highlighting}[]
\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in} \BuiltInTok{range}\NormalTok{(}\DecValTok{3}\NormalTok{):}
    \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"}\SpecialCharTok{\{}\NormalTok{i}\SpecialCharTok{:03\}}\SpecialStringTok{ | }\SpecialCharTok{\{}\NormalTok{covid\_json[}\StringTok{\textquotesingle{}data\textquotesingle{}}\NormalTok{][i]}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
000 | ['row-kzbg.v7my-c3y2', '00000000-0000-0000-0405-CB14DE51DAA7', 0, 1643733903, None, 1643733903, None, '{ }', '2020-02-28T00:00:00', '1', '1']
001 | ['row-jkyx_9u4r-h2yw', '00000000-0000-0000-F806-86D0DBE0E17F', 0, 1643733903, None, 1643733903, None, '{ }', '2020-02-29T00:00:00', '0', '1']
002 | ['row-qifg_4aug-y3ym', '00000000-0000-0000-2DCE-4D1872F9B216', 0, 1643733903, None, 1643733903, None, '{ }', '2020-03-01T00:00:00', '0', '1']
\end{verbatim}

Observations:

\begin{itemize}
\tightlist
\item
  These look like equal-length records, so maybe \texttt{data} is a
  table!
\item
  But what does each value in the record mean? Where can we find the
  column headers?
\end{itemize}

For that, we'll need the \texttt{columns} key in the metadata
dictionary.
This returns a list: + +\begin{Shaded} +\begin{Highlighting}[] +\BuiltInTok{type}\NormalTok{(covid\_json[}\StringTok{\textquotesingle{}meta\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}view\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}columns\textquotesingle{}}\NormalTok{])} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +list +\end{verbatim} + +\subparagraph{Summary of exploring the JSON +file}\label{summary-of-exploring-the-json-file} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + The above \textbf{metadata} tells us a lot about the columns in the + data including column names, potential data anomalies, and a basic + statistic. +\item + Because of its non-tabular structure, JSON makes it easier (than CSV) + to create \textbf{self-documenting data}, meaning that information + about the data is stored in the same file as the data. +\item + Self-documenting data can be helpful since it maintains its own + description and these descriptions are more likely to be updated as + data changes. +\end{enumerate} + +\subparagraph{\texorpdfstring{Loading COVID Data into +\texttt{pandas}}{Loading COVID Data into pandas}}\label{loading-covid-data-into-pandas} + +Finally, let's load the data (not the metadata) into a \texttt{pandas} +\texttt{DataFrame}. In the following block of code we: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Translate the JSON records into a \texttt{DataFrame}: + + \begin{itemize} + \tightlist + \item + fields: + \texttt{covid\_json{[}\textquotesingle{}meta\textquotesingle{}{]}{[}\textquotesingle{}view\textquotesingle{}{]}{[}\textquotesingle{}columns\textquotesingle{}{]}} + \item + records: + \texttt{covid\_json{[}\textquotesingle{}data\textquotesingle{}{]}} + \end{itemize} +\item + Remove columns that have no metadata description. This would be a bad + idea in general, but here we remove these columns since the above + analysis suggests they are unlikely to contain useful information. +\item + Examine the \texttt{tail} of the table. 
+\end{enumerate} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Load the data from JSON and assign column titles} +\NormalTok{covid }\OperatorTok{=}\NormalTok{ pd.DataFrame(} +\NormalTok{ covid\_json[}\StringTok{\textquotesingle{}data\textquotesingle{}}\NormalTok{],} +\NormalTok{ columns}\OperatorTok{=}\NormalTok{[c[}\StringTok{\textquotesingle{}name\textquotesingle{}}\NormalTok{] }\ControlFlowTok{for}\NormalTok{ c }\KeywordTok{in}\NormalTok{ covid\_json[}\StringTok{\textquotesingle{}meta\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}view\textquotesingle{}}\NormalTok{][}\StringTok{\textquotesingle{}columns\textquotesingle{}}\NormalTok{]])} + +\NormalTok{covid.tail()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllll@{}} +\toprule\noalign{} +& sid & id & position & created\_at & created\_meta & updated\_at & +updated\_meta & meta & Date & New Cases & Cumulative Cases \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +699 & row-49b6\_x8zv.gyum & 00000000-0000-0000-A18C-9174A6D05774 & 0 & +1643733903 & None & 1643733903 & None & \{ \} & 2022-01-27T00:00:00 & +106 & 10694 \\ +700 & row-gs55-p5em.y4v9 & 00000000-0000-0000-F41D-5724AEABB4D6 & 0 & +1643733903 & None & 1643733903 & None & \{ \} & 2022-01-28T00:00:00 & +223 & 10917 \\ +701 & row-3pyj.tf95-qu67 & 00000000-0000-0000-BEE3-B0188D2518BD & 0 & +1643733903 & None & 1643733903 & None & \{ \} & 2022-01-29T00:00:00 & +139 & 11056 \\ +702 & row-cgnd.8syv.jvjn & 00000000-0000-0000-C318-63CF75F7F740 & 0 & +1643733903 & None & 1643733903 & None & \{ \} & 2022-01-30T00:00:00 & 33 +& 11089 \\ +703 & row-qywv\_24x6-237y & 00000000-0000-0000-FE92-9789FED3AA20 & 0 & +1643733903 & None & 1643733903 & None & \{ \} & 2022-01-31T00:00:00 & 42 +& 11131 \\ +\end{longtable} + +\subsection{Primary and Foreign Keys}\label{primary-and-foreign-keys} + +Last time, we introduced \texttt{.merge} as the \texttt{pandas} method +for joining multiple \texttt{DataFrame}s together. In our discussion of +joins, we touched on the idea of using a ``key'' to determine what rows +should be merged from each table. Let's take a moment to examine this +idea more closely. + +The \textbf{primary key} is the column or set of columns in a table that +\emph{uniquely} determine the values of the remaining columns. It can be +thought of as the unique identifier for each individual row in the +table. For example, a table of Data 100 students might use each +student's Cal ID as the primary key. + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& Cal ID & Name & Major \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 3034619471 & Oski & Data Science \\ +1 & 3035619472 & Ollie & Computer Science \\ +2 & 3025619473 & Orrie & Data Science \\ +3 & 3046789372 & Ollie & Economics \\ +\end{longtable} + +The \textbf{foreign key} is the column or set of columns in a table that +reference primary keys in other tables. Knowing a dataset's foreign keys +can be useful when assigning the \texttt{left\_on} and +\texttt{right\_on} parameters of \texttt{.merge}. In the table of office +hour tickets below, \texttt{"Cal\ ID"} is a foreign key referencing the +previous table. 

\begin{longtable}[]{@{}llll@{}}
\toprule\noalign{}
& OH Request & Cal ID & Question \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & 1 & 3034619471 & HW 2 Q1 \\
1 & 2 & 3035619472 & HW 2 Q3 \\
2 & 3 & 3025619473 & Lab 3 Q4 \\
3 & 4 & 3035619472 & HW 2 Q7 \\
\end{longtable}

\subsection{Variable Types}\label{variable-types}

Variables are columns. A variable is a measurement of a particular
concept. Variables have two common properties: data type/storage type
and variable type/feature type. The data type of a variable indicates
how each variable value is stored in memory (integer, floating point,
boolean, etc.) and affects which \texttt{pandas} functions are used. The
variable type is a conceptualized measurement of information (and
therefore indicates what values a variable can take on). Variable type
is identified through expert knowledge, exploring the data itself, or
consulting the data codebook. The variable type affects how one
visualizes and interprets the data. In this class, ``variable types''
are conceptual.

After loading data from a file, it's a good idea to take the time to
understand what pieces of information are encoded in the dataset. In
particular, we want to identify what variable types are present in our
data. Broadly speaking, we can categorize variables into one of two
overarching types.

\textbf{Quantitative variables} describe some numeric quantity or
amount. We can divide quantitative data further into:

\begin{itemize}
\tightlist
\item
  \textbf{Continuous quantitative variables}: numeric data that can be
  measured on a continuous scale to arbitrary precision. Continuous
  variables do not have a strict set of possible values -- they can be
  recorded to any number of decimal places. For example, weights, GPA,
  or CO2 concentrations.
\item
  \textbf{Discrete quantitative variables}: numeric data that can only
  take on a finite set of possible values. For example, someone's age or
  the number of siblings they have.
\end{itemize}

\textbf{Qualitative variables}, also known as \textbf{categorical
variables}, describe data that isn't measuring some quantity or amount.
The sub-categories of categorical data are:

\begin{itemize}
\tightlist
\item
  \textbf{Ordinal qualitative variables}: categories with ordered
  levels. Specifically, ordinal variables are those where the difference
  between levels has no consistent, quantifiable meaning. Some examples
  include levels of education (high school, undergrad, grad, etc.),
  income bracket (low, medium, high), or Yelp rating.
\item
  \textbf{Nominal qualitative variables}: categories with no specific
  order. For example, someone's political affiliation or Cal ID number.
\end{itemize}

\begin{figure}[H]

{\centering \includegraphics{eda/images/variable.png}

}

\caption{Classification of variable types}

\end{figure}%

Note that many variables don't sit neatly in just one of these
categories. Qualitative variables could have numeric levels, and
conversely, quantitative variables could be stored as strings.

\section{Granularity, Scope, and
Temporality}\label{granularity-scope-and-temporality}

After understanding the structure of the dataset, the next task is to
determine what exactly the data represents. We'll do so by considering
the data's granularity, scope, and temporality.

\subsection{Granularity}\label{granularity}

The \textbf{granularity} of a dataset is what a single row represents.
+You can also think of it as the level of detail included in the data. To +determine the data's granularity, ask: what does each row in the dataset +represent? Fine-grained data contains a high level of detail, with a +single row representing a small individual unit. For example, each +record may represent one person. Coarse-grained data is encoded such +that a single row represents a large individual unit -- for example, +each record may represent a group of people. + +\subsection{Scope}\label{scope} + +The \textbf{scope} of a dataset is the subset of the population covered +by the data. If we were investigating student performance in Data +Science courses, a dataset with a narrow scope might encompass all +students enrolled in Data 100 whereas a dataset with an expansive scope +might encompass all students in California. + +\subsection{Temporality}\label{temporality} + +The \textbf{temporality} of a dataset describes the periodicity over +which the data was collected as well as when the data was most recently +collected or updated. + +Time and date fields of a dataset could represent a few things: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + when the ``event'' happened +\item + when the data was collected, or when it was entered into the system +\item + when the data was copied into the database +\end{enumerate} + +To fully understand the temporality of the data, it also may be +necessary to standardize time zones or inspect recurring time-based +trends in the data (do patterns recur in 24-hour periods? Over the +course of a month? Seasonally?). The convention for standardizing time +is the Coordinated Universal Time (UTC), an international time standard +measured at 0 degrees latitude that stays consistent throughout the year +(no daylight savings). We can represent Berkeley's time zone, Pacific +Standard Time (PST), as UTC-7 (with daylight savings). + +\subsubsection{\texorpdfstring{Temporality with \texttt{pandas}' +\texttt{dt} +accessors}{Temporality with pandas' dt accessors}}\label{temporality-with-pandas-dt-accessors} + +Let's briefly look at how we can use \texttt{pandas}' \texttt{dt} +accessors to work with dates/times in a dataset using the dataset you'll +see in Lab 3: the Berkeley PD Calls for Service dataset. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{calls }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/Berkeley\_PD\_{-}\_Calls\_for\_Service.csv"}\NormalTok{)} +\NormalTok{calls.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllll@{}} +\toprule\noalign{} +& CASENO & OFFENSE & EVENTDT & EVENTTM & CVLEGEND & CVDOW & InDbDate & +Block\_Location & BLKADDR & City & State \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 21014296 & THEFT MISD. (UNDER \$950) & 04/01/2021 12:00:00 AM & +10:58 & LARCENY & 4 & 06/15/2021 12:00:00 AM & Berkeley, +CA\textbackslash n(37.869058, -122.270455) & NaN & Berkeley & CA \\ +1 & 21014391 & THEFT MISD. (UNDER \$950) & 04/01/2021 12:00:00 AM & +10:38 & LARCENY & 4 & 06/15/2021 12:00:00 AM & Berkeley, +CA\textbackslash n(37.869058, -122.270455) & NaN & Berkeley & CA \\ +2 & 21090494 & THEFT MISD. (UNDER \$950) & 04/19/2021 12:00:00 AM & +12:15 & LARCENY & 1 & 06/15/2021 12:00:00 AM & 2100 BLOCK HASTE +ST\textbackslash nBerkeley, CA\textbackslash n(37.864908,... 
& 2100 +BLOCK HASTE ST & Berkeley & CA \\ +3 & 21090204 & THEFT FELONY (OVER \$950) & 02/13/2021 12:00:00 AM & +17:00 & LARCENY & 6 & 06/15/2021 12:00:00 AM & 2600 BLOCK WARRING +ST\textbackslash nBerkeley, CA\textbackslash n(37.86393... & 2600 BLOCK +WARRING ST & Berkeley & CA \\ +4 & 21090179 & BURGLARY AUTO & 02/08/2021 12:00:00 AM & 6:20 & BURGLARY +- VEHICLE & 1 & 06/15/2021 12:00:00 AM & 2700 BLOCK GARBER +ST\textbackslash nBerkeley, CA\textbackslash n(37.86066,... & 2700 BLOCK +GARBER ST & Berkeley & CA \\ +\end{longtable} + +Looks like there are three columns with dates/times: \texttt{EVENTDT}, +\texttt{EVENTTM}, and \texttt{InDbDate}. + +Most likely, \texttt{EVENTDT} stands for the date when the event took +place, \texttt{EVENTTM} stands for the time of day the event took place +(in 24-hr format), and \texttt{InDbDate} is the date this call is +recorded onto the database. + +If we check the data type of these columns, we will see they are stored +as strings. We can convert them to \texttt{datetime} objects using +pandas \texttt{to\_datetime} function. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{calls[}\StringTok{"EVENTDT"}\NormalTok{] }\OperatorTok{=}\NormalTok{ pd.to\_datetime(calls[}\StringTok{"EVENTDT"}\NormalTok{])} +\NormalTok{calls.head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +/var/folders/m7/89sj44pj21ddhplt2bn4qjcm0000gr/T/ipykernel_57962/874729699.py:1: UserWarning: + +Could not infer format, so each element will be parsed individually, falling back to `dateutil`. To ensure parsing is consistent and as-expected, please specify a format. +\end{verbatim} + +\begin{longtable}[]{@{}llllllllllll@{}} +\toprule\noalign{} +& CASENO & OFFENSE & EVENTDT & EVENTTM & CVLEGEND & CVDOW & InDbDate & +Block\_Location & BLKADDR & City & State \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 21014296 & THEFT MISD. (UNDER \$950) & 2021-04-01 & 10:58 & LARCENY +& 4 & 06/15/2021 12:00:00 AM & Berkeley, CA\textbackslash n(37.869058, +-122.270455) & NaN & Berkeley & CA \\ +1 & 21014391 & THEFT MISD. (UNDER \$950) & 2021-04-01 & 10:38 & LARCENY +& 4 & 06/15/2021 12:00:00 AM & Berkeley, CA\textbackslash n(37.869058, +-122.270455) & NaN & Berkeley & CA \\ +2 & 21090494 & THEFT MISD. (UNDER \$950) & 2021-04-19 & 12:15 & LARCENY +& 1 & 06/15/2021 12:00:00 AM & 2100 BLOCK HASTE +ST\textbackslash nBerkeley, CA\textbackslash n(37.864908,... & 2100 +BLOCK HASTE ST & Berkeley & CA \\ +3 & 21090204 & THEFT FELONY (OVER \$950) & 2021-02-13 & 17:00 & LARCENY +& 6 & 06/15/2021 12:00:00 AM & 2600 BLOCK WARRING +ST\textbackslash nBerkeley, CA\textbackslash n(37.86393... & 2600 BLOCK +WARRING ST & Berkeley & CA \\ +4 & 21090179 & BURGLARY AUTO & 2021-02-08 & 6:20 & BURGLARY - VEHICLE & +1 & 06/15/2021 12:00:00 AM & 2700 BLOCK GARBER +ST\textbackslash nBerkeley, CA\textbackslash n(37.86066,... & 2700 BLOCK +GARBER ST & Berkeley & CA \\ +\end{longtable} + +Now, we can use the \texttt{dt} accessor on this column. 
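+
+As an aside, the \texttt{UserWarning} above appears because
+\texttt{pandas} could not infer a single datetime format and fell back to
+parsing each entry individually. A minimal sketch of how we could avoid
+the warning -- assuming every \texttt{EVENTDT} string follows the
+\texttt{MM/DD/YYYY hh:mm:ss AM/PM} pattern shown in the preview, which is
+an assumption worth verifying -- is to pass an explicit \texttt{format}
+when parsing the raw strings:
+
+\begin{verbatim}
+# Sketch (not part of the original demo): re-read the raw strings and
+# parse them with an explicit format so pandas does not have to guess.
+raw_calls = pd.read_csv("data/Berkeley_PD_-_Calls_for_Service.csv")
+raw_calls["EVENTDT"] = pd.to_datetime(
+    raw_calls["EVENTDT"], format="%m/%d/%Y %I:%M:%S %p"
+)
+\end{verbatim}
+
+Either way, once the column has a \texttt{datetime64} dtype, the
+\texttt{dt} accessor behaves the same.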
+ +We can get the month: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{calls[}\StringTok{"EVENTDT"}\NormalTok{].dt.month.head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 4 +1 4 +2 4 +3 2 +4 2 +Name: EVENTDT, dtype: int32 +\end{verbatim} + +Which day of the week the date is on: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{calls[}\StringTok{"EVENTDT"}\NormalTok{].dt.dayofweek.head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 3 +1 3 +2 0 +3 5 +4 0 +Name: EVENTDT, dtype: int32 +\end{verbatim} + +Check the mimimum values to see if there are any suspicious-looking, 70s +dates: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{calls.sort\_values(}\StringTok{"EVENTDT"}\NormalTok{).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllll@{}} +\toprule\noalign{} +& CASENO & OFFENSE & EVENTDT & EVENTTM & CVLEGEND & CVDOW & InDbDate & +Block\_Location & BLKADDR & City & State \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +2513 & 20057398 & BURGLARY COMMERCIAL & 2020-12-17 & 16:05 & BURGLARY - +COMMERCIAL & 4 & 06/15/2021 12:00:00 AM & 600 BLOCK GILMAN +ST\textbackslash nBerkeley, CA\textbackslash n(37.878405,... & 600 BLOCK +GILMAN ST & Berkeley & CA \\ +624 & 20057207 & ASSAULT/BATTERY MISD. & 2020-12-17 & 16:50 & ASSAULT & +4 & 06/15/2021 12:00:00 AM & 2100 BLOCK SHATTUCK +AVE\textbackslash nBerkeley, CA\textbackslash n(37.871... & 2100 BLOCK +SHATTUCK AVE & Berkeley & CA \\ +154 & 20092214 & THEFT FROM AUTO & 2020-12-17 & 18:30 & LARCENY - FROM +VEHICLE & 4 & 06/15/2021 12:00:00 AM & 800 BLOCK SHATTUCK +AVE\textbackslash nBerkeley, CA\textbackslash n(37.8918... & 800 BLOCK +SHATTUCK AVE & Berkeley & CA \\ +659 & 20057324 & THEFT MISD. (UNDER \$950) & 2020-12-17 & 15:44 & +LARCENY & 4 & 06/15/2021 12:00:00 AM & 1800 BLOCK 4TH +ST\textbackslash nBerkeley, CA\textbackslash n(37.869888, -... & 1800 +BLOCK 4TH ST & Berkeley & CA \\ +993 & 20057573 & BURGLARY RESIDENTIAL & 2020-12-17 & 22:15 & BURGLARY - +RESIDENTIAL & 4 & 06/15/2021 12:00:00 AM & 1700 BLOCK STUART +ST\textbackslash nBerkeley, CA\textbackslash n(37.857495... & 1700 BLOCK +STUART ST & Berkeley & CA \\ +\end{longtable} + +Doesn't look like it! We are good! + +We can also do many things with the \texttt{dt} accessor like switching +time zones and converting time back to UNIX/POSIX time. Check out the +documentation on +\href{https://pandas.pydata.org/docs/user_guide/basics.html\#basics-dt-accessors}{\texttt{.dt} +accessor} and +\href{https://pandas.pydata.org/docs/user_guide/timeseries.html\#}{time +series/date functionality}. + +\section{Faithfulness}\label{faithfulness} + +At this stage in our data cleaning and EDA workflow, we've achieved +quite a lot: we've identified how our data is structured, come to terms +with what information it encodes, and gained insight as to how it was +generated. Throughout this process, we should always recall the original +intent of our work in Data Science -- to use data to better understand +and model the real world. To achieve this goal, we need to ensure that +the data we use is faithful to reality; that is, that our data +accurately captures the ``real world.'' + +Data used in research or industry is often ``messy'' -- there may be +errors or inaccuracies that impact the faithfulness of the dataset. 
+Signs that data may not be faithful include:
+
+\begin{itemize}
+\tightlist
+\item
+  Unrealistic or ``incorrect'' values, such as negative counts,
+  locations that don't exist, or dates set in the future
+\item
+  Violations of obvious dependencies, like an age that does not match a
+  birthday
+\item
+  Clear signs that data was entered by hand, which can lead to spelling
+  errors or fields that are incorrectly shifted
+\item
+  Signs of data falsification, such as fake email addresses or repeated
+  use of the same names
+\item
+  Duplicated records or fields containing the same information
+\item
+  Truncated data, e.g.~older versions of Microsoft Excel limited
+  spreadsheets to 65,536 rows and 256 columns
+\end{itemize}
+
+We often solve some of these more common issues in the following ways:
+
+\begin{itemize}
+\tightlist
+\item
+  Spelling errors: apply corrections or drop records that aren't in a
+  dictionary
+\item
+  Time zone inconsistencies: convert to a common time zone (e.g.~UTC)
+\item
+  Duplicated records or fields: identify and eliminate duplicates (using
+  primary keys)
+\item
+  Unspecified or inconsistent units: infer the units and check that
+  values are in reasonable ranges in the data
+\end{itemize}
+
+\subsection{Missing Values}\label{missing-values}
+
+Another common issue encountered with real-world datasets is that of
+missing data. One strategy to resolve this is to simply drop any records
+with missing values from the dataset. This does, however, introduce the
+risk of inducing biases -- it is possible that the missing or corrupt
+records may be systemically related to some feature of interest in the
+data. Another solution is to keep the data as \texttt{NaN} values.
+
+A third method to address missing data is to perform
+\textbf{imputation}: infer the missing values using other data available
+in the dataset. There is a wide variety of imputation techniques that
+can be implemented; some of the most common are listed below.
+
+\begin{itemize}
+\tightlist
+\item
+  Average imputation: replace missing values with the average value for
+  that field
+\item
+  Hot deck imputation: replace missing values with a value drawn at
+  random from similar observed records
+\item
+  Regression imputation: develop a model to predict missing values and
+  replace with the predicted value from the model
+\item
+  Multiple imputation: impute the missing values several times, creating
+  multiple plausible datasets, and combine the results
+\end{itemize}
+
+Regardless of the strategy used to deal with missing data, we should
+think carefully about \emph{why} particular records or fields may be
+missing -- this can help inform whether or not the absence of these
+values is significant or meaningful.
+
+\section{EDA Demo 1: Tuberculosis in the United
+States}\label{eda-demo-1-tuberculosis-in-the-united-states}
+
+Now, let's walk through the data-cleaning and EDA workflow to see what
+we can learn about the presence of Tuberculosis in the United States!
+
+We will examine the data included in the
+\href{https://www.cdc.gov/mmwr/volumes/71/wr/mm7112a1.htm?s_cid=mm7112a1_w\#T1_down}{original
+CDC article} published in 2021.
+
+\subsection{CSVs and Field Names}\label{csvs-and-field-names}
+
+Suppose Table 1 was saved as a CSV file located in
+\texttt{data/cdc\_tuberculosis.csv}.
+
+We can then explore the CSV (which is a text file, and does not contain
+binary-encoded data) in many ways: 1. Using a text editor like emacs,
+vim, VSCode, etc. 2. Opening the CSV directly in DataHub (read-only),
+Excel, Google Sheets, etc. 3. The \texttt{Python} file object 4.
+\texttt{pandas}, using \texttt{pd.read\_csv()} + +To try out options 1 and 2, you can view or download the Tuberculosis +from the +\href{https://data100.datahub.berkeley.edu/hub/user-redirect/git-pull?repo=https\%3A\%2F\%2Fgithub.com\%2FDS-100\%2Ffa23-student&urlpath=lab\%2Ftree\%2Ffa23-student\%2Flecture\%2Flec05\%2Flec04-eda.ipynb&branch=main}{lecture +demo notebook} under the \texttt{data} folder in the left hand menu. +Notice how the CSV file is a type of \textbf{rectangular data (i.e., +tabular data) stored as comma-separated values}. + +Next, let's try out option 3 using the \texttt{Python} file object. +We'll look at the first four lines: + +\begin{Shaded} +\begin{Highlighting}[] +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{"data/cdc\_tuberculosis.csv"}\NormalTok{, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ i }\OperatorTok{=} \DecValTok{0} + \ControlFlowTok{for}\NormalTok{ row }\KeywordTok{in}\NormalTok{ f:} + \BuiltInTok{print}\NormalTok{(row)} +\NormalTok{ i }\OperatorTok{+=} \DecValTok{1} + \ControlFlowTok{if}\NormalTok{ i }\OperatorTok{\textgreater{}} \DecValTok{3}\NormalTok{:} + \ControlFlowTok{break} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +,No. of TB cases,,,TB incidence,, + +U.S. jurisdiction,2019,2020,2021,2019,2020,2021 + +Total,"8,900","7,173","7,860",2.71,2.16,2.37 + +Alabama,87,72,92,1.77,1.43,1.83 +\end{verbatim} + +Whoa, why are there blank lines interspaced between the lines of the +CSV? + +You may recall that all line breaks in text files are encoded as the +special newline character \texttt{\textbackslash{}n}. Python's +\texttt{print()} prints each string (including the newline), and an +additional newline on top of that. + +If you're curious, we can use the \texttt{repr()} function to return the +raw string with all special characters: + +\begin{Shaded} +\begin{Highlighting}[] +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{"data/cdc\_tuberculosis.csv"}\NormalTok{, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ i }\OperatorTok{=} \DecValTok{0} + \ControlFlowTok{for}\NormalTok{ row }\KeywordTok{in}\NormalTok{ f:} + \BuiltInTok{print}\NormalTok{(}\BuiltInTok{repr}\NormalTok{(row)) }\CommentTok{\# print raw strings} +\NormalTok{ i }\OperatorTok{+=} \DecValTok{1} + \ControlFlowTok{if}\NormalTok{ i }\OperatorTok{\textgreater{}} \DecValTok{3}\NormalTok{:} + \ControlFlowTok{break} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +',No. of TB cases,,,TB incidence,,\n' +'U.S. jurisdiction,2019,2020,2021,2019,2020,2021\n' +'Total,"8,900","7,173","7,860",2.71,2.16,2.37\n' +'Alabama,87,72,92,1.77,1.43,1.83\n' +\end{verbatim} + +Finally, let's try option 4 and use the tried-and-true Data 100 +approach: \texttt{pandas}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/cdc\_tuberculosis.csv"}\NormalTok{)} +\NormalTok{tb\_df.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Unnamed: 0 & No. of TB cases & Unnamed: 2 & Unnamed: 3 & TB incidence +& Unnamed: 5 & Unnamed: 6 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & U.S. 
jurisdiction & 2019 & 2020 & 2021 & 2019.00 & 2020.00 & +2021.00 \\ +1 & Total & 8,900 & 7,173 & 7,860 & 2.71 & 2.16 & 2.37 \\ +2 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +3 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +4 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +\end{longtable} + +You may notice some strange things about this table: what's up with the +``Unnamed'' column names and the first row? + +Congratulations --- you're ready to wrangle your data! Because of how +things are stored, we'll need to clean the data a bit to name our +columns better. + +A reasonable first step is to identify the row with the right header. +The \texttt{pd.read\_csv()} function +(\href{https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html}{documentation}) +has the convenient \texttt{header} parameter that we can set to use the +elements in row 1 as the appropriate columns: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/cdc\_tuberculosis.csv"}\NormalTok{, header}\OperatorTok{=}\DecValTok{1}\NormalTok{) }\CommentTok{\# row index} +\NormalTok{tb\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& U.S. jurisdiction & 2019 & 2020 & 2021 & 2019.1 & 2020.1 & 2021.1 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Total & 8,900 & 7,173 & 7,860 & 2.71 & 2.16 & 2.37 \\ +1 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +2 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +3 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +4 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 \\ +\end{longtable} + +Wait\ldots but now we can't differentiate betwen the ``Number of TB +cases'' and ``TB incidence'' year columns. \texttt{pandas} has tried to +make our lives easier by automatically adding ``.1'' to the latter +columns, but this doesn't help us, as humans, understand the data. + +We can do this manually with \texttt{df.rename()} +(\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rename.html?highlight=rename\#pandas.DataFrame.rename}{documentation}): + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{rename\_dict }\OperatorTok{=}\NormalTok{ \{}\StringTok{\textquotesingle{}2019\textquotesingle{}}\NormalTok{: }\StringTok{\textquotesingle{}TB cases 2019\textquotesingle{}}\NormalTok{,} + \StringTok{\textquotesingle{}2020\textquotesingle{}}\NormalTok{: }\StringTok{\textquotesingle{}TB cases 2020\textquotesingle{}}\NormalTok{,} + \StringTok{\textquotesingle{}2021\textquotesingle{}}\NormalTok{: }\StringTok{\textquotesingle{}TB cases 2021\textquotesingle{}}\NormalTok{,} + \StringTok{\textquotesingle{}2019.1\textquotesingle{}}\NormalTok{: }\StringTok{\textquotesingle{}TB incidence 2019\textquotesingle{}}\NormalTok{,} + \StringTok{\textquotesingle{}2020.1\textquotesingle{}}\NormalTok{: }\StringTok{\textquotesingle{}TB incidence 2020\textquotesingle{}}\NormalTok{,} + \StringTok{\textquotesingle{}2021.1\textquotesingle{}}\NormalTok{: }\StringTok{\textquotesingle{}TB incidence 2021\textquotesingle{}}\NormalTok{\}} +\NormalTok{tb\_df }\OperatorTok{=}\NormalTok{ tb\_df.rename(columns}\OperatorTok{=}\NormalTok{rename\_dict)} +\NormalTok{tb\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& U.S. 
jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Total & 8,900 & 7,173 & 7,860 & 2.71 & 2.16 & 2.37 \\ +1 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +2 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +3 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +4 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 \\ +\end{longtable} + +\subsection{Record Granularity}\label{record-granularity} + +You might already be wondering: what's up with that first record? + +Row 0 is what we call a \textbf{rollup record}, or summary record. It's +often useful when displaying tables to humans. The \textbf{granularity} +of record 0 (Totals) vs the rest of the records (States) is different. + +Okay, EDA step two. How was the rollup record aggregated? + +Let's check if Total TB cases is the sum of all state TB cases. If we +sum over all rows, we should get \textbf{2x} the total cases in each of +our TB cases by year (why do you think this is?). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df.}\BuiltInTok{sum}\NormalTok{(axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +U.S. jurisdiction TotalAlabamaAlaskaArizonaArkansasCaliforniaCol... +TB cases 2019 8,9008758183642,111666718245583029973261085237... +TB cases 2020 7,1737258136591,706525417194122219282169239376... +TB cases 2021 7,8609258129691,750585443194992281064255127494... +TB incidence 2019 109.94 +TB incidence 2020 93.09 +TB incidence 2021 102.94 +dtype: object +\end{verbatim} + +Whoa, what's going on with the TB cases in 2019, 2020, and 2021? Check +out the column types: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df.dtypes} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +U.S. jurisdiction object +TB cases 2019 object +TB cases 2020 object +TB cases 2021 object +TB incidence 2019 float64 +TB incidence 2020 float64 +TB incidence 2021 float64 +dtype: object +\end{verbatim} + +Since there are commas in the values for TB cases, the numbers are read +as the \texttt{object} datatype, or \textbf{storage type} (close to the +\texttt{Python} string datatype), so \texttt{pandas} is concatenating +strings instead of adding integers (recall that Python can ``sum'', or +concatenate, strings together: \texttt{"data"\ +\ "100"} evaluates to +\texttt{"data100"}). + +Fortunately \texttt{read\_csv} also has a \texttt{thousands} parameter +(\href{https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html}{documentation}): + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# improve readability: chaining method calls with outer parentheses/line breaks} +\NormalTok{tb\_df }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ pd.read\_csv(}\StringTok{"data/cdc\_tuberculosis.csv"}\NormalTok{, header}\OperatorTok{=}\DecValTok{1}\NormalTok{, thousands}\OperatorTok{=}\StringTok{\textquotesingle{},\textquotesingle{}}\NormalTok{)} +\NormalTok{ .rename(columns}\OperatorTok{=}\NormalTok{rename\_dict)} +\NormalTok{)} +\NormalTok{tb\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& U.S. 
jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Total & 8900 & 7173 & 7860 & 2.71 & 2.16 & 2.37 \\ +1 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +2 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +3 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +4 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df.}\BuiltInTok{sum}\NormalTok{()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +U.S. jurisdiction TotalAlabamaAlaskaArizonaArkansasCaliforniaCol... +TB cases 2019 17800 +TB cases 2020 14346 +TB cases 2021 15720 +TB incidence 2019 109.94 +TB incidence 2020 93.09 +TB incidence 2021 102.94 +dtype: object +\end{verbatim} + +The total TB cases look right. Phew! + +Let's just look at the records with \textbf{state-level granularity}: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{state\_tb\_df }\OperatorTok{=}\NormalTok{ tb\_df[}\DecValTok{1}\NormalTok{:]} +\NormalTok{state\_tb\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& U.S. jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +2 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +3 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +4 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 \\ +5 & California & 2111 & 1706 & 1750 & 5.35 & 4.32 & 4.46 \\ +\end{longtable} + +\subsection{Gather Census Data}\label{gather-census-data} + +U.S. Census population estimates +\href{https://www.census.gov/data/tables/time-series/demo/popest/2010s-state-total.html}{source} +(2019), +\href{https://www.census.gov/data/tables/time-series/demo/popest/2020s-state-total.html}{source} +(2020-2021). + +Running the below cells cleans the data. There are a few new methods +here: * \texttt{df.convert\_dtypes()} +(\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.convert_dtypes.html}{documentation}) +conveniently converts all float dtypes into ints and is out of scope for +the class. * \texttt{df.drop\_na()} +(\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.dropna.html}{documentation}) +will be explained in more detail next time. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# 2010s census data} +\NormalTok{census\_2010s\_df }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/nst{-}est2019{-}01.csv"}\NormalTok{, header}\OperatorTok{=}\DecValTok{3}\NormalTok{, thousands}\OperatorTok{=}\StringTok{","}\NormalTok{)} +\NormalTok{census\_2010s\_df }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ census\_2010s\_df} +\NormalTok{ .reset\_index()} +\NormalTok{ .drop(columns}\OperatorTok{=}\NormalTok{[}\StringTok{"index"}\NormalTok{, }\StringTok{"Census"}\NormalTok{, }\StringTok{"Estimates Base"}\NormalTok{])} +\NormalTok{ .rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{"Unnamed: 0"}\NormalTok{: }\StringTok{"Geographic Area"}\NormalTok{\})} +\NormalTok{ .convert\_dtypes() }\CommentTok{\# "smart" converting of columns, use at your own risk} +\NormalTok{ .dropna() }\CommentTok{\# we\textquotesingle{}ll introduce this next time} +\NormalTok{)} +\NormalTok{census\_2010s\_df[}\StringTok{\textquotesingle{}Geographic Area\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ census\_2010s\_df[}\StringTok{\textquotesingle{}Geographic Area\textquotesingle{}}\NormalTok{].}\BuiltInTok{str}\NormalTok{.strip(}\StringTok{\textquotesingle{}.\textquotesingle{}}\NormalTok{)} + +\CommentTok{\# with pd.option\_context(\textquotesingle{}display.min\_rows\textquotesingle{}, 30): \# shows more rows} +\CommentTok{\# display(census\_2010s\_df)} + +\NormalTok{census\_2010s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllll@{}} +\toprule\noalign{} +& Geographic Area & 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & +2017 & 2018 & 2019 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & United States & 309321666 & 311556874 & 313830990 & 315993715 & +318301008 & 320635163 & 322941311 & 324985539 & 326687501 & 328239523 \\ +1 & Northeast & 55380134 & 55604223 & 55775216 & 55901806 & 56006011 & +56034684 & 56042330 & 56059240 & 56046620 & 55982803 \\ +2 & Midwest & 66974416 & 67157800 & 67336743 & 67560379 & 67745167 & +67860583 & 67987540 & 68126781 & 68236628 & 68329004 \\ +3 & South & 114866680 & 116006522 & 117241208 & 118364400 & 119624037 & +120997341 & 122351760 & 123542189 & 124569433 & 125580448 \\ +4 & West & 72100436 & 72788329 & 73477823 & 74167130 & 74925793 & +75742555 & 76559681 & 77257329 & 77834820 & 78347268 \\ +\end{longtable} + +Occasionally, you will want to modify code that you have imported. 
To +reimport those modifications you can either use \texttt{python}'s +\texttt{importlib} library: + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ importlib }\ImportTok{import} \BuiltInTok{reload} +\BuiltInTok{reload}\NormalTok{(utils)} +\end{Highlighting} +\end{Shaded} + +or use \texttt{iPython} magic which will intelligently import code when +files change: + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%}\NormalTok{load\_ext autoreload} +\OperatorTok{\%}\NormalTok{autoreload }\DecValTok{2} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# census 2020s data} +\NormalTok{census\_2020s\_df }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/NST{-}EST2022{-}POP.csv"}\NormalTok{, header}\OperatorTok{=}\DecValTok{3}\NormalTok{, thousands}\OperatorTok{=}\StringTok{","}\NormalTok{)} +\NormalTok{census\_2020s\_df }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ census\_2020s\_df} +\NormalTok{ .reset\_index()} +\NormalTok{ .drop(columns}\OperatorTok{=}\NormalTok{[}\StringTok{"index"}\NormalTok{, }\StringTok{"Unnamed: 1"}\NormalTok{])} +\NormalTok{ .rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{"Unnamed: 0"}\NormalTok{: }\StringTok{"Geographic Area"}\NormalTok{\})} +\NormalTok{ .convert\_dtypes() }\CommentTok{\# "smart" converting of columns, use at your own risk} +\NormalTok{ .dropna() }\CommentTok{\# we\textquotesingle{}ll introduce this next time} +\NormalTok{)} +\NormalTok{census\_2020s\_df[}\StringTok{\textquotesingle{}Geographic Area\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ census\_2020s\_df[}\StringTok{\textquotesingle{}Geographic Area\textquotesingle{}}\NormalTok{].}\BuiltInTok{str}\NormalTok{.strip(}\StringTok{\textquotesingle{}.\textquotesingle{}}\NormalTok{)} + +\NormalTok{census\_2020s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& Geographic Area & 2020 & 2021 & 2022 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & United States & 331511512 & 332031554 & 333287557 \\ +1 & Northeast & 57448898 & 57259257 & 57040406 \\ +2 & Midwest & 68961043 & 68836505 & 68787595 \\ +3 & South & 126450613 & 127346029 & 128716192 \\ +4 & West & 78650958 & 78589763 & 78743364 \\ +\end{longtable} + +\subsection{\texorpdfstring{Joining Data (Merging +\texttt{DataFrame}s)}{Joining Data (Merging DataFrames)}}\label{joining-data-merging-dataframes} + +Time to \texttt{merge}! Here we use the \texttt{DataFrame} method +\texttt{df1.merge(right=df2,\ ...)} on \texttt{DataFrame} \texttt{df1} +(\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html}{documentation}). +Contrast this with the function +\texttt{pd.merge(left=df1,\ right=df2,\ ...)} +(\href{https://pandas.pydata.org/docs/reference/api/pandas.merge.html?highlight=pandas\%20merge\#pandas.merge}{documentation}). +Feel free to use either. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# merge TB DataFrame with two US census DataFrames} +\NormalTok{tb\_census\_df }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ tb\_df} +\NormalTok{ .merge(right}\OperatorTok{=}\NormalTok{census\_2010s\_df,} +\NormalTok{ left\_on}\OperatorTok{=}\StringTok{"U.S. jurisdiction"}\NormalTok{, right\_on}\OperatorTok{=}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{ .merge(right}\OperatorTok{=}\NormalTok{census\_2020s\_df,} +\NormalTok{ left\_on}\OperatorTok{=}\StringTok{"U.S. 
jurisdiction"}\NormalTok{, right\_on}\OperatorTok{=}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{)} +\NormalTok{tb\_census\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllllllllllllllll@{}} +\toprule\noalign{} +& U.S. jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 & Geographic +Area\_x & 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & +2019 & Geographic Area\_y & 2020 & 2021 & 2022 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 & Alabama & 4785437 & +4799069 & 4815588 & 4830081 & 4841799 & 4852347 & 4863525 & 4874486 & +4887681 & 4903185 & Alabama & 5031362 & 5049846 & 5074296 \\ +1 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 & Alaska & 713910 & +722128 & 730443 & 737068 & 736283 & 737498 & 741456 & 739700 & 735139 & +731545 & Alaska & 732923 & 734182 & 733583 \\ +2 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 & Arizona & 6407172 & +6472643 & 6554978 & 6632764 & 6730413 & 6829676 & 6941072 & 7044008 & +7158024 & 7278717 & Arizona & 7179943 & 7264877 & 7359197 \\ +3 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 & Arkansas & 2921964 & +2940667 & 2952164 & 2959400 & 2967392 & 2978048 & 2989918 & 3001345 & +3009733 & 3017804 & Arkansas & 3014195 & 3028122 & 3045637 \\ +4 & California & 2111 & 1706 & 1750 & 5.35 & 4.32 & 4.46 & California & +37319502 & 37638369 & 37948800 & 38260787 & 38596972 & 38918045 & +39167117 & 39358497 & 39461588 & 39512223 & California & 39501653 & +39142991 & 39029342 \\ +\end{longtable} + +Having all of these columns is a little unwieldy. We could either drop +the unneeded columns now, or just merge on smaller census +\texttt{DataFrame}s. Let's do the latter. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# try merging again, but cleaner this time} +\NormalTok{tb\_census\_df }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ tb\_df} +\NormalTok{ .merge(right}\OperatorTok{=}\NormalTok{census\_2010s\_df[[}\StringTok{"Geographic Area"}\NormalTok{, }\StringTok{"2019"}\NormalTok{]],} +\NormalTok{ left\_on}\OperatorTok{=}\StringTok{"U.S. jurisdiction"}\NormalTok{, right\_on}\OperatorTok{=}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{ .drop(columns}\OperatorTok{=}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{ .merge(right}\OperatorTok{=}\NormalTok{census\_2020s\_df[[}\StringTok{"Geographic Area"}\NormalTok{, }\StringTok{"2020"}\NormalTok{, }\StringTok{"2021"}\NormalTok{]],} +\NormalTok{ left\_on}\OperatorTok{=}\StringTok{"U.S. jurisdiction"}\NormalTok{, right\_on}\OperatorTok{=}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{ .drop(columns}\OperatorTok{=}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{)} +\NormalTok{tb\_census\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllll@{}} +\toprule\noalign{} +& U.S. 
jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 & 2019 & 2020 & +2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 & 4903185 & 5031362 & +5049846 \\ +1 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 & 731545 & 732923 & +734182 \\ +2 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 & 7278717 & 7179943 & +7264877 \\ +3 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 & 3017804 & 3014195 & +3028122 \\ +4 & California & 2111 & 1706 & 1750 & 5.35 & 4.32 & 4.46 & 39512223 & +39501653 & 39142991 \\ +\end{longtable} + +\subsection{Reproducing Data: Compute +Incidence}\label{reproducing-data-compute-incidence} + +Let's recompute incidence to make sure we know where the original CDC +numbers came from. + +From the +\href{https://www.cdc.gov/mmwr/volumes/71/wr/mm7112a1.htm?s_cid=mm7112a1_w\#T1_down}{CDC +report}: TB incidence is computed as ``Cases per 100,000 persons using +mid-year population estimates from the U.S. Census Bureau.'' + +If we define a group as 100,000 people, then we can compute the TB +incidence for a given state population as + +\[\text{TB incidence} = \frac{\text{TB cases in population}}{\text{groups in population}} = \frac{\text{TB cases in population}}{\text{population}/100000} \] + +\[= \frac{\text{TB cases in population}}{\text{population}} \times 100000\] + +Let's try this for 2019: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_census\_df[}\StringTok{"recompute incidence 2019"}\NormalTok{] }\OperatorTok{=}\NormalTok{ tb\_census\_df[}\StringTok{"TB cases 2019"}\NormalTok{]}\OperatorTok{/}\NormalTok{tb\_census\_df[}\StringTok{"2019"}\NormalTok{]}\OperatorTok{*}\DecValTok{100000} +\NormalTok{tb\_census\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllll@{}} +\toprule\noalign{} +& U.S. jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 & 2019 & 2020 & +2021 & recompute incidence 2019 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 & 4903185 & 5031362 & +5049846 & 1.77 \\ +1 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 & 731545 & 732923 & +734182 & 7.93 \\ +2 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 & 7278717 & 7179943 & +7264877 & 2.51 \\ +3 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 & 3017804 & 3014195 & +3028122 & 2.12 \\ +4 & California & 2111 & 1706 & 1750 & 5.35 & 4.32 & 4.46 & 39512223 & +39501653 & 39142991 & 5.34 \\ +\end{longtable} + +Awesome!!! + +Let's use a for-loop and Python format strings to compute TB incidence +for all years. Python f-strings are just used for the purposes of this +demo, but they're handy to know when you explore data beyond this course +(\href{https://docs.python.org/3/tutorial/inputoutput.html}{documentation}). 
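+
+As a quick illustration (not part of the original demo), an f-string
+substitutes the value of any expression written inside curly braces
+directly into the string, which is exactly how the column names are
+built in the loop below:
+
+\begin{verbatim}
+year = 2019
+f"TB cases {year}"   # evaluates to the string "TB cases 2019"
+\end{verbatim}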
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# recompute incidence for all years} +\ControlFlowTok{for}\NormalTok{ year }\KeywordTok{in}\NormalTok{ [}\DecValTok{2019}\NormalTok{, }\DecValTok{2020}\NormalTok{, }\DecValTok{2021}\NormalTok{]:} +\NormalTok{ tb\_census\_df[}\SpecialStringTok{f"recompute incidence }\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{] }\OperatorTok{=}\NormalTok{ tb\_census\_df[}\SpecialStringTok{f"TB cases }\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{]}\OperatorTok{/}\NormalTok{tb\_census\_df[}\SpecialStringTok{f"}\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{]}\OperatorTok{*}\DecValTok{100000} +\NormalTok{tb\_census\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllllll@{}} +\toprule\noalign{} +& U.S. jurisdiction & TB cases 2019 & TB cases 2020 & TB cases 2021 & TB +incidence 2019 & TB incidence 2020 & TB incidence 2021 & 2019 & 2020 & +2021 & recompute incidence 2019 & recompute incidence 2020 & recompute +incidence 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 & 4903185 & 5031362 & +5049846 & 1.77 & 1.43 & 1.82 \\ +1 & Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 & 731545 & 732923 & +734182 & 7.93 & 7.91 & 7.90 \\ +2 & Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 & 7278717 & 7179943 & +7264877 & 2.51 & 1.89 & 1.78 \\ +3 & Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 & 3017804 & 3014195 & +3028122 & 2.12 & 1.96 & 2.28 \\ +4 & California & 2111 & 1706 & 1750 & 5.35 & 4.32 & 4.46 & 39512223 & +39501653 & 39142991 & 5.34 & 4.32 & 4.47 \\ +\end{longtable} + +These numbers look pretty close!!! There are a few errors in the +hundredths place, particularly in 2021. It may be useful to further +explore reasons behind this discrepancy. 
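+
+One quick way to quantify those discrepancies (a small sketch that goes
+slightly beyond the original demo) is to compare the CDC's reported
+incidence columns against our recomputed ones directly and look at the
+largest gap per year:
+
+\begin{verbatim}
+# Sketch: largest absolute gap between reported and recomputed incidence
+for year in [2019, 2020, 2021]:
+    gap = (tb_census_df[f"TB incidence {year}"]
+           - tb_census_df[f"recompute incidence {year}"]).abs()
+    print(year, gap.max())
+\end{verbatim}
+
+The \texttt{describe()} output below also gives a broader summary of
+both sets of columns.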
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_census\_df.describe()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllllll@{}} +\toprule\noalign{} +& TB cases 2019 & TB cases 2020 & TB cases 2021 & TB incidence 2019 & TB +incidence 2020 & TB incidence 2021 & 2019 & 2020 & 2021 & recompute +incidence 2019 & recompute incidence 2020 & recompute incidence 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +count & 51.00 & 51.00 & 51.00 & 51.00 & 51.00 & 51.00 & 51.00 & 51.00 & +51.00 & 51.00 & 51.00 & 51.00 \\ +mean & 174.51 & 140.65 & 154.12 & 2.10 & 1.78 & 1.97 & 6436069.08 & +6500225.73 & 6510422.63 & 2.10 & 1.78 & 1.97 \\ +std & 341.74 & 271.06 & 286.78 & 1.50 & 1.34 & 1.48 & 7360660.47 & +7408168.46 & 7394300.08 & 1.50 & 1.34 & 1.47 \\ +min & 1.00 & 0.00 & 2.00 & 0.17 & 0.00 & 0.21 & 578759.00 & 577605.00 & +579483.00 & 0.17 & 0.00 & 0.21 \\ +25\% & 25.50 & 29.00 & 23.00 & 1.29 & 1.21 & 1.23 & 1789606.00 & +1820311.00 & 1844920.00 & 1.30 & 1.21 & 1.23 \\ +50\% & 70.00 & 67.00 & 69.00 & 1.80 & 1.52 & 1.70 & 4467673.00 & +4507445.00 & 4506589.00 & 1.81 & 1.52 & 1.69 \\ +75\% & 180.50 & 139.00 & 150.00 & 2.58 & 1.99 & 2.22 & 7446805.00 & +7451987.00 & 7502811.00 & 2.58 & 1.99 & 2.22 \\ +max & 2111.00 & 1706.00 & 1750.00 & 7.91 & 7.92 & 7.92 & 39512223.00 & +39501653.00 & 39142991.00 & 7.93 & 7.91 & 7.90 \\ +\end{longtable} + +\subsection{Bonus EDA: Reproducing the Reported +Statistic}\label{bonus-eda-reproducing-the-reported-statistic} + +\textbf{How do we reproduce that reported statistic in the original +\href{https://www.cdc.gov/mmwr/volumes/71/wr/mm7112a1.htm?s_cid=mm7112a1_w}{CDC +report}?} + +\begin{quote} +Reported TB incidence (cases per 100,000 persons) increased +\textbf{9.4\%}, from \textbf{2.2} during 2020 to \textbf{2.4} during +2021 but was lower than incidence during 2019 (2.7). Increases occurred +among both U.S.-born and non--U.S.-born persons. +\end{quote} + +This is TB incidence computed across the entire U.S. population! How do +we reproduce this? * We need to reproduce the ``Total'' TB incidences in +our rolled record. * But our current \texttt{tb\_census\_df} only has 51 +entries (50 states plus Washington, D.C.). There is no rolled record. * +What happened\ldots? + +Let's get exploring! + +Before we keep exploring, we'll set all indexes to more meaningful +values, instead of just numbers that pertain to some row at some point. +This will make our cleaning slightly easier. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df }\OperatorTok{=}\NormalTok{ tb\_df.set\_index(}\StringTok{"U.S. jurisdiction"}\NormalTok{)} +\NormalTok{tb\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& TB cases 2019 & TB cases 2020 & TB cases 2021 & TB incidence 2019 & TB +incidence 2020 & TB incidence 2021 \\ +U.S. 
jurisdiction & & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Total & 8900 & 7173 & 7860 & 2.71 & 2.16 & 2.37 \\ +Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{census\_2010s\_df }\OperatorTok{=}\NormalTok{ census\_2010s\_df.set\_index(}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{census\_2010s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllll@{}} +\toprule\noalign{} +& 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\ +Geographic Area & & & & & & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +United States & 309321666 & 311556874 & 313830990 & 315993715 & +318301008 & 320635163 & 322941311 & 324985539 & 326687501 & 328239523 \\ +Northeast & 55380134 & 55604223 & 55775216 & 55901806 & 56006011 & +56034684 & 56042330 & 56059240 & 56046620 & 55982803 \\ +Midwest & 66974416 & 67157800 & 67336743 & 67560379 & 67745167 & +67860583 & 67987540 & 68126781 & 68236628 & 68329004 \\ +South & 114866680 & 116006522 & 117241208 & 118364400 & 119624037 & +120997341 & 122351760 & 123542189 & 124569433 & 125580448 \\ +West & 72100436 & 72788329 & 73477823 & 74167130 & 74925793 & 75742555 & +76559681 & 77257329 & 77834820 & 78347268 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{census\_2020s\_df }\OperatorTok{=}\NormalTok{ census\_2020s\_df.set\_index(}\StringTok{"Geographic Area"}\NormalTok{)} +\NormalTok{census\_2020s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& 2020 & 2021 & 2022 \\ +Geographic Area & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +United States & 331511512 & 332031554 & 333287557 \\ +Northeast & 57448898 & 57259257 & 57040406 \\ +Midwest & 68961043 & 68836505 & 68787595 \\ +South & 126450613 & 127346029 & 128716192 \\ +West & 78650958 & 78589763 & 78743364 \\ +\end{longtable} + +It turns out that our merge above only kept state records, even though +our original \texttt{tb\_df} had the ``Total'' rolled record: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_df.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& TB cases 2019 & TB cases 2020 & TB cases 2021 & TB incidence 2019 & TB +incidence 2020 & TB incidence 2021 \\ +U.S. jurisdiction & & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Total & 8900 & 7173 & 7860 & 2.71 & 2.16 & 2.37 \\ +Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 \\ +Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 \\ +Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 \\ +Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 \\ +\end{longtable} + +Recall that \texttt{merge} by default does an \textbf{inner} merge by +default, meaning that it only preserves keys that are present in +\textbf{both} \texttt{DataFrame}s. 
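+
+A handy way to see exactly which keys an inner merge would drop -- a
+quick diagnostic sketch, not something the rest of this walkthrough
+depends on -- is to perform an outer merge with \texttt{indicator=True}
+and inspect the resulting \texttt{\_merge} column, which labels each row
+as \texttt{both}, \texttt{left\_only}, or \texttt{right\_only}:
+
+\begin{verbatim}
+# Sketch: an outer merge keeps unmatched keys, and indicator=True adds a
+# "_merge" column flagging which side each key came from.
+diagnostic = tb_df.merge(
+    right=census_2010s_df[["2019"]],
+    left_index=True, right_index=True,
+    how="outer", indicator=True,
+)
+diagnostic[diagnostic["_merge"] != "both"].head()
+\end{verbatim}
+
+Either way, the root cause here is a mismatch in the key values
+themselves, which we turn to next.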
+ +The rolled records in our census \texttt{DataFrame} have different +\texttt{Geographic\ Area} fields, which was the key we merged on: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{census\_2010s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllll@{}} +\toprule\noalign{} +& 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\ +Geographic Area & & & & & & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +United States & 309321666 & 311556874 & 313830990 & 315993715 & +318301008 & 320635163 & 322941311 & 324985539 & 326687501 & 328239523 \\ +Northeast & 55380134 & 55604223 & 55775216 & 55901806 & 56006011 & +56034684 & 56042330 & 56059240 & 56046620 & 55982803 \\ +Midwest & 66974416 & 67157800 & 67336743 & 67560379 & 67745167 & +67860583 & 67987540 & 68126781 & 68236628 & 68329004 \\ +South & 114866680 & 116006522 & 117241208 & 118364400 & 119624037 & +120997341 & 122351760 & 123542189 & 124569433 & 125580448 \\ +West & 72100436 & 72788329 & 73477823 & 74167130 & 74925793 & 75742555 & +76559681 & 77257329 & 77834820 & 78347268 \\ +\end{longtable} + +The Census \texttt{DataFrame} has several rolled records. The aggregate +record we are looking for actually has the Geographic Area named +``United States''. + +One straightforward way to get the right merge is to rename the value +itself. Because we now have the Geographic Area index, we'll use +\texttt{df.rename()} +(\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rename.html}{documentation}): + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# rename rolled record for 2010s} +\NormalTok{census\_2010s\_df.rename(index}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}United States\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}Total\textquotesingle{}}\NormalTok{\}, inplace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{census\_2010s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllll@{}} +\toprule\noalign{} +& 2010 & 2011 & 2012 & 2013 & 2014 & 2015 & 2016 & 2017 & 2018 & 2019 \\ +Geographic Area & & & & & & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Total & 309321666 & 311556874 & 313830990 & 315993715 & 318301008 & +320635163 & 322941311 & 324985539 & 326687501 & 328239523 \\ +Northeast & 55380134 & 55604223 & 55775216 & 55901806 & 56006011 & +56034684 & 56042330 & 56059240 & 56046620 & 55982803 \\ +Midwest & 66974416 & 67157800 & 67336743 & 67560379 & 67745167 & +67860583 & 67987540 & 68126781 & 68236628 & 68329004 \\ +South & 114866680 & 116006522 & 117241208 & 118364400 & 119624037 & +120997341 & 122351760 & 123542189 & 124569433 & 125580448 \\ +West & 72100436 & 72788329 & 73477823 & 74167130 & 74925793 & 75742555 & +76559681 & 77257329 & 77834820 & 78347268 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# same, but for 2020s rename rolled record} +\NormalTok{census\_2020s\_df.rename(index}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}United States\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}Total\textquotesingle{}}\NormalTok{\}, inplace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{census\_2020s\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& 2020 & 2021 & 2022 \\ +Geographic Area & & & \\ +\midrule\noalign{} +\endhead 
+\bottomrule\noalign{} +\endlastfoot +Total & 331511512 & 332031554 & 333287557 \\ +Northeast & 57448898 & 57259257 & 57040406 \\ +Midwest & 68961043 & 68836505 & 68787595 \\ +South & 126450613 & 127346029 & 128716192 \\ +West & 78650958 & 78589763 & 78743364 \\ +\end{longtable} + +Next let's rerun our merge. Note the different chaining, because we are +now merging on indexes (\texttt{df.merge()} +\href{https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html}{documentation}). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{tb\_census\_df }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ tb\_df} +\NormalTok{ .merge(right}\OperatorTok{=}\NormalTok{census\_2010s\_df[[}\StringTok{"2019"}\NormalTok{]],} +\NormalTok{ left\_index}\OperatorTok{=}\VariableTok{True}\NormalTok{, right\_index}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{ .merge(right}\OperatorTok{=}\NormalTok{census\_2020s\_df[[}\StringTok{"2020"}\NormalTok{, }\StringTok{"2021"}\NormalTok{]],} +\NormalTok{ left\_index}\OperatorTok{=}\VariableTok{True}\NormalTok{, right\_index}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{)} +\NormalTok{tb\_census\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllll@{}} +\toprule\noalign{} +& TB cases 2019 & TB cases 2020 & TB cases 2021 & TB incidence 2019 & TB +incidence 2020 & TB incidence 2021 & 2019 & 2020 & 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Total & 8900 & 7173 & 7860 & 2.71 & 2.16 & 2.37 & 328239523 & 331511512 +& 332031554 \\ +Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 & 4903185 & 5031362 & +5049846 \\ +Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 & 731545 & 732923 & 734182 \\ +Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 & 7278717 & 7179943 & +7264877 \\ +Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 & 3017804 & 3014195 & +3028122 \\ +\end{longtable} + +Finally, let's recompute our incidences: + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# recompute incidence for all years} +\ControlFlowTok{for}\NormalTok{ year }\KeywordTok{in}\NormalTok{ [}\DecValTok{2019}\NormalTok{, }\DecValTok{2020}\NormalTok{, }\DecValTok{2021}\NormalTok{]:} +\NormalTok{ tb\_census\_df[}\SpecialStringTok{f"recompute incidence }\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{] }\OperatorTok{=}\NormalTok{ tb\_census\_df[}\SpecialStringTok{f"TB cases }\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{]}\OperatorTok{/}\NormalTok{tb\_census\_df[}\SpecialStringTok{f"}\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{]}\OperatorTok{*}\DecValTok{100000} +\NormalTok{tb\_census\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllllllllll@{}} +\toprule\noalign{} +& TB cases 2019 & TB cases 2020 & TB cases 2021 & TB incidence 2019 & TB +incidence 2020 & TB incidence 2021 & 2019 & 2020 & 2021 & recompute +incidence 2019 & recompute incidence 2020 & recompute incidence 2021 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Total & 8900 & 7173 & 7860 & 2.71 & 2.16 & 2.37 & 328239523 & 331511512 +& 332031554 & 2.71 & 2.16 & 2.37 \\ +Alabama & 87 & 72 & 92 & 1.77 & 1.43 & 1.83 & 4903185 & 5031362 & +5049846 & 1.77 & 1.43 & 1.82 \\ +Alaska & 58 & 58 & 58 & 7.91 & 7.92 & 7.92 & 731545 & 732923 & 734182 & +7.93 & 7.91 & 7.90 \\ +Arizona & 183 & 136 & 129 & 2.51 & 1.89 & 1.77 & 7278717 & 7179943 & +7264877 & 2.51 & 1.89 
& 1.78 \\ +Arkansas & 64 & 59 & 69 & 2.12 & 1.96 & 2.28 & 3017804 & 3014195 & +3028122 & 2.12 & 1.96 & 2.28 \\ +\end{longtable} + +We reproduced the total U.S. incidences correctly! + +We're almost there. Let's revisit the quote: + +\begin{quote} +Reported TB incidence (cases per 100,000 persons) increased +\textbf{9.4\%}, from \textbf{2.2} during 2020 to \textbf{2.4} during +2021 but was lower than incidence during 2019 (2.7). Increases occurred +among both U.S.-born and non--U.S.-born persons. +\end{quote} + +Recall that percent change from \(A\) to \(B\) is computed as +\(\text{percent change} = \frac{B - A}{A} \times 100\). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{incidence\_2020 }\OperatorTok{=}\NormalTok{ tb\_census\_df.loc[}\StringTok{\textquotesingle{}Total\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}recompute incidence 2020\textquotesingle{}}\NormalTok{]} +\NormalTok{incidence\_2020} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(2.1637257652759883) +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{incidence\_2021 }\OperatorTok{=}\NormalTok{ tb\_census\_df.loc[}\StringTok{\textquotesingle{}Total\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}recompute incidence 2021\textquotesingle{}}\NormalTok{]} +\NormalTok{incidence\_2021} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(2.3672448914298068) +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{difference }\OperatorTok{=}\NormalTok{ (incidence\_2021 }\OperatorTok{{-}}\NormalTok{ incidence\_2020)}\OperatorTok{/}\NormalTok{incidence\_2020 }\OperatorTok{*} \DecValTok{100} +\NormalTok{difference} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(9.405957511804143) +\end{verbatim} + +\section{EDA Demo 2: Mauna Loa CO2 Data -- A Lesson in Data +Faithfulness}\label{eda-demo-2-mauna-loa-co2-data-a-lesson-in-data-faithfulness} + +\href{https://gml.noaa.gov/ccgg/trends/data.html}{Mauna Loa Observatory} +has been monitoring CO2 concentrations since 1958. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2\_file }\OperatorTok{=} \StringTok{"data/co2\_mm\_mlo.txt"} +\end{Highlighting} +\end{Shaded} + +Let's do some \textbf{EDA}!! + +\subsection{\texorpdfstring{Reading this file into +\texttt{Pandas}?}{Reading this file into Pandas?}}\label{reading-this-file-into-pandas} + +Let's instead check out this \texttt{.txt} file. Some questions to keep +in mind: Do we trust this file extension? What structure is it? + +Lines 71-78 (inclusive) are shown below: + +\begin{verbatim} +line number | file contents + +71 | # decimal average interpolated trend #days +72 | # date (season corr) +73 | 1958 3 1958.208 315.71 315.71 314.62 -1 +74 | 1958 4 1958.292 317.45 317.45 315.29 -1 +75 | 1958 5 1958.375 317.50 317.50 314.71 -1 +76 | 1958 6 1958.458 -99.99 317.10 314.85 -1 +77 | 1958 7 1958.542 315.86 315.86 314.98 -1 +78 | 1958 8 1958.625 314.93 314.93 315.94 -1 +\end{verbatim} + +Notice how: + +\begin{itemize} +\tightlist +\item + The values are separated by white space, possibly tabs. +\item + The data line up down the rows. For example, the month appears in 7th + to 8th position of each line. +\item + The 71st and 72nd lines in the file contain column headings split over + two lines. 
+\end{itemize} + +We can use~\texttt{read\_csv}~to read the data into a \texttt{pandas} +\texttt{DataFrame}, and we provide several arguments to specify that the +separators are white space, there is no header (\textbf{we will set our +own column names}), and to skip the first 72 rows of the file. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2 }\OperatorTok{=}\NormalTok{ pd.read\_csv(} +\NormalTok{ co2\_file, header }\OperatorTok{=} \VariableTok{None}\NormalTok{, skiprows }\OperatorTok{=} \DecValTok{72}\NormalTok{,} +\NormalTok{ sep }\OperatorTok{=} \VerbatimStringTok{r\textquotesingle{}\textbackslash{}s+\textquotesingle{}} \CommentTok{\#delimiter for continuous whitespace (stay tuned for regex next lecture))} +\NormalTok{)} +\NormalTok{co2.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1958 & 3 & 1958.21 & 315.71 & 315.71 & 314.62 & -1 \\ +1 & 1958 & 4 & 1958.29 & 317.45 & 317.45 & 315.29 & -1 \\ +2 & 1958 & 5 & 1958.38 & 317.50 & 317.50 & 314.71 & -1 \\ +3 & 1958 & 6 & 1958.46 & -99.99 & 317.10 & 314.85 & -1 \\ +4 & 1958 & 7 & 1958.54 & 315.86 & 315.86 & 314.98 & -1 \\ +\end{longtable} + +Congratulations! You've wrangled the data! + +\ldots But our columns aren't named. \textbf{We need to do more EDA.} + +\subsection{Exploring Variable Feature +Types}\label{exploring-variable-feature-types} + +The NOAA \href{https://gml.noaa.gov/ccgg/trends/}{webpage} might have +some useful tidbits (in this case it doesn't). + +Using this information, we'll rerun \texttt{pd.read\_csv}, but this time +with some \textbf{custom column names.} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2 }\OperatorTok{=}\NormalTok{ pd.read\_csv(} +\NormalTok{ co2\_file, header }\OperatorTok{=} \VariableTok{None}\NormalTok{, skiprows }\OperatorTok{=} \DecValTok{72}\NormalTok{,} +\NormalTok{ sep }\OperatorTok{=} \StringTok{\textquotesingle{}\textbackslash{}s+\textquotesingle{}}\NormalTok{, }\CommentTok{\#regex for continuous whitespace (next lecture)} +\NormalTok{ names }\OperatorTok{=}\NormalTok{ [}\StringTok{\textquotesingle{}Yr\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Mo\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}DecDate\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Int\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Trend\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Days\textquotesingle{}}\NormalTok{]} +\NormalTok{)} +\NormalTok{co2.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1958 & 3 & 1958.21 & 315.71 & 315.71 & 314.62 & -1 \\ +1 & 1958 & 4 & 1958.29 & 317.45 & 317.45 & 315.29 & -1 \\ +2 & 1958 & 5 & 1958.38 & 317.50 & 317.50 & 314.71 & -1 \\ +3 & 1958 & 6 & 1958.46 & -99.99 & 317.10 & 314.85 & -1 \\ +4 & 1958 & 7 & 1958.54 & 315.86 & 315.86 & 314.98 & -1 \\ +\end{longtable} + +\subsection{Visualizing CO2}\label{visualizing-co2} + +Scientific studies tend to have very clean data, right\ldots? Let's jump +right in and make a time series plot of CO2 monthly averages. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.lineplot(x}\OperatorTok{=}\StringTok{\textquotesingle{}DecDate\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, data}\OperatorTok{=}\NormalTok{co2)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-62-output-1.pdf} + +The code above uses the \texttt{seaborn} plotting library (abbreviated +\texttt{sns}). We will cover this in the Visualization lecture, but now +you don't need to worry about how it works! + +Yikes! Plotting the data uncovered a problem. The sharp vertical lines +suggest that we have some \textbf{missing values}. What happened here? + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1958 & 3 & 1958.21 & 315.71 & 315.71 & 314.62 & -1 \\ +1 & 1958 & 4 & 1958.29 & 317.45 & 317.45 & 315.29 & -1 \\ +2 & 1958 & 5 & 1958.38 & 317.50 & 317.50 & 314.71 & -1 \\ +3 & 1958 & 6 & 1958.46 & -99.99 & 317.10 & 314.85 & -1 \\ +4 & 1958 & 7 & 1958.54 & 315.86 & 315.86 & 314.98 & -1 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2.tail()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +733 & 2019 & 4 & 2019.29 & 413.32 & 413.32 & 410.49 & 26 \\ +734 & 2019 & 5 & 2019.38 & 414.66 & 414.66 & 411.20 & 28 \\ +735 & 2019 & 6 & 2019.46 & 413.92 & 413.92 & 411.58 & 27 \\ +736 & 2019 & 7 & 2019.54 & 411.77 & 411.77 & 411.43 & 23 \\ +737 & 2019 & 8 & 2019.62 & 409.95 & 409.95 & 411.84 & 29 \\ +\end{longtable} + +Some data have unusual values like -1 and -99.99. + +Let's check the description at the top of the file again. + +\begin{itemize} +\tightlist +\item + -1 signifies a missing value for the number of days \texttt{Days} the + equipment was in operation that month. +\item + -99.99 denotes a missing monthly average \texttt{Avg} +\end{itemize} + +How can we fix this? First, let's explore other aspects of our data. +Understanding our data will help us decide what to do with the missing +values. + +\subsection{Sanity Checks: Reasoning about the +data}\label{sanity-checks-reasoning-about-the-data} + +First, we consider the shape of the data. How many rows should we have? + +\begin{itemize} +\tightlist +\item + If chronological order, we should have one record per month. +\item + Data from March 1958 to August 2019. +\item + We should have \$ 12 \times (2019-1957) - 2 - 4 = 738 \$ records. +\end{itemize} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2.shape} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(738, 7) +\end{verbatim} + +Nice!! The number of rows (i.e.~records) match our expectations. + +Let's now check the quality of each feature. + +\subsection{\texorpdfstring{Understanding Missing Value 1: +\texttt{Days}}{Understanding Missing Value 1: Days}}\label{understanding-missing-value-1-days} + +\texttt{Days} is a time field, so let's analyze other time fields to see +if there is an explanation for missing values of days of operation. + +Let's start with \textbf{months}, \texttt{Mo}. + +Are we missing any records? The number of months should have 62 or 61 +instances (March 1957-August 2019). 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2[}\StringTok{"Mo"}\NormalTok{].value\_counts().sort\_index()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Mo +1 61 +2 61 +3 62 +4 62 +5 62 +6 62 +7 62 +8 62 +9 61 +10 61 +11 61 +12 61 +Name: count, dtype: int64 +\end{verbatim} + +As expected Jan, Feb, Sep, Oct, Nov, and Dec have 61 occurrences and the +rest 62. + +Next let's explore \textbf{days} \texttt{Days} itself, which is the +number of days that the measurement equipment worked. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.displot(co2[}\StringTok{\textquotesingle{}Days\textquotesingle{}}\NormalTok{])}\OperatorTok{;} +\NormalTok{plt.title(}\StringTok{"Distribution of days feature"}\NormalTok{)}\OperatorTok{;} \CommentTok{\# suppresses unneeded plotting output} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-67-output-1.pdf} + +In terms of data quality, a handful of months have averages based on +measurements taken on fewer than half the days. In addition, there are +nearly 200 missing values--\textbf{that's about 27\% of the data}! + +Finally, let's check the last time feature, \textbf{year} \texttt{Yr}. + +Let's check to see if there is any connection between missing-ness and +the year of the recording. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.scatterplot(x}\OperatorTok{=}\StringTok{"Yr"}\NormalTok{, y}\OperatorTok{=}\StringTok{"Days"}\NormalTok{, data}\OperatorTok{=}\NormalTok{co2)}\OperatorTok{;} +\NormalTok{plt.title(}\StringTok{"Day field by Year"}\NormalTok{)}\OperatorTok{;} \CommentTok{\# the ; suppresses output} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-68-output-1.pdf} + +\textbf{Observations}: + +\begin{itemize} +\tightlist +\item + All of the missing data are in the early years of operation. +\item + It appears there may have been problems with equipment in the mid to + late 80s. +\end{itemize} + +\textbf{Potential Next Steps}: + +\begin{itemize} +\tightlist +\item + Confirm these explanations through documentation about the historical + readings. +\item + Maybe drop the earliest recordings? However, we would want to delay + such action until after we have examined the time trends and assess + whether there are any potential problems. +\end{itemize} + +\subsection{\texorpdfstring{Understanding Missing Value 2: +\texttt{Avg}}{Understanding Missing Value 2: Avg}}\label{understanding-missing-value-2-avg} + +Next, let's return to the -99.99 values in \texttt{Avg} to analyze the +overall quality of the CO2 measurements. We'll plot a histogram of the +average CO2 measurements + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Histograms of average CO2 measurements} +\NormalTok{sns.displot(co2[}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{])}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-69-output-1.pdf} + +The non-missing values are in the 300-400 range (a regular range of CO2 +levels). + +We also see that there are only a few missing \texttt{Avg} values +(\textbf{\textless1\% of values}). 
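+One quick way to confirm the count is a boolean mask over the sentinel
+value (a minimal sketch, reusing the \texttt{co2} \texttt{DataFrame}
+from above):
+
+\begin{verbatim}
+# Count the months whose reported average is the -99.99 sentinel
+# (-99.99 is the only negative value that appears in Avg)
+(co2["Avg"] < 0).sum()    # 7 months, i.e., well under 1% of 738
+\end{verbatim}
+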
Let's examine all of them: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2[co2[}\StringTok{"Avg"}\NormalTok{] }\OperatorTok{\textless{}} \DecValTok{0}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +3 & 1958 & 6 & 1958.46 & -99.99 & 317.10 & 314.85 & -1 \\ +7 & 1958 & 10 & 1958.79 & -99.99 & 312.66 & 315.61 & -1 \\ +71 & 1964 & 2 & 1964.12 & -99.99 & 320.07 & 319.61 & -1 \\ +72 & 1964 & 3 & 1964.21 & -99.99 & 320.73 & 319.55 & -1 \\ +73 & 1964 & 4 & 1964.29 & -99.99 & 321.77 & 319.48 & -1 \\ +213 & 1975 & 12 & 1975.96 & -99.99 & 330.59 & 331.60 & 0 \\ +313 & 1984 & 4 & 1984.29 & -99.99 & 346.84 & 344.27 & 2 \\ +\end{longtable} + +There doesn't seem to be a pattern to these values, other than that most +records also were missing \texttt{Days} data. + +\subsection{\texorpdfstring{Drop, \texttt{NaN}, or Impute Missing +\texttt{Avg} +Data?}{Drop, NaN, or Impute Missing Avg Data?}}\label{drop-nan-or-impute-missing-avg-data} + +How should we address the invalid \texttt{Avg} data? + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Drop records +\item + Set to NaN +\item + Impute using some strategy +\end{enumerate} + +Remember we want to fix the following plot: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.lineplot(x}\OperatorTok{=}\StringTok{\textquotesingle{}DecDate\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, data}\OperatorTok{=}\NormalTok{co2)} +\NormalTok{plt.title(}\StringTok{"CO2 Average By Month"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-71-output-1.pdf} + +Since we are plotting \texttt{Avg} vs \texttt{DecDate}, we should just +focus on dealing with missing values for \texttt{Avg}. + +Let's consider a few options: 1. Drop those records 2. Replace -99.99 +with NaN 3. Substitute it with a likely value for the average CO2? + +What do you think are the pros and cons of each possible action? + +Let's examine each of these three options. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# 1. Drop missing values} +\NormalTok{co2\_drop }\OperatorTok{=}\NormalTok{ co2[co2[}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{] }\OperatorTok{\textgreater{}} \DecValTok{0}\NormalTok{]} +\NormalTok{co2\_drop.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1958 & 3 & 1958.21 & 315.71 & 315.71 & 314.62 & -1 \\ +1 & 1958 & 4 & 1958.29 & 317.45 & 317.45 & 315.29 & -1 \\ +2 & 1958 & 5 & 1958.38 & 317.50 & 317.50 & 314.71 & -1 \\ +4 & 1958 & 7 & 1958.54 & 315.86 & 315.86 & 314.98 & -1 \\ +5 & 1958 & 8 & 1958.62 & 314.93 & 314.93 & 315.94 & -1 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# 2. 
Replace NaN with {-}99.99} +\NormalTok{co2\_NA }\OperatorTok{=}\NormalTok{ co2.replace(}\OperatorTok{{-}}\FloatTok{99.99}\NormalTok{, np.nan)} +\NormalTok{co2\_NA.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1958 & 3 & 1958.21 & 315.71 & 315.71 & 314.62 & -1 \\ +1 & 1958 & 4 & 1958.29 & 317.45 & 317.45 & 315.29 & -1 \\ +2 & 1958 & 5 & 1958.38 & 317.50 & 317.50 & 314.71 & -1 \\ +3 & 1958 & 6 & 1958.46 & NaN & 317.10 & 314.85 & -1 \\ +4 & 1958 & 7 & 1958.54 & 315.86 & 315.86 & 314.98 & -1 \\ +\end{longtable} + +We'll also use a third version of the data. + +First, we note that the dataset already comes with a \textbf{substitute +value} for the -99.99. + +From the file description: + +\begin{quote} +The \texttt{interpolated} column includes average values from the +preceding column (\texttt{average}) and \textbf{interpolated values} +where data are missing. Interpolated values are computed in two +steps\ldots{} +\end{quote} + +The \texttt{Int} feature has values that exactly match those in +\texttt{Avg}, except when \texttt{Avg} is -99.99, and then a +\textbf{reasonable} estimate is used instead. + +So, the third version of our data will use the \texttt{Int} feature +instead of \texttt{Avg}. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# 3. Use interpolated column which estimates missing Avg values} +\NormalTok{co2\_impute }\OperatorTok{=}\NormalTok{ co2.copy()} +\NormalTok{co2\_impute[}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ co2[}\StringTok{\textquotesingle{}Int\textquotesingle{}}\NormalTok{]} +\NormalTok{co2\_impute.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& Yr & Mo & DecDate & Avg & Int & Trend & Days \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 1958 & 3 & 1958.21 & 315.71 & 315.71 & 314.62 & -1 \\ +1 & 1958 & 4 & 1958.29 & 317.45 & 317.45 & 315.29 & -1 \\ +2 & 1958 & 5 & 1958.38 & 317.50 & 317.50 & 314.71 & -1 \\ +3 & 1958 & 6 & 1958.46 & 317.10 & 317.10 & 314.85 & -1 \\ +4 & 1958 & 7 & 1958.54 & 315.86 & 315.86 & 314.98 & -1 \\ +\end{longtable} + +What's a \textbf{reasonable} estimate? + +To answer this question, let's zoom in on a short time period, say the +measurements in 1958 (where we know we have two missing values). 
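+A brief aside before the plots: even if the dataset had not shipped
+with an interpolated column, we could build a comparable estimate
+ourselves. The sketch below uses \texttt{pandas}' built-in linear
+interpolation, which is our own choice of method and not necessarily
+the procedure NOAA uses; the 1958 comparison plots follow right after.
+
+\begin{verbatim}
+# Sketch: a do-it-yourself imputation via linear interpolation between
+# neighboring months (co2_diy is just our own name for this copy)
+co2_diy = co2.copy()
+co2_diy["Avg"] = co2_diy["Avg"].replace(-99.99, np.nan).interpolate()
+\end{verbatim}
+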
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# results of plotting data in 1958} + +\KeywordTok{def}\NormalTok{ line\_and\_points(data, ax, title):} + \CommentTok{\# assumes single year, hence Mo} +\NormalTok{ ax.plot(}\StringTok{\textquotesingle{}Mo\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, data}\OperatorTok{=}\NormalTok{data)} +\NormalTok{ ax.scatter(}\StringTok{\textquotesingle{}Mo\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, data}\OperatorTok{=}\NormalTok{data)} +\NormalTok{ ax.set\_xlim(}\DecValTok{2}\NormalTok{, }\DecValTok{13}\NormalTok{)} +\NormalTok{ ax.set\_title(title)} +\NormalTok{ ax.set\_xticks(np.arange(}\DecValTok{3}\NormalTok{, }\DecValTok{13}\NormalTok{))} + +\KeywordTok{def}\NormalTok{ data\_year(data, year):} + \ControlFlowTok{return}\NormalTok{ data[data[}\StringTok{"Yr"}\NormalTok{] }\OperatorTok{==} \DecValTok{1958}\NormalTok{]} + +\CommentTok{\# uses matplotlib subplots} +\CommentTok{\# you may see more next week; focus on output for now} +\NormalTok{fig, axes }\OperatorTok{=}\NormalTok{ plt.subplots(ncols }\OperatorTok{=} \DecValTok{3}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{12}\NormalTok{, }\DecValTok{4}\NormalTok{), sharey}\OperatorTok{=}\VariableTok{True}\NormalTok{)} + +\NormalTok{year }\OperatorTok{=} \DecValTok{1958} +\NormalTok{line\_and\_points(data\_year(co2\_drop, year), axes[}\DecValTok{0}\NormalTok{], title}\OperatorTok{=}\StringTok{"1. Drop Missing"}\NormalTok{)} +\NormalTok{line\_and\_points(data\_year(co2\_NA, year), axes[}\DecValTok{1}\NormalTok{], title}\OperatorTok{=}\StringTok{"2. Missing Set to NaN"}\NormalTok{)} +\NormalTok{line\_and\_points(data\_year(co2\_impute, year), axes[}\DecValTok{2}\NormalTok{], title}\OperatorTok{=}\StringTok{"3. Missing Interpolated"}\NormalTok{)} + +\NormalTok{fig.suptitle(}\SpecialStringTok{f"Monthly Averages for }\SpecialCharTok{\{}\NormalTok{year}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\NormalTok{plt.tight\_layout()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-75-output-1.pdf} + +In the big picture since there are only 7 \texttt{Avg} values missing +(\textbf{\textless1\%} of 738 months), any of these approaches would +work. + +However there is some appeal to \textbf{option C, Imputing}: + +\begin{itemize} +\tightlist +\item + Shows seasonal trends for CO2 +\item + We are plotting all months in our data as a line plot +\end{itemize} + +Let's replot our original figure with option 3: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.lineplot(x}\OperatorTok{=}\StringTok{\textquotesingle{}DecDate\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, data}\OperatorTok{=}\NormalTok{co2\_impute)} +\NormalTok{plt.title(}\StringTok{"CO2 Average By Month, Imputed"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-76-output-1.pdf} + +Looks pretty close to what we see on the NOAA +\href{https://gml.noaa.gov/ccgg/trends/}{website}! + +\subsection{Presenting the Data: A Discussion on Data +Granularity}\label{presenting-the-data-a-discussion-on-data-granularity} + +From the description: + +\begin{itemize} +\tightlist +\item + Monthly measurements are averages of average day measurements. +\item + The NOAA GML website has datasets for daily/hourly measurements too. 
+\end{itemize} + +The data you present depends on your research question. + +\textbf{How do CO2 levels vary by season?} + +\begin{itemize} +\tightlist +\item + You might want to keep average monthly data. +\end{itemize} + +\textbf{Are CO2 levels rising over the past 50+ years, consistent with +global warming predictions?} + +\begin{itemize} +\tightlist +\item + You might be happier with a \textbf{coarser granularity} of average + year data! +\end{itemize} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{co2\_year }\OperatorTok{=}\NormalTok{ co2\_impute.groupby(}\StringTok{\textquotesingle{}Yr\textquotesingle{}}\NormalTok{).mean()} +\NormalTok{sns.lineplot(x}\OperatorTok{=}\StringTok{\textquotesingle{}Yr\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}Avg\textquotesingle{}}\NormalTok{, data}\OperatorTok{=}\NormalTok{co2\_year)} +\NormalTok{plt.title(}\StringTok{"CO2 Average By Year"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{eda/eda_files/figure-pdf/cell-77-output-1.pdf} + +Indeed, we see a rise by nearly 100 ppm of CO2 since Mauna Loa began +recording in 1958. + +\section{Summary}\label{summary} + +We went over a lot of content this lecture; let's summarize the most +important points: + +\subsection{Dealing with Missing +Values}\label{dealing-with-missing-values} + +There are a few options we can take to deal with missing data: + +\begin{itemize} +\tightlist +\item + Drop missing records +\item + Keep \texttt{NaN} missing values +\item + Impute using an interpolated column +\end{itemize} + +\subsection{EDA and Data Wrangling}\label{eda-and-data-wrangling} + +There are several ways to approach EDA and Data Wrangling: + +\begin{itemize} +\tightlist +\item + Examine the \textbf{data and metadata}: what is the date, size, + organization, and structure of the data? +\item + Examine each \textbf{field/attribute/dimension} individually. +\item + Examine pairs of related dimensions (e.g.~breaking down grades by + major). +\item + Along the way, we can: + + \begin{itemize} + \tightlist + \item + \textbf{Visualize} or summarize the data. + \item + \textbf{Validate assumptions} about data and its collection process. + Pay particular attention to when the data was collected. + \item + Identify and \textbf{address anomalies}. + \item + Apply data transformations and corrections (we'll cover this in the + upcoming lecture). + \item + \textbf{Record everything you do!} Developing in Jupyter Notebook + promotes \emph{reproducibility} of your own work! + \end{itemize} +\end{itemize} + +\bookmarksetup{startatroot} + +\chapter{Regular Expressions}\label{regular-expressions} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Understand Python string manipulation, \texttt{pandas} \texttt{Series} + methods +\item + Parse and create regex, with a reference table +\item + Use vocabulary (closure, metacharacters, groups, etc.) 
to describe + regex metacharacters +\end{itemize} + +\end{tcolorbox} + +\section{Why Work with Text?}\label{why-work-with-text} + +Last lecture, we learned of the difference between quantitative and +qualitative variable types. The latter includes string data --- the +primary focus of lecture 6. In this note, we'll discuss the necessary +tools to manipulate text: Python string manipulation and regular +expressions. + +There are two main reasons for working with text. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Canonicalization: Convert data that has multiple formats into a + standard form. + + \begin{itemize} + \tightlist + \item + By manipulating text, we can join tables with mismatched string + labels. + \end{itemize} +\item + Extract information into a new feature. + + \begin{itemize} + \tightlist + \item + For example, we can extract date and time features from text. + \end{itemize} +\end{enumerate} + +\section{Python String Methods}\label{python-string-methods} + +First, we'll introduce a few methods useful for string manipulation. The +following table includes a number of string operations supported by +Python and \texttt{pandas}. The Python functions operate on a single +string, while their equivalent in \texttt{pandas} are +\textbf{vectorized} --- they operate on a \texttt{Series} of string +data. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3333}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3889}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Operation +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Python +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +\texttt{Pandas} (\texttt{Series}) +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Transformation & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{s.lower()} +\item + \texttt{s.upper()} +\end{itemize} +\end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{ser.str.lower()} +\item + \texttt{ser.str.upper()} +\end{itemize} +\end{minipage} \\ +Replacement + Deletion & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{s.replace(\_)} +\end{itemize} +\end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{ser.str.replace(\_)} +\end{itemize} +\end{minipage} \\ +Split & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{s.split(\_)} +\end{itemize} +\end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{ser.str.split(\_)} +\end{itemize} +\end{minipage} \\ +Substring & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{s{[}1:4{]}} +\end{itemize} +\end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{ser.str{[}1:4{]}} +\end{itemize} +\end{minipage} \\ +Membership & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{\textquotesingle{}\_\textquotesingle{}\ in\ s} +\end{itemize} +\end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{ser.str.contains(\_)} +\end{itemize} +\end{minipage} \\ +Length & 
\begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{len(s)} +\end{itemize} +\end{minipage} & \begin{minipage}[t]{\linewidth}\raggedright +\begin{itemize} +\tightlist +\item + \texttt{ser.str.len()} +\end{itemize} +\end{minipage} \\ +\end{longtable} + +We'll discuss the differences between Python string functions and +\texttt{pandas} \texttt{Series} methods in the following section on +canonicalization. + +\subsection{Canonicalization}\label{canonicalization} + +Assume we want to merge the given tables. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} + +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{\textquotesingle{}data/county\_and\_state.csv\textquotesingle{}}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ county\_and\_state }\OperatorTok{=}\NormalTok{ pd.read\_csv(f)} + +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{\textquotesingle{}data/county\_and\_population.csv\textquotesingle{}}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ county\_and\_pop }\OperatorTok{=}\NormalTok{ pd.read\_csv(f)} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{display(county\_and\_state), display(county\_and\_pop)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& County & State \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & De Witt County & IL \\ +1 & Lac qui Parle County & MN \\ +2 & Lewis and Clark County & MT \\ +3 & St John the Baptist Parish & LS \\ +\end{longtable} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& County & Population \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & DeWitt & 16798 \\ +1 & Lac Qui Parle & 8067 \\ +2 & Lewis \& Clark & 55716 \\ +3 & St. John the Baptist & 43044 \\ +\end{longtable} + +Last time, we used a \textbf{primary key} and \textbf{foreign key} to +join two tables. While neither of these keys exist in our +\texttt{DataFrame}s, the \texttt{"County"} columns look similar enough. +Can we convert these columns into one standard, canonical form to merge +the two tables? + +\subsubsection{Canonicalization with Python String +Manipulation}\label{canonicalization-with-python-string-manipulation} + +The following function uses Python string manipulation to convert a +single county name into canonical form. It does so by eliminating +whitespace, punctuation, and unnecessary text. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ canonicalize\_county(county\_name):} + \ControlFlowTok{return}\NormalTok{ (} +\NormalTok{ county\_name} +\NormalTok{ .lower()} +\NormalTok{ .replace(}\StringTok{\textquotesingle{} \textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ .replace(}\StringTok{\textquotesingle{}\&\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}and\textquotesingle{}}\NormalTok{)} +\NormalTok{ .replace(}\StringTok{\textquotesingle{}.\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ .replace(}\StringTok{\textquotesingle{}county\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ .replace(}\StringTok{\textquotesingle{}parish\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ )} + +\NormalTok{canonicalize\_county(}\StringTok{"St. John the Baptist"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'stjohnthebaptist' +\end{verbatim} + +We will use the \texttt{pandas} \texttt{map} function to apply the +\texttt{canonicalize\_county} function to every row in both +\texttt{DataFrame}s. In doing so, we'll create a new column in each +called \texttt{clean\_county\_python} with the canonical form. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{county\_and\_pop[}\StringTok{\textquotesingle{}clean\_county\_python\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ county\_and\_pop[}\StringTok{\textquotesingle{}County\textquotesingle{}}\NormalTok{].}\BuiltInTok{map}\NormalTok{(canonicalize\_county)} +\NormalTok{county\_and\_state[}\StringTok{\textquotesingle{}clean\_county\_python\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ county\_and\_state[}\StringTok{\textquotesingle{}County\textquotesingle{}}\NormalTok{].}\BuiltInTok{map}\NormalTok{(canonicalize\_county)} +\NormalTok{display(county\_and\_state), display(county\_and\_pop)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& County & State & clean\_county\_python \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & De Witt County & IL & dewitt \\ +1 & Lac qui Parle County & MN & lacquiparle \\ +2 & Lewis and Clark County & MT & lewisandclark \\ +3 & St John the Baptist Parish & LS & stjohnthebaptist \\ +\end{longtable} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& County & Population & clean\_county\_python \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & DeWitt & 16798 & dewitt \\ +1 & Lac Qui Parle & 8067 & lacquiparle \\ +2 & Lewis \& Clark & 55716 & lewisandclark \\ +3 & St. John the Baptist & 43044 & stjohnthebaptist \\ +\end{longtable} + +\subsubsection{Canonicalization with Pandas Series +Methods}\label{canonicalization-with-pandas-series-methods} + +Alternatively, we can use \texttt{pandas} \texttt{Series} methods to +create this standardized column. To do so, we must call the +\texttt{.str} attribute of our \texttt{Series} object prior to calling +any methods, like \texttt{.lower} and \texttt{.replace}. Notice how +these method names match their equivalent built-in Python string +functions. + +Chaining multiple \texttt{Series} methods in this manner eliminates the +need to use the \texttt{map} function (as this code is vectorized). 
+ +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ canonicalize\_county\_series(county\_series):} + \ControlFlowTok{return}\NormalTok{ (} +\NormalTok{ county\_series} +\NormalTok{ .}\BuiltInTok{str}\NormalTok{.lower()} +\NormalTok{ .}\BuiltInTok{str}\NormalTok{.replace(}\StringTok{\textquotesingle{} \textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ .}\BuiltInTok{str}\NormalTok{.replace(}\StringTok{\textquotesingle{}\&\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}and\textquotesingle{}}\NormalTok{)} +\NormalTok{ .}\BuiltInTok{str}\NormalTok{.replace(}\StringTok{\textquotesingle{}.\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ .}\BuiltInTok{str}\NormalTok{.replace(}\StringTok{\textquotesingle{}county\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ .}\BuiltInTok{str}\NormalTok{.replace(}\StringTok{\textquotesingle{}parish\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{)} +\NormalTok{ )} + +\NormalTok{county\_and\_pop[}\StringTok{\textquotesingle{}clean\_county\_pandas\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ canonicalize\_county\_series(county\_and\_pop[}\StringTok{\textquotesingle{}County\textquotesingle{}}\NormalTok{])} +\NormalTok{county\_and\_state[}\StringTok{\textquotesingle{}clean\_county\_pandas\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ canonicalize\_county\_series(county\_and\_state[}\StringTok{\textquotesingle{}County\textquotesingle{}}\NormalTok{])} +\NormalTok{display(county\_and\_pop), display(county\_and\_state)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& County & Population & clean\_county\_python & clean\_county\_pandas \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & DeWitt & 16798 & dewitt & dewitt \\ +1 & Lac Qui Parle & 8067 & lacquiparle & lacquiparle \\ +2 & Lewis \& Clark & 55716 & lewisandclark & lewisandclark \\ +3 & St. John the Baptist & 43044 & stjohnthebaptist & +stjohnthebaptist \\ +\end{longtable} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& County & State & clean\_county\_python & clean\_county\_pandas \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & De Witt County & IL & dewitt & dewitt \\ +1 & Lac qui Parle County & MN & lacquiparle & lacquiparle \\ +2 & Lewis and Clark County & MT & lewisandclark & lewisandclark \\ +3 & St John the Baptist Parish & LS & stjohnthebaptist & +stjohnthebaptist \\ +\end{longtable} + +\subsection{Extraction}\label{extraction} + +Extraction explores the idea of obtaining useful information from text +data. This will be particularily important in model building, which +we'll study in a few weeks. + +Say we want to read some data from a \texttt{.txt} file. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(}\StringTok{\textquotesingle{}data/log.txt\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}r\textquotesingle{}}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ log\_lines }\OperatorTok{=}\NormalTok{ f.readlines()} + +\NormalTok{log\_lines} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +['169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] "GET /stat141/Winter04/ HTTP/1.1" 200 2585 "http://anson.ucdavis.edu/courses/"\n', + '193.205.203.3 - - [2/Feb/2005:17:23:6 -0800] "GET /stat141/Notes/dim.html HTTP/1.0" 404 302 "http://eeyore.ucdavis.edu/stat141/Notes/session.html"\n', + '169.237.46.240 - "" [3/Feb/2006:10:18:37 -0800] "GET /stat141/homework/Solutions/hw1Sol.pdf HTTP/1.1"\n'] +\end{verbatim} + +Suppose we want to extract the day, month, year, hour, minutes, seconds, +and time zone. Unfortunately, these items are not in a fixed position +from the beginning of the string, so slicing by some fixed offset won't +work. + +Instead, we can use some clever thinking. Notice how the relevant +information is contained within a set of brackets, further separated by +\texttt{/} and \texttt{:}. We can hone in on this region of text, and +split the data on these characters. Python's built-in \texttt{.split} +function makes this easy. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{first }\OperatorTok{=}\NormalTok{ log\_lines[}\DecValTok{0}\NormalTok{] }\CommentTok{\# Only considering the first row of data} + +\NormalTok{pertinent }\OperatorTok{=}\NormalTok{ first.split(}\StringTok{"["}\NormalTok{)[}\DecValTok{1}\NormalTok{].split(}\StringTok{\textquotesingle{}]\textquotesingle{}}\NormalTok{)[}\DecValTok{0}\NormalTok{]} +\NormalTok{day, month, rest }\OperatorTok{=}\NormalTok{ pertinent.split(}\StringTok{\textquotesingle{}/\textquotesingle{}}\NormalTok{)} +\NormalTok{year, hour, minute, rest }\OperatorTok{=}\NormalTok{ rest.split(}\StringTok{\textquotesingle{}:\textquotesingle{}}\NormalTok{)} +\NormalTok{seconds, time\_zone }\OperatorTok{=}\NormalTok{ rest.split(}\StringTok{\textquotesingle{} \textquotesingle{}}\NormalTok{)} +\NormalTok{day, month, year, hour, minute, seconds, time\_zone} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +('26', 'Jan', '2014', '10', '47', '58', '-0800') +\end{verbatim} + +There are two problems with this code: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Python's built-in functions limit us to extract data one record at a + time, + + \begin{itemize} + \tightlist + \item + This can be resolved using the \texttt{map} function or + \texttt{pandas} \texttt{Series} methods. + \end{itemize} +\item + The code is quite verbose. + + \begin{itemize} + \tightlist + \item + This is a larger issue that is trickier to solve + \end{itemize} +\end{enumerate} + +In the next section, we'll introduce regular expressions - a tool that +solves problem 2. + +\section{RegEx Basics}\label{regex-basics} + +A \textbf{regular expression (``RegEx'')} is a sequence of characters +that specifies a search pattern. They are written to extract specific +information from text. Regular expressions are essentially part of a +smaller programming language embedded in Python, made available through +the \texttt{re} module. As such, they have a stand-alone syntax and +methods for various capabilities. + +Regular expressions are useful in many applications beyond data science. 
+For example, Social Security Numbers (SSNs) are often validated with +regular expressions. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{r"[0{-}9]\{3\}{-}[0{-}9]\{2\}{-}[0{-}9]\{4\}"} \CommentTok{\# Regular Expression Syntax} + +\CommentTok{\# 3 of any digit, then a dash,} +\CommentTok{\# then 2 of any digit, then a dash,} +\CommentTok{\# then 4 of any digit} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'[0-9]{3}-[0-9]{2}-[0-9]{4}' +\end{verbatim} + +There are a ton of resources to learn and experiment with regular +expressions. A few are provided below: + +\begin{itemize} +\tightlist +\item + \href{https://docs.python.org/3/howto/regex.html}{Official Regex + Guide} +\item + \href{https://ds100.org/sp22/resources/assets/hw/regex_reference.pdf}{Data + 100 Reference Sheet} +\item + \href{https://regex101.com/}{Regex101.com} + + \begin{itemize} + \tightlist + \item + Be sure to check Python under the category on the left. + \end{itemize} +\end{itemize} + +\subsection{Basics RegEx Syntax}\label{basics-regex-syntax} + +There are four basic operations with regular expressions. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.1875}} + >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.1771}} + >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.1458}} + >{\raggedright\arraybackslash}p{(\columnwidth - 8\tabcolsep) * \real{0.2083}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Operation +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Order +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Syntax Example +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Matches +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Doesn't Match +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\texttt{Or}: \texttt{\textbar{}} & 4 & AA\textbar BAAB & AA BAAB & every +other string \\ +\texttt{Concatenation} & 3 & AABAAB & AABAAB & every other string \\ +\texttt{Closure}: \texttt{*} (zero or more) & 2 & AB*A & AA ABBBBBBA & +AB ABABA \\ +\texttt{Group}: \texttt{()} (parenthesis) & 1 & A(A\textbar B)AAB (AB)*A +& AAAAB ABAAB A ABABABABA & every other string AA ABBA \\ +\end{longtable} + +Notice how these metacharacter operations are ordered. Rather than being +literal characters, these \textbf{metacharacters} manipulate adjacent +characters. \texttt{()} takes precedence, followed by \texttt{*}, and +finally \texttt{\textbar{}}. This allows us to differentiate between +very different regex commands like \texttt{AB*} and \texttt{(AB)*}. The +former reads ``\texttt{A} then zero or more copies of \texttt{B}'', +while the latter specifies ``zero or more copies of \texttt{AB}''. + +\subsubsection{Examples}\label{examples} + +\textbf{Question 1}: Give a regular expression that matches +\texttt{moon}, \texttt{moooon}, etc. Your expression should match any +even number of \texttt{o}s except zero (i.e.~don't match \texttt{mn}). + +\textbf{Answer 1}: \texttt{moo(oo)*n} + +\begin{itemize} +\tightlist +\item + Hardcoding \texttt{oo} before the capture group ensures that + \texttt{mn} is not matched. +\item + A capture group of \texttt{(oo)*} ensures the number of \texttt{o}'s + is even. 
+\end{itemize} + +\textbf{Question 2}: Using only basic operations, formulate a regex that +matches \texttt{muun}, \texttt{muuuun}, \texttt{moon}, \texttt{moooon}, +etc. Your expression should match any even number of \texttt{u}s or +\texttt{o}s except zero (i.e.~don't match \texttt{mn}). + +\textbf{Answer 2}: \texttt{m(uu(uu)*\textbar{}oo(oo)*)n} + +\begin{itemize} +\tightlist +\item + The leading \texttt{m} and trailing \texttt{n} ensures that only + strings beginning with \texttt{m} and ending with \texttt{n} are + matched. +\item + Notice how the outer capture group surrounds the \texttt{\textbar{}}. + + \begin{itemize} + \tightlist + \item + Consider the regex \texttt{m(uu(uu)*)\textbar{}(oo(oo)*)n}. This + incorrectly matches \texttt{muu} and \texttt{oooon}. + + \begin{itemize} + \tightlist + \item + Each OR clause is everything to the left and right of + \texttt{\textbar{}}. The incorrect solution matches only half of + the string, and ignores either the beginning \texttt{m} or + trailing \texttt{n}. + \item + A set of parenthesis must surround \texttt{\textbar{}}. That way, + each OR clause is everything to the left and right of + \texttt{\textbar{}} \textbf{within} the group. This ensures both + the beginning \texttt{m} \emph{and} trailing \texttt{n} are + matched. + \end{itemize} + \end{itemize} +\end{itemize} + +\section{RegEx Expanded}\label{regex-expanded} + +Provided below are more complex regular expression functions. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.4667}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.1714}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.1619}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.1810}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Operation +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Syntax Example +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Matches +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Doesn't Match +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\texttt{Any\ Character}: \texttt{.} (except newline) & .U.U.U. & CUMULUS +JUGULUM & SUCCUBUS TUMULTUOUS \\ +\texttt{Character\ Class}: \texttt{{[}{]}} (match one character in +\texttt{{[}{]}}) & {[}A-Za-z{]}{[}a-z{]}* & word Capitalized & camelCase +4illegal \\ +\texttt{Repeated\ "a"\ Times}: \texttt{\{a\}} & j{[}aeiou{]}\{3\}hn & +jaoehn jooohn & jhn jaeiouhn \\ +\texttt{Repeated\ "from\ a\ to\ b"\ Times}: \texttt{\{a,\ b\}} & +j{[}ou{]}\{1,2\}hn & john juohn & jhn jooohn \\ +\texttt{At\ Least\ One}: \texttt{+} & jo+hn & john joooooohn & jhn +jjohn \\ +\texttt{Zero\ or\ One}: \texttt{?} & joh?n & jon john & any other +string \\ +\end{longtable} + +A character class matches a single character in its class. These +characters can be hardcoded ------ in the case of \texttt{{[}aeiou{]}} +------ or shorthand can be specified to mean a range of characters. 
+Examples include: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \texttt{{[}A-Z{]}}: Any capitalized letter +\item + \texttt{{[}a-z{]}}: Any lowercase letter +\item + \texttt{{[}0-9{]}}: Any single digit +\item + \texttt{{[}A-Za-z{]}}: Any capitalized of lowercase letter +\item + \texttt{{[}A-Za-z0-9{]}}: Any capitalized or lowercase letter or + single digit +\end{enumerate} + +\subsubsection{Examples}\label{examples-1} + +Let's analyze a few examples of complex regular expressions. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.4722}} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.4722}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Matches +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Does Not Match +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\begin{minipage}[t]{\linewidth}\raggedright +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \texttt{.*SPB.*} +\end{enumerate} +\end{minipage} & \\ +RASPBERRY SPBOO & SUBSPACE SUBSPECIES \\ +\begin{minipage}[t]{\linewidth}\raggedright +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\setcounter{enumi}{1} +\tightlist +\item + \texttt{{[}0-9{]}\{3\}-{[}0-9{]}\{2\}-{[}0-9{]}\{4\}} +\end{enumerate} +\end{minipage} & \\ +231-41-5121 573-57-1821 & 231415121 57-3571821 \\ +\begin{minipage}[t]{\linewidth}\raggedright +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\setcounter{enumi}{2} +\tightlist +\item + \texttt{{[}a-z{]}+@({[}a-z{]}+\textbackslash{}.)+(edu\textbar{}com)} +\end{enumerate} +\end{minipage} & \\ +horse@pizza.com horse@pizza.food.com & frank\_99@yahoo.com hug@cs \\ +\end{longtable} + +\textbf{Explanations} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \texttt{.*SPB.*} only matches strings that contain the substring + \texttt{SPB}. + + \begin{itemize} + \tightlist + \item + The \texttt{.*} metacharacter matches any amount of non-negative + characters. Newlines do not count.\\ + \end{itemize} +\item + This regular expression matches 3 of any digit, then a dash, then 2 of + any digit, then a dash, then 4 of any digit. + + \begin{itemize} + \tightlist + \item + You'll recognize this as the familiar Social Security Number regular + expression. + \end{itemize} +\item + Matches any email with a \texttt{com} or \texttt{edu} domain, where + all characters of the email are letters. + + \begin{itemize} + \tightlist + \item + At least one \texttt{.} must precede the domain name. Including a + backslash \texttt{\textbackslash{}} before any metacharacter (in + this case, the \texttt{.}) tells RegEx to match that character + exactly. + \end{itemize} +\end{enumerate} + +\section{Convenient RegEx}\label{convenient-regex} + +Here are a few more convenient regular expressions. 
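+You can sanity check any row of the next table interactively with
+Python's \texttt{re} module; for instance (a small sketch of our own,
+not from the original notes):
+
+\begin{verbatim}
+import re
+
+re.findall(r"\d+", "Data 100")      # ['100']
+re.findall(r"^ark", "ark two ark")  # ['ark'] (anchored at the start)
+re.findall(r"ark$", "dark")         # ['ark'] ('dark' ends in 'ark')
+\end{verbatim}
+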
+ +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.4667}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.1714}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.1619}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.1810}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Operation +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Syntax Example +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Matches +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Doesn't Match +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\texttt{built\ in\ character\ class} & \texttt{\textbackslash{}w+} +\texttt{\textbackslash{}d+} \texttt{\textbackslash{}s+} & Fawef\_03 +231123 \texttt{whitespace} & this person 423 people +\texttt{non-whitespace} \\ +\texttt{character\ class\ negation}: \texttt{{[}\^{}{]}} (everything +except the given characters) & {[}\^{}a-z{]}+. & PEPPERS3982 17211!↑å & +porch CLAmS \\ +\texttt{escape\ character}: \texttt{\textbackslash{}} (match the literal +next character) & cow\textbackslash.com & cow.com & cowscom \\ +\texttt{beginning\ of\ line}: \texttt{\^{}} & \^{}ark & ark two ark o +ark & dark \\ +\texttt{end\ of\ line}: \texttt{\$} & ark\$ & dark ark o ark & ark +two \\ +\texttt{lazy\ version\ of\ zero\ or\ more} : \texttt{*?} & 5.*?5 & 5005 +55 & 5005005 \\ +\end{longtable} + +\subsection{Greediness}\label{greediness} + +In order to fully understand the last operation in the table, we have to +discuss greediness. RegEx is greedy -- it will look for the longest +possible match in a string. To motivate this with an example, consider +the pattern +\texttt{\textless{}div\textgreater{}.*\textless{}/div\textgreater{}}. In +the sentence below, we would hope that the bolded portions would be +matched: + +``This is a +\textbf{\textless div\textgreater example\textless/div\textgreater{}} of +greediness +\textbf{\textless div\textgreater in\textless/div\textgreater{}} regular +expressions.'' + +However, in reality, RegEx captures far more of the sentence. The way +RegEx processes the text given that pattern is as follows: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + ``Look for the exact string \textless{}\div\textgreater{}'' +\item + Then, ``look for any character 0 or more times'' +\item + Then, ``look for the exact string \textless/div\textgreater{}'' +\end{enumerate} + +The result would be all the characters starting from the leftmost +\textless div\textgreater{} and the rightmost +\textless/div\textgreater{} (inclusive): + +``This is a +\textbf{\textless div\textgreater example\textless/div\textgreater{} of +greediness \textless div\textgreater in\textless/div\textgreater{}} +regular expressions.'' + +We can fix this by making our pattern non-greedy, +\texttt{\textless{}div\textgreater{}.*?\textless{}/div\textgreater{}}. +You can read up more in the documentation +\href{https://docs.python.org/3/howto/regex.html\#greedy-versus-non-greedy}{here}. + +\subsection{Examples}\label{examples-2} + +Let's revisit our earlier problem of extracting date/time data from the +given \texttt{.txt} files. Here is how the data looked. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{log\_lines[}\DecValTok{0}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] "GET /stat141/Winter04/ HTTP/1.1" 200 2585 "http://anson.ucdavis.edu/courses/"\n' +\end{verbatim} + +\textbf{Question}: Give a regular expression that matches everything +contained within and including the brackets - the day, month, year, +hour, minutes, seconds, and time zone. + +\textbf{Answer}: \texttt{\textbackslash{}{[}.*\textbackslash{}{]}} + +\begin{itemize} +\tightlist +\item + Notice how matching the literal \texttt{{[}} and \texttt{{]}} is + necessary. Therefore, an escape character \texttt{\textbackslash{}} is + required before both \texttt{{[}} and \texttt{{]}} --- otherwise these + metacharacters will match character classes. +\item + We need to match a particular format between \texttt{{[}} and + \texttt{{]}}. For this example, \texttt{.*} will suffice. +\end{itemize} + +\textbf{Alternative Solution}: +\texttt{\textbackslash{}{[}\textbackslash{}w+/\textbackslash{}w+/\textbackslash{}w+:\textbackslash{}w+:\textbackslash{}w+:\textbackslash{}w+\textbackslash{}s-\textbackslash{}w+\textbackslash{}{]}} + +\begin{itemize} +\tightlist +\item + This solution is much safer. + + \begin{itemize} + \tightlist + \item + Imagine the data between \texttt{{[}} and \texttt{{]}} was garbage - + \texttt{.*} will still match that. + \item + The alternate solution will only match data that follows the correct + format. + \end{itemize} +\end{itemize} + +\section{Regex in Python and Pandas (RegEx +Groups)}\label{regex-in-python-and-pandas-regex-groups} + +\subsection{Canonicalization}\label{canonicalization-1} + +\subsubsection{Canonicalization with +RegEx}\label{canonicalization-with-regex} + +Earlier in this note, we examined the process of canonicalization using +\texttt{python} string manipulation and \texttt{pandas} \texttt{Series} +methods. However, we mentioned this approach had a major flaw: our code +was unnecessarily verbose. Equipped with our knowledge of regular +expressions, let's fix this. + +To do so, we need to understand a few functions in the \texttt{re} +module. The first of these is the substitute function: +\texttt{re.sub(pattern,\ rep1,\ text)}. It behaves similarly to +\texttt{python}'s built-in \texttt{.replace} function, and returns text +with all instances of \texttt{pattern} replaced by \texttt{rep1}. + +The regular expression here removes text surrounded by +\texttt{\textless{}\textgreater{}} (also known as HTML tags). + +In order, the pattern matches \ldots{} 1. a single \texttt{\textless{}} +2. any character that is not a \texttt{\textgreater{}} : div, td +valign\ldots, /td, /div 3. a single \texttt{\textgreater{}} + +Any substring in \texttt{text} that fulfills all three conditions will +be replaced by \texttt{\textquotesingle{}\textquotesingle{}}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ re} + +\NormalTok{text }\OperatorTok{=} \StringTok{"\textless{}div\textgreater{}\textless{}td valign=\textquotesingle{}top\textquotesingle{}\textgreater{}Moo\textless{}/td\textgreater{}\textless{}/div\textgreater{}"} +\NormalTok{pattern }\OperatorTok{=} \VerbatimStringTok{r"\textless{}[\^{}\textgreater{}]+\textgreater{}"} +\NormalTok{re.sub(pattern, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{, text) } +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'Moo' +\end{verbatim} + +Notice the \texttt{r} preceding the regular expression pattern; this +specifies the regular expression is a raw string. Raw strings do not +recognize escape sequences (i.e., the Python newline metacharacter +\texttt{\textbackslash{}n}). This makes them useful for regular +expressions, which often contain literal \texttt{\textbackslash{}} +characters. + +In other words, don't forget to tag your RegEx with an \texttt{r}. + +\subsubsection{\texorpdfstring{Canonicalization with +\texttt{pandas}}{Canonicalization with pandas}}\label{canonicalization-with-pandas} + +We can also use regular expressions with \texttt{pandas} \texttt{Series} +methods. This gives us the benefit of operating on an entire column of +data as opposed to a single value. The code is simple: +\texttt{ser.str.replace(pattern,\ repl,\ regex=True}). + +Consider the following \texttt{DataFrame} \texttt{html\_data} with a +single column. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{data }\OperatorTok{=}\NormalTok{ \{}\StringTok{"HTML"}\NormalTok{: [}\StringTok{"\textless{}div\textgreater{}\textless{}td valign=\textquotesingle{}top\textquotesingle{}\textgreater{}Moo\textless{}/td\textgreater{}\textless{}/div\textgreater{}"}\NormalTok{, }\OperatorTok{\textbackslash{}} + \StringTok{"\textless{}a href=\textquotesingle{}http://ds100.org\textquotesingle{}\textgreater{}Link\textless{}/a\textgreater{}"}\NormalTok{, }\OperatorTok{\textbackslash{}} + \StringTok{"\textless{}b\textgreater{}Bold text\textless{}/b\textgreater{}"}\NormalTok{]\}} +\NormalTok{html\_data }\OperatorTok{=}\NormalTok{ pd.DataFrame(data)} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{html\_data} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& HTML \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & \textless div\textgreater\textless td +valign=\textquotesingle top\textquotesingle\textgreater Moo\textless/td\textgreater\textless/div\textgreater{} \\ +1 & \textless a +href=\textquotesingle http://ds100.org\textquotesingle\textgreater Link\textless/a\textgreater{} \\ +2 & \textless b\textgreater Bold text\textless/b\textgreater{} \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pattern }\OperatorTok{=} \VerbatimStringTok{r"\textless{}[\^{}\textgreater{}]+\textgreater{}"} +\NormalTok{html\_data[}\StringTok{\textquotesingle{}HTML\textquotesingle{}}\NormalTok{].}\BuiltInTok{str}\NormalTok{.replace(pattern, }\StringTok{\textquotesingle{}\textquotesingle{}}\NormalTok{, regex}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 Moo +1 Link +2 Bold text +Name: HTML, dtype: object +\end{verbatim} + +\subsection{Extraction}\label{extraction-1} + +\subsubsection{Extraction with RegEx}\label{extraction-with-regex} + +Just like with canonicalization, the \texttt{re} module provides +capability to extract relevant text from a 
string: +\texttt{re.findall(pattern,\ text)}. This function returns a list of all +matches to \texttt{pattern}. + +Using the familiar regular expression for Social Security Numbers: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{text }\OperatorTok{=} \StringTok{"My social security number is 123{-}45{-}6789 bro, or maybe it’s 321{-}45{-}6789."} +\NormalTok{pattern }\OperatorTok{=} \VerbatimStringTok{r"[0{-}9]}\SpecialCharTok{\{3\}}\VerbatimStringTok{{-}[0{-}9]}\SpecialCharTok{\{2\}}\VerbatimStringTok{{-}[0{-}9]}\SpecialCharTok{\{4\}}\VerbatimStringTok{"} +\NormalTok{re.findall(pattern, text) } +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +['123-45-6789', '321-45-6789'] +\end{verbatim} + +\subsubsection{\texorpdfstring{Extraction with +\texttt{pandas}}{Extraction with pandas}}\label{extraction-with-pandas} + +\texttt{pandas} similarily provides extraction functionality on a +\texttt{Series} of data: \texttt{ser.str.findall(pattern)} + +Consider the following \texttt{DataFrame} \texttt{ssn\_data}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{data }\OperatorTok{=}\NormalTok{ \{}\StringTok{"SSN"}\NormalTok{: [}\StringTok{"987{-}65{-}4321"}\NormalTok{, }\StringTok{"forty"}\NormalTok{, }\OperatorTok{\textbackslash{}} + \StringTok{"123{-}45{-}6789 bro or 321{-}45{-}6789"}\NormalTok{,} + \StringTok{"999{-}99{-}9999"}\NormalTok{]\}} +\NormalTok{ssn\_data }\OperatorTok{=}\NormalTok{ pd.DataFrame(data)} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{ssn\_data} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& SSN \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 987-65-4321 \\ +1 & forty \\ +2 & 123-45-6789 bro or 321-45-6789 \\ +3 & 999-99-9999 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{ssn\_data[}\StringTok{"SSN"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.findall(pattern)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0 [987-65-4321] +1 [] +2 [123-45-6789, 321-45-6789] +3 [999-99-9999] +Name: SSN, dtype: object +\end{verbatim} + +This function returns a list for every row containing the pattern +matches in a given string. + +As you may expect, there are similar \texttt{pandas} equivalents for +other \texttt{re} functions as well. \texttt{Series.str.extract} takes +in a pattern and returns a \texttt{DataFrame} of each capture group's +first match in the string. In contrast, \texttt{Series.str.extractall} +returns a multi-indexed \texttt{DataFrame} of all matches for each +capture group. 
You can see the difference in the outputs below: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pattern\_cg }\OperatorTok{=} \VerbatimStringTok{r"([0{-}9]}\SpecialCharTok{\{3\}}\VerbatimStringTok{){-}([0{-}9]}\SpecialCharTok{\{2\}}\VerbatimStringTok{){-}([0{-}9]}\SpecialCharTok{\{4\}}\VerbatimStringTok{)"} +\NormalTok{ssn\_data[}\StringTok{"SSN"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.extract(pattern\_cg)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& 0 & 1 & 2 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 987 & 65 & 4321 \\ +1 & NaN & NaN & NaN \\ +2 & 123 & 45 & 6789 \\ +3 & 999 & 99 & 9999 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{ssn\_data[}\StringTok{"SSN"}\NormalTok{].}\BuiltInTok{str}\NormalTok{.extractall(pattern\_cg)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& & 0 & 1 & 2 \\ +& match & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 0 & 987 & 65 & 4321 \\ +\multirow{2}{=}{2} & 0 & 123 & 45 & 6789 \\ +& 1 & 321 & 45 & 6789 \\ +3 & 0 & 999 & 99 & 9999 \\ +\end{longtable} + +\subsection{Regular Expression Capture +Groups}\label{regular-expression-capture-groups} + +Earlier we used parentheses \texttt{(} \texttt{)} to specify the highest +order of operation in regular expressions. However, they have another +meaning; parentheses are often used to represent \textbf{capture +groups}. Capture groups are essentially, a set of smaller regular +expressions that match multiple substrings in text data. + +Let's take a look at an example. + +\subsubsection{Example 1}\label{example-1} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{text }\OperatorTok{=} \StringTok{"Observations: 03:04:53 {-} Horse awakens. }\CharTok{\textbackslash{}} +\StringTok{ 03:05:14 {-} Horse goes back to sleep."} +\end{Highlighting} +\end{Shaded} + +Say we want to capture all occurences of time data (hour, minute, and +second) as \emph{separate entities}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pattern\_1 }\OperatorTok{=} \VerbatimStringTok{r"(\textbackslash{}d\textbackslash{}d):(\textbackslash{}d\textbackslash{}d):(\textbackslash{}d\textbackslash{}d)"} +\NormalTok{re.findall(pattern\_1, text)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +[('03', '04', '53'), ('03', '05', '14')] +\end{verbatim} + +Notice how the given pattern has 3 capture groups, each specified by the +regular expression \texttt{(\textbackslash{}d\textbackslash{}d)}. We +then use \texttt{re.findall} to return these capture groups, each as +tuples containing 3 matches. + +These regular expression capture groups can be different. We can use the +\texttt{(\textbackslash{}d\{2\})} shorthand to extract the same data. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pattern\_2 }\OperatorTok{=} \VerbatimStringTok{r"(\textbackslash{}d\textbackslash{}d):(\textbackslash{}d\textbackslash{}d):(\textbackslash{}d}\SpecialCharTok{\{2\}}\VerbatimStringTok{)"} +\NormalTok{re.findall(pattern\_2, text)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +[('03', '04', '53'), ('03', '05', '14')] +\end{verbatim} + +\subsubsection{Example 2}\label{example-2} + +With the notion of capture groups, convince yourself how the following +regular expression works. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{first }\OperatorTok{=}\NormalTok{ log\_lines[}\DecValTok{0}\NormalTok{]} +\NormalTok{first} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +'169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] "GET /stat141/Winter04/ HTTP/1.1" 200 2585 "http://anson.ucdavis.edu/courses/"\n' +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pattern }\OperatorTok{=} \VerbatimStringTok{r\textquotesingle{}\textbackslash{}[(\textbackslash{}d+)\textbackslash{}/(\textbackslash{}w+)\textbackslash{}/(\textbackslash{}d+):(\textbackslash{}d+):(\textbackslash{}d+):(\textbackslash{}d+) (.+)\textbackslash{}]\textquotesingle{}} +\NormalTok{day, month, year, hour, minute, second, time\_zone }\OperatorTok{=}\NormalTok{ re.findall(pattern, first)[}\DecValTok{0}\NormalTok{]} +\BuiltInTok{print}\NormalTok{(day, month, year, hour, minute, second, time\_zone)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +26 Jan 2014 10 47 58 -0800 +\end{verbatim} + +\section{Limitations of Regular +Expressions}\label{limitations-of-regular-expressions} + +Today, we explored the capabilities of regular expressions in data +wrangling with text data. However, there are a few things to be wary of. + +Writing regular expressions is like writing a program. + +\begin{itemize} +\tightlist +\item + Need to know the syntax well. +\item + Can be easier to write than to read. +\item + Can be difficult to debug. +\end{itemize} + +Regular expressions are terrible at certain types of problems: + +\begin{itemize} +\tightlist +\item + For parsing a hierarchical structure, such as JSON, use the + \texttt{json.load()} parser, not RegEx! +\item + Complex features (e.g.~valid email address). +\item + Counting (same number of instances of a and b). (impossible) +\item + Complex properties (palindromes, balanced parentheses). (impossible) +\end{itemize} + +Ultimately, the goal is not to memorize all regular expressions. Rather, +the aim is to: + +\begin{itemize} +\tightlist +\item + Understand what RegEx is capable of. +\item + Parse and create RegEx, with a reference table +\item + Use vocabulary (metacharacter, escape character, groups, etc.) to + describe regex metacharacters. +\item + Differentiate between (), {[}{]}, \{\} +\item + Design your own character classes with \d, \w, \s, + {[}\ldots-\ldots{]}, \^{}, etc. +\item + Use \texttt{python} and \texttt{pandas} RegEx methods. +\end{itemize} + +\bookmarksetup{startatroot} + +\chapter{Visualization I}\label{visualization-i} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Understand the theories behind effective visualizations and start to + generate plots of our own with \texttt{matplotlib} and + \texttt{seaborn}. +\item + Analyze histograms and identify the skewness, potential outliers, and + the mode. +\item + Use \texttt{boxplot} and \texttt{violinplot} to compare two + distributions. +\end{itemize} + +\end{tcolorbox} + +In our journey of the data science lifecycle, we have begun to explore +the vast world of exploratory data analysis. 
More recently, we learned how to pre-process data using various data
manipulation techniques. As we work towards understanding our data, there is
one key component missing from our arsenal --- the ability to visualize and
discern relationships in existing data.

These next two lectures will introduce you to various examples of data
visualizations and their underlying theory. In doing so, we'll motivate their
importance using real-world examples and plotting libraries.

\section{Visualizations in Data 8 and Data 100 (so
far)}\label{visualizations-in-data-8-and-data-100-so-far}

You've likely encountered several forms of data visualizations in your
studies. You may remember a few such examples from Data 8: line plots,
scatter plots, and histograms. Each of these served a unique purpose. For
example, line plots displayed how numerical quantities changed over time,
while histograms were useful in understanding a variable's distribution.

\emph{(Figures: an example line chart, scatter plot, and histogram from Data
8.)}

\section{Goals of Visualization}\label{goals-of-visualization}

Visualizations are useful for a number of reasons. In Data 100, we consider
two areas in particular:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  To broaden your understanding of the data. Summarizing trends visually
  before in-depth analysis is a key part of exploratory data analysis.
  Creating these graphs is a lightweight, iterative, and flexible process
  that helps us investigate relationships between variables.
\item
  To communicate results/conclusions to others. These visualizations are
  highly editorial, selective, and fine-tuned to achieve a communications
  goal, so be thoughtful and careful about their clarity, accessibility, and
  necessary context.
\end{enumerate}

Altogether, these goals emphasize the fact that visualizations aren't a
matter of making ``pretty'' pictures; we need to do a lot of thinking about
what stylistic choices communicate ideas most effectively.

This course note will focus on the first half of visualization topics in
Data 100. The goal here is to understand how to choose the ``right'' plot
depending on different variable types and, secondly, how to generate these
plots using code.

\section{An Overview of
Distributions}\label{an-overview-of-distributions}

A distribution describes both the set of values that a single variable can
take and the frequency with which each of those values occurs. For example,
if we're interested in the distribution of students across Data 100
discussion sections, the set of possible values is the list of discussion
sections (10-11am, 11-12pm, etc.), and the frequency of each value is the
number of students enrolled in that section. In other words, we're
interested in how a variable is distributed across its possible values.
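To make this concrete, here is a minimal, hypothetical sketch of the
discussion-section example in \texttt{pandas} (the section labels and
enrollments below are made up purely for illustration):

\begin{verbatim}
import pandas as pd

# Hypothetical toy data: the discussion section attended by each of six students
sections = pd.Series(["10-11am", "11-12pm", "10-11am", "12-1pm",
                      "11-12pm", "10-11am"])

print(sections.value_counts())                # raw counts: sum to 6 students
print(sections.value_counts(normalize=True))  # proportions: sum to 1 (100%)
\end{verbatim}

Note what the two outputs sum to in each case.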
Therefore, distributions must +satisfy two properties: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + The total frequency of all categories must sum to 100\% +\item + Total count should sum to the total number of datapoints if we're + using raw counts. +\end{enumerate} + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Not a Valid Distribution +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Valid Distribution +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +& \\ +This is not a valid distribution since individuals can be associated +with more than one category and the bar values demonstrate values in +minutes and not probability. & This example satisfies the two properties +of distributions, so it is a valid distribution. \\ +\end{longtable} + +\section{Variable Types Should Inform Plot +Choice}\label{variable-types-should-inform-plot-choice} + +Different plots are more or less suited for displaying particular types +of variables, laid out in the diagram below: + +The first step of any visualization is to identify the type(s) of +variables we're working with. From here, we can select an appropriate +plot type: + +\section{Qualitative Variables: Bar +Plots}\label{qualitative-variables-bar-plots} + +A \textbf{bar plot} is one of the most common ways of displaying the +\textbf{distribution} of a \textbf{qualitative} (categorical) variable. +The length of a bar plot encodes the frequency of a category; the width +encodes no useful information. The color \emph{could} indicate a +sub-category, but this is not necessarily the case. + +Let's contextualize this in an example. We will use the World Bank +dataset (\texttt{wb}) in our analysis. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} + +\NormalTok{wb }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/world\_bank.csv"}\NormalTok{, index\_col}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\NormalTok{wb.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllllllllllllll@{}} +\toprule\noalign{} +& Continent & Country & Primary completion rate: Male: \% of relevant +age group: 2015 & Primary completion rate: Female: \% of relevant age +group: 2015 & Lower secondary completion rate: Male: \% of relevant age +group: 2015 & Lower secondary completion rate: Female: \% of relevant +age group: 2015 & Youth literacy rate: Male: \% of ages 15-24: 2005-14 & +Youth literacy rate: Female: \% of ages 15-24: 2005-14 & Adult literacy +rate: Male: \% ages 15 and older: 2005-14 & Adult literacy rate: Female: +\% ages 15 and older: 2005-14 & ... 
& Access to improved sanitation +facilities: \% of population: 1990 & Access to improved sanitation +facilities: \% of population: 2015 & Child immunization rate: Measles: +\% of children ages 12-23 months: 2015 & Child immunization rate: DTP3: +\% of children ages 12-23 months: 2015 & Children with acute respiratory +infection taken to health provider: \% of children under age 5 with ARI: +2009-2016 & Children with diarrhea who received oral rehydration and +continuous feeding: \% of children under age 5 with diarrhea: 2009-2016 +& Children sleeping under treated bed nets: \% of children under age 5: +2009-2016 & Children with fever receiving antimalarial drugs: \% of +children under age 5 with fever: 2009-2016 & Tuberculosis: Treatment +success rate: \% of new cases: 2014 & Tuberculosis: Cases detection +rate: \% of new estimated cases: 2015 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Africa & Algeria & 106.0 & 105.0 & 68.0 & 85.0 & 96.0 & 92.0 & 83.0 +& 68.0 & ... & 80.0 & 88.0 & 95.0 & 95.0 & 66.0 & 42.0 & NaN & NaN & +88.0 & 80.0 \\ +1 & Africa & Angola & NaN & NaN & NaN & NaN & 79.0 & 67.0 & 82.0 & 60.0 +& ... & 22.0 & 52.0 & 55.0 & 64.0 & NaN & NaN & 25.9 & 28.3 & 34.0 & +64.0 \\ +2 & Africa & Benin & 83.0 & 73.0 & 50.0 & 37.0 & 55.0 & 31.0 & 41.0 & +18.0 & ... & 7.0 & 20.0 & 75.0 & 79.0 & 23.0 & 33.0 & 72.7 & 25.9 & 89.0 +& 61.0 \\ +3 & Africa & Botswana & 98.0 & 101.0 & 86.0 & 87.0 & 96.0 & 99.0 & 87.0 +& 89.0 & ... & 39.0 & 63.0 & 97.0 & 95.0 & NaN & NaN & NaN & NaN & 77.0 +& 62.0 \\ +5 & Africa & Burundi & 58.0 & 66.0 & 35.0 & 30.0 & 90.0 & 88.0 & 89.0 & +85.0 & ... & 42.0 & 48.0 & 93.0 & 94.0 & 55.0 & 43.0 & 53.8 & 25.4 & +91.0 & 51.0 \\ +\end{longtable} + +We can visualize the distribution of the \texttt{Continent} column using +a bar plot. There are a few ways to do this. + +\subsection{Plotting in Pandas}\label{plotting-in-pandas} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{wb[}\StringTok{\textquotesingle{}Continent\textquotesingle{}}\NormalTok{].value\_counts().plot(kind}\OperatorTok{=}\StringTok{\textquotesingle{}bar\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-3-output-1.pdf} + +Recall that \texttt{.value\_counts()} returns a \texttt{Series} with the +total count of each unique value. We call +\texttt{.plot(kind=\textquotesingle{}bar\textquotesingle{})} on this +result to visualize these counts as a bar plot. + +Plotting methods in \texttt{pandas} are the least preferred and not +supported in Data 100, as their functionality is limited. Instead, +future examples will focus on other libraries built specifically for +visualizing data. The most well-known library here is +\texttt{matplotlib}. 
+ +\subsection{Plotting in Matplotlib}\label{plotting-in-matplotlib} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt }\CommentTok{\# matplotlib is typically given the alias plt} + +\NormalTok{continent }\OperatorTok{=}\NormalTok{ wb[}\StringTok{\textquotesingle{}Continent\textquotesingle{}}\NormalTok{].value\_counts()} +\NormalTok{plt.bar(continent.index, continent)} +\NormalTok{plt.xlabel(}\StringTok{\textquotesingle{}Continent\textquotesingle{}}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{\textquotesingle{}Count\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-4-output-1.pdf} + +While more code is required to achieve the same result, +\texttt{matplotlib} is often used over \texttt{pandas} for its ability +to plot more complex visualizations, some of which are discussed +shortly. + +However, note how we needed to label the axes with \texttt{plt.xlabel} +and \texttt{plt.ylabel}, as \texttt{matplotlib} does not support +automatic axis labeling. To get around these inconveniences, we can use +a more efficient plotting library: \texttt{seaborn}. + +\subsection{\texorpdfstring{Plotting in +\texttt{Seaborn}}{Plotting in Seaborn}}\label{plotting-in-seaborn} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns }\CommentTok{\# seaborn is typically given the alias sns} +\NormalTok{sns.countplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{\textquotesingle{}Continent\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-5-output-1.pdf} + +In contrast to \texttt{matplotlib}, the general structure of a +\texttt{seaborn} call involves passing in an entire \texttt{DataFrame}, +and then specifying what column(s) to plot. \texttt{seaborn.countplot} +both counts and visualizes the number of unique values in a given +column. This column is specified by the \texttt{x} argument to +\texttt{sns.countplot}, while the \texttt{DataFrame} is specified by the +\texttt{data} argument. + +For the vast majority of visualizations, \texttt{seaborn} is far more +concise and aesthetically pleasing than \texttt{matplotlib}. However, +the color scheme of this particular bar plot is arbitrary - it encodes +no additional information about the categories themselves. This is not +always true; color may signify meaningful detail in other +visualizations. We'll explore this more in-depth during the next +lecture. + +By now, you'll have noticed that each of these plotting libraries have a +very different syntax. As with \texttt{pandas}, we'll teach you the +important methods in \texttt{matplotlib} and \texttt{seaborn}, but +you'll learn more through documentation. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \href{https://matplotlib.org/stable/index.html}{Matplotlib + Documentation} +\item + \href{https://seaborn.pydata.org/}{Seaborn Documentation} +\end{enumerate} + +\section{Distributions of Quantitative +Variables}\label{distributions-of-quantitative-variables} + +Revisiting our example with the \texttt{wb} DataFrame, let's plot the +distribution of \texttt{Gross\ national\ income\ per\ capita}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{wb.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllllllllllllll@{}} +\toprule\noalign{} +& Continent & Country & Primary completion rate: Male: \% of relevant +age group: 2015 & Primary completion rate: Female: \% of relevant age +group: 2015 & Lower secondary completion rate: Male: \% of relevant age +group: 2015 & Lower secondary completion rate: Female: \% of relevant +age group: 2015 & Youth literacy rate: Male: \% of ages 15-24: 2005-14 & +Youth literacy rate: Female: \% of ages 15-24: 2005-14 & Adult literacy +rate: Male: \% ages 15 and older: 2005-14 & Adult literacy rate: Female: +\% ages 15 and older: 2005-14 & ... & Access to improved sanitation +facilities: \% of population: 1990 & Access to improved sanitation +facilities: \% of population: 2015 & Child immunization rate: Measles: +\% of children ages 12-23 months: 2015 & Child immunization rate: DTP3: +\% of children ages 12-23 months: 2015 & Children with acute respiratory +infection taken to health provider: \% of children under age 5 with ARI: +2009-2016 & Children with diarrhea who received oral rehydration and +continuous feeding: \% of children under age 5 with diarrhea: 2009-2016 +& Children sleeping under treated bed nets: \% of children under age 5: +2009-2016 & Children with fever receiving antimalarial drugs: \% of +children under age 5 with fever: 2009-2016 & Tuberculosis: Treatment +success rate: \% of new cases: 2014 & Tuberculosis: Cases detection +rate: \% of new estimated cases: 2015 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Africa & Algeria & 106.0 & 105.0 & 68.0 & 85.0 & 96.0 & 92.0 & 83.0 +& 68.0 & ... & 80.0 & 88.0 & 95.0 & 95.0 & 66.0 & 42.0 & NaN & NaN & +88.0 & 80.0 \\ +1 & Africa & Angola & NaN & NaN & NaN & NaN & 79.0 & 67.0 & 82.0 & 60.0 +& ... & 22.0 & 52.0 & 55.0 & 64.0 & NaN & NaN & 25.9 & 28.3 & 34.0 & +64.0 \\ +2 & Africa & Benin & 83.0 & 73.0 & 50.0 & 37.0 & 55.0 & 31.0 & 41.0 & +18.0 & ... & 7.0 & 20.0 & 75.0 & 79.0 & 23.0 & 33.0 & 72.7 & 25.9 & 89.0 +& 61.0 \\ +3 & Africa & Botswana & 98.0 & 101.0 & 86.0 & 87.0 & 96.0 & 99.0 & 87.0 +& 89.0 & ... & 39.0 & 63.0 & 97.0 & 95.0 & NaN & NaN & NaN & NaN & 77.0 +& 62.0 \\ +5 & Africa & Burundi & 58.0 & 66.0 & 35.0 & 30.0 & 90.0 & 88.0 & 89.0 & +85.0 & ... & 42.0 & 48.0 & 93.0 & 94.0 & 55.0 & 43.0 & 53.8 & 25.4 & +91.0 & 51.0 \\ +\end{longtable} + +How should we define our categories for this variable? In the previous +example, these were a few unique values of the \texttt{Continent} +column. If we use similar logic here, our categories are the different +numerical values contained in the +\texttt{Gross\ national\ income\ per\ capita} column. + +Under this assumption, let's plot this distribution using the +\texttt{seaborn.countplot} function. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.countplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{\textquotesingle{}Gross national income per capita, Atlas method: $: 2016\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-7-output-1.pdf} + +What happened? A bar plot (either \texttt{plt.bar} or +\texttt{sns.countplot}) will create a separate bar for each unique value +of a variable. 
With a continuous variable, we may not have a finite number of possible
values, which can lead to situations like the one above, where we would need
many, many bars to display each unique value.

Specifically, we can say this bar plot suffers from \textbf{overplotting}:
there are so many bars that we are unable to interpret the plot or gain any
meaningful insight.

To visualize the distribution of a continuous variable, we instead use one
of the following types of plots:

\begin{itemize}
\tightlist
\item
  Histogram
\item
  Box plot
\item
  Violin plot
\end{itemize}

\subsection{Box Plots and Violin
Plots}\label{box-plots-and-violin-plots}

Box plots and violin plots are two very similar kinds of visualizations.
Both display the distribution of a variable using information about
\textbf{quartiles}.

In a box plot, the width of the box at any point does not encode
meaning. In a violin plot, the width of the plot indicates the density
of the distribution at each possible value.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{sns.boxplot(data}\OperatorTok{=}\NormalTok{wb, y}\OperatorTok{=}\StringTok{\textquotesingle{}Gross national income per capita, Atlas method: $: 2016\textquotesingle{}}\NormalTok{)}\OperatorTok{;}
\end{Highlighting}
\end{Shaded}

\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-8-output-1.pdf}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{sns.violinplot(data}\OperatorTok{=}\NormalTok{wb, y}\OperatorTok{=}\StringTok{"Gross national income per capita, Atlas method: $: 2016"}\NormalTok{)}\OperatorTok{;}
\end{Highlighting}
\end{Shaded}

\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-9-output-1.pdf}

A quartile represents a 25\% portion of the data. We say that:

\begin{itemize}
\tightlist
\item
  The first quartile (Q1) represents the 25th percentile -- 25\% of the
  data is smaller than or equal to the first quartile.
\item
  The second quartile (Q2) represents the 50th percentile, also known as
  the median -- 50\% of the data is smaller than or equal to the second
  quartile.
\item
  The third quartile (Q3) represents the 75th percentile -- 75\% of the
  data is smaller than or equal to the third quartile.
\end{itemize}

This means that the middle 50\% of the data lies between the first and
third quartiles. The small example and the histogram below demonstrate this;
in the histogram, the three quartiles of the GDP-growth data are marked with
red vertical bars.
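As a quick warm-up before looking at real data, here is a small,
hypothetical example of computing the three quartiles with \texttt{numpy}
(the eleven values below are made up for illustration):

\begin{verbatim}
import numpy as np

# Hypothetical toy data: eleven measurements, sorted for readability
values = np.array([2, 4, 4, 5, 6, 7, 8, 8, 9, 10, 12])

q1, q2, q3 = np.percentile(values, [25, 50, 75])
print(q1, q2, q3)   # 4.5 7.0 8.5
print(q3 - q1)      # 4.0 -- the interquartile range (IQR), the spread of the middle 50%
\end{verbatim}

Half of the toy values fall between \(q_1 = 4.5\) and \(q_3 = 8.5\).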
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{gdp }\OperatorTok{=}\NormalTok{ wb[}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{]} +\NormalTok{gdp }\OperatorTok{=}\NormalTok{ gdp[}\OperatorTok{\textasciitilde{}}\NormalTok{gdp.isna()]} + +\NormalTok{q1, q2, q3 }\OperatorTok{=}\NormalTok{ np.percentile(gdp, [}\DecValTok{25}\NormalTok{, }\DecValTok{50}\NormalTok{, }\DecValTok{75}\NormalTok{])} + +\NormalTok{wb\_quartiles }\OperatorTok{=}\NormalTok{ wb.copy()} +\NormalTok{wb\_quartiles[}\StringTok{\textquotesingle{}category\textquotesingle{}}\NormalTok{] }\OperatorTok{=} \VariableTok{None} +\NormalTok{wb\_quartiles.loc[(wb\_quartiles[}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{] }\OperatorTok{\textless{}}\NormalTok{ q1) }\OperatorTok{|}\NormalTok{ (wb\_quartiles[}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{] }\OperatorTok{\textgreater{}}\NormalTok{ q3), }\StringTok{\textquotesingle{}category\textquotesingle{}}\NormalTok{] }\OperatorTok{=} \StringTok{\textquotesingle{}Outside of the middle 50\%\textquotesingle{}} +\NormalTok{wb\_quartiles.loc[(wb\_quartiles[}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{] }\OperatorTok{\textgreater{}}\NormalTok{ q1) }\OperatorTok{\&}\NormalTok{ (wb\_quartiles[}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{] }\OperatorTok{\textless{}}\NormalTok{ q3), }\StringTok{\textquotesingle{}category\textquotesingle{}}\NormalTok{] }\OperatorTok{=} \StringTok{\textquotesingle{}In the middle 50\%\textquotesingle{}} + +\NormalTok{sns.histplot(wb\_quartiles, x}\OperatorTok{=}\StringTok{"Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016"}\NormalTok{, hue}\OperatorTok{=}\StringTok{"category"}\NormalTok{)} +\NormalTok{sns.rugplot([q1, q2, q3], c}\OperatorTok{=}\StringTok{"firebrick"}\NormalTok{, lw}\OperatorTok{=}\DecValTok{6}\NormalTok{, height}\OperatorTok{=}\FloatTok{0.1}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-10-output-1.pdf} + +In a box plot, the lower extent of the box lies at Q1, while the upper +extent of the box lies at Q3. The horizontal line in the middle of the +box corresponds to Q2 (equivalently, the median). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.boxplot(data}\OperatorTok{=}\NormalTok{wb, y}\OperatorTok{=}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-11-output-1.pdf} + +The \textbf{whiskers} of a box-plot are the two points that lie at the +{[}\(1^{st}\) Quartile \(-\) (\(1.5\times\) IQR){]}, and the +{[}\(3^{rd}\) Quartile \(+\) (\(1.5\times\) IQR){]}. They are the lower +and upper ranges of ``normal'' data (the points excluding outliers). + +The different forms of information contained in a box plot can be +summarised as follows: + +A violin plot displays quartile information, albeit a bit more subtly +through smoothed density curves. 
Look closely at the center vertical bar +of the violin plot below; the three quartiles and ``whiskers'' are still +present! + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.violinplot(data}\OperatorTok{=}\NormalTok{wb, y}\OperatorTok{=}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-12-output-1.pdf} + +\subsection{Side-by-Side Box and Violin +Plots}\label{side-by-side-box-and-violin-plots} + +Plotting side-by-side box or violin plots allows us to compare +distributions across different categories. In other words, they enable +us to plot both a qualitative variable and a quantitative continuous +variable in one visualization. + +With \texttt{seaborn}, we can easily create side-by-side plots by +specifying both an x and y column. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.boxplot(data}\OperatorTok{=}\NormalTok{wb, x}\OperatorTok{=}\StringTok{"Continent"}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}Gross domestic product: }\SpecialCharTok{\% g}\StringTok{rowth : 2016\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-13-output-1.pdf} + +\subsection{Histograms}\label{histograms} + +You are likely familiar with histograms from Data 8. A histogram +collects continuous data into bins, then plots this binned data. Each +bin reflects the density of datapoints with values that lie between the +left and right ends of the bin; in other words, the \textbf{area} of +each bin is proportional to the \textbf{percentage} of datapoints it +contains. + +\subsubsection{Plotting Histograms}\label{plotting-histograms} + +Below, we plot a histogram using matplotlib and seaborn. Which graph do +you prefer? + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# The \textasciigrave{}edgecolor\textasciigrave{} argument controls the color of the bin edges} +\NormalTok{gni }\OperatorTok{=}\NormalTok{ wb[}\StringTok{"Gross national income per capita, Atlas method: $: 2016"}\NormalTok{]} +\NormalTok{plt.hist(gni, density}\OperatorTok{=}\VariableTok{True}\NormalTok{, edgecolor}\OperatorTok{=}\StringTok{"white"}\NormalTok{)} + +\CommentTok{\# Add labels} +\NormalTok{plt.xlabel(}\StringTok{"Gross national income per capita"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Distribution of gross national income per capita"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-14-output-1.pdf} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.histplot(data}\OperatorTok{=}\NormalTok{wb, x}\OperatorTok{=}\StringTok{"Gross national income per capita, Atlas method: $: 2016"}\NormalTok{, stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Distribution of gross national income per capita"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-15-output-1.pdf} + +\subsubsection{Overlaid Histograms}\label{overlaid-histograms} + +We can overlay histograms (or density curves) to compare distributions +across qualitative categories. 
+ +The \texttt{hue} parameter of \texttt{sns.histplot} specifies the column +that should be used to determine the color of each category. +\texttt{hue} can be used in many \texttt{seaborn} plotting functions. + +Notice that the resulting plot includes a legend describing which color +corresponds to each hemisphere -- a legend should always be included if +color is used to encode information in a visualization! + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Create a new variable to store the hemisphere in which each country is located} +\NormalTok{north }\OperatorTok{=}\NormalTok{ [}\StringTok{"Asia"}\NormalTok{, }\StringTok{"Europe"}\NormalTok{, }\StringTok{"N. America"}\NormalTok{]} +\NormalTok{south }\OperatorTok{=}\NormalTok{ [}\StringTok{"Africa"}\NormalTok{, }\StringTok{"Oceania"}\NormalTok{, }\StringTok{"S. America"}\NormalTok{]} +\NormalTok{wb.loc[wb[}\StringTok{"Continent"}\NormalTok{].isin(north), }\StringTok{"Hemisphere"}\NormalTok{] }\OperatorTok{=} \StringTok{"Northern"} +\NormalTok{wb.loc[wb[}\StringTok{"Continent"}\NormalTok{].isin(south), }\StringTok{"Hemisphere"}\NormalTok{] }\OperatorTok{=} \StringTok{"Southern"} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.histplot(data}\OperatorTok{=}\NormalTok{wb, x}\OperatorTok{=}\StringTok{"Gross national income per capita, Atlas method: $: 2016"}\NormalTok{, hue}\OperatorTok{=}\StringTok{"Hemisphere"}\NormalTok{, stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Distribution of gross national income per capita"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-17-output-1.pdf} + +Again, each bin of a histogram is scaled such that its \textbf{area} is +proportional to the \textbf{percentage} of all datapoints that it +contains. 
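As a purely illustrative example with made-up numbers: a bin that spans an
interval of width 5 and is drawn with a density (height) of 0.04 contains

\[5 \times 0.04 = 0.20 = 20\%\]

of the data. The cell below checks this width-times-height relationship on
the first bin of the gross national income histogram.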
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{densities, bins, \_ }\OperatorTok{=}\NormalTok{ plt.hist(gni, density}\OperatorTok{=}\VariableTok{True}\NormalTok{, edgecolor}\OperatorTok{=}\StringTok{"white"}\NormalTok{, bins}\OperatorTok{=}\DecValTok{5}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Gross national income per capita"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"First bin has width }\SpecialCharTok{\{}\NormalTok{bins[}\DecValTok{1}\NormalTok{]}\OperatorTok{{-}}\NormalTok{bins[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{ and height }\SpecialCharTok{\{}\NormalTok{densities[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"This corresponds to }\SpecialCharTok{\{}\NormalTok{bins[}\DecValTok{1}\NormalTok{]}\OperatorTok{{-}}\NormalTok{bins[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{ * }\SpecialCharTok{\{}\NormalTok{densities[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{ = }\SpecialCharTok{\{}\NormalTok{(bins[}\DecValTok{1}\NormalTok{]}\OperatorTok{{-}}\NormalTok{bins[}\DecValTok{0}\NormalTok{])}\OperatorTok{*}\NormalTok{densities[}\DecValTok{0}\NormalTok{]}\OperatorTok{*}\DecValTok{100}\SpecialCharTok{\}}\SpecialStringTok{\% of the data"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +First bin has width 16410.0 and height 4.7741589911386953e-05 +This corresponds to 16410.0 * 4.7741589911386953e-05 = 78.343949044586% of the data +\end{verbatim} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-18-output-2.pdf} + +\subsubsection{Evaluating Histograms}\label{evaluating-histograms} + +Histograms allow us to assess a distribution by their shape. There are a +few properties of histograms we can analyze: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Skewness and Tails + + \begin{itemize} + \tightlist + \item + Skewed left vs skewed right + \item + Left tail vs right tail + \end{itemize} +\item + Outliers + + \begin{itemize} + \tightlist + \item + Using percentiles + \end{itemize} +\item + Modes + + \begin{itemize} + \tightlist + \item + Most commonly occuring data + \end{itemize} +\end{enumerate} + +\paragraph{Skewness and Tails}\label{skewness-and-tails} + +The skew of a histogram describes the direction in which its ``tail'' +extends. - A distribution with a long right tail is \textbf{skewed +right} (such as \texttt{Gross\ national\ income\ per\ capita}). In a +right-skewed distribution, the few large outliers ``pull'' the mean to +the \textbf{right} of the median. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.histplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{\textquotesingle{}Gross national income per capita, Atlas method: $: 2016\textquotesingle{}}\NormalTok{, stat }\OperatorTok{=} \StringTok{\textquotesingle{}density\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\NormalTok{plt.title(}\StringTok{\textquotesingle{}Distribution with a long right tail\textquotesingle{}}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Text(0.5, 1.0, 'Distribution with a long right tail') +\end{verbatim} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-19-output-2.pdf} + +\begin{itemize} +\tightlist +\item + A distribution with a long left tail is \textbf{skewed left} (such as + \texttt{Access\ to\ an\ improved\ water\ source}). In a left-skewed + distribution, the few small outliers ``pull'' the mean to the + \textbf{left} of the median. +\end{itemize} + +In the case where a distribution has equal-sized right and left tails, +it is \textbf{symmetric}. The mean is approximately \textbf{equal} to +the median. Think of mean as the balancing point of the distribution. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.histplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{\textquotesingle{}Access to an improved water source: }\SpecialCharTok{\% o}\StringTok{f population: 2015\textquotesingle{}}\NormalTok{, stat }\OperatorTok{=} \StringTok{\textquotesingle{}density\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\NormalTok{plt.title(}\StringTok{\textquotesingle{}Distribution with a long left tail\textquotesingle{}}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Text(0.5, 1.0, 'Distribution with a long left tail') +\end{verbatim} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-20-output-2.pdf} + +\paragraph{Outliers}\label{outliers} + +Loosely speaking, an \textbf{outlier} is defined as a data point that +lies an abnormally large distance away from other values. Let's make +this more concrete. As you may have observed in the box plot infographic +earlier, we define \textbf{outliers} to be the data points that fall +beyond the whiskers. Specifically, values that are less than the +{[}\(1^{st}\) Quartile \(-\) (\(1.5\times\) IQR){]}, or greater than +{[}\(3^{rd}\) Quartile \(+\) (\(1.5\times\) IQR).{]} + +\paragraph{Modes}\label{modes} + +In Data 100, we describe a ``mode'' of a histogram as a peak in the +distribution. Often, however, it is difficult to determine what counts +as its own ``peak.'' For example, the number of peaks in the +distribution of HIV rates across different countries varies depending on +the number of histogram bins we plot. + +If we set the number of bins to 5, the distribution appears unimodal. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Rename the very long column name for convenience} +\NormalTok{wb }\OperatorTok{=}\NormalTok{ wb.rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}Antiretroviral therapy coverage: }\SpecialCharTok{\% o}\StringTok{f people living with HIV: 2015\textquotesingle{}}\NormalTok{:}\StringTok{"HIV rate"}\NormalTok{\})} +\CommentTok{\# With 5 bins, it seems that there is only one peak} +\NormalTok{sns.histplot(data}\OperatorTok{=}\NormalTok{wb, x}\OperatorTok{=}\StringTok{"HIV rate"}\NormalTok{, stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{, bins}\OperatorTok{=}\DecValTok{5}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"5 histogram bins"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-21-output-1.pdf} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# With 10 bins, there seem to be two peaks} + +\NormalTok{sns.histplot(data}\OperatorTok{=}\NormalTok{wb, x}\OperatorTok{=}\StringTok{"HIV rate"}\NormalTok{, stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{, bins}\OperatorTok{=}\DecValTok{10}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"10 histogram bins"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-22-output-1.pdf} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# And with 20 bins, it becomes hard to say what counts as a "peak"!} + +\NormalTok{sns.histplot(data}\OperatorTok{=}\NormalTok{wb, x }\OperatorTok{=}\StringTok{"HIV rate"}\NormalTok{, stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{, bins}\OperatorTok{=}\DecValTok{20}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"20 histogram bins"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_1/visualization_1_files/figure-pdf/cell-23-output-1.pdf} + +In part, it is these ambiguities that motivate us to consider using +Kernel Density Estimation (KDE), which we will explore more in the next +lecture. + +\bookmarksetup{startatroot} + +\chapter{Visualization II}\label{visualization-ii} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Understanding KDE for plotting distributions and estimating density + curves. +\item + Using transformations to analyze the relationship between two + variables. +\item + Evaluating the quality of a visualization based on visualization + theory concepts. +\end{itemize} + +\end{tcolorbox} + +\section{Kernel Density Estimation}\label{kernel-density-estimation} + +Often, we want to identify general trends across a distribution, rather +than focus on detail. Smoothing a distribution helps generalize the +structure of the data and eliminate noise. + +\subsection{KDE Theory}\label{kde-theory} + +A \textbf{kernel density estimate (KDE)} is a smooth, continuous +function that approximates a curve. It allows us to represent general +trends in a distribution without focusing on the details, which is +useful for analyzing the broad structure of a dataset. 
+ +More formally, a KDE attempts to approximate the underlying +\textbf{probability distribution} from which our dataset was drawn. You +may have encountered the idea of a probability distribution in your +other classes; if not, we'll discuss it at length in the next lecture. +For now, you can think of a probability distribution as a description of +how likely it is for us to sample a particular value in our dataset. + +A KDE curve estimates the probability density function of a random +variable. Consider the example below, where we have used +\texttt{sns.displot} to plot both a histogram (containing the data +points we actually collected) and a KDE curve (representing the +\emph{approximated} probability distribution from which this data was +drawn) using data from the World Bank dataset (\texttt{wb}). + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} + +\NormalTok{wb }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/world\_bank.csv"}\NormalTok{, index\_col}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\NormalTok{wb }\OperatorTok{=}\NormalTok{ wb.rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}Antiretroviral therapy coverage: }\SpecialCharTok{\% o}\StringTok{f people living with HIV: 2015\textquotesingle{}}\NormalTok{:}\StringTok{"HIV rate"}\NormalTok{,} + \StringTok{\textquotesingle{}Gross national income per capita, Atlas method: $: 2016\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}gni\textquotesingle{}}\NormalTok{\})} +\NormalTok{wb.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllllllllllllll@{}} +\toprule\noalign{} +& Continent & Country & Primary completion rate: Male: \% of relevant +age group: 2015 & Primary completion rate: Female: \% of relevant age +group: 2015 & Lower secondary completion rate: Male: \% of relevant age +group: 2015 & Lower secondary completion rate: Female: \% of relevant +age group: 2015 & Youth literacy rate: Male: \% of ages 15-24: 2005-14 & +Youth literacy rate: Female: \% of ages 15-24: 2005-14 & Adult literacy +rate: Male: \% ages 15 and older: 2005-14 & Adult literacy rate: Female: +\% ages 15 and older: 2005-14 & ... & Access to improved sanitation +facilities: \% of population: 1990 & Access to improved sanitation +facilities: \% of population: 2015 & Child immunization rate: Measles: +\% of children ages 12-23 months: 2015 & Child immunization rate: DTP3: +\% of children ages 12-23 months: 2015 & Children with acute respiratory +infection taken to health provider: \% of children under age 5 with ARI: +2009-2016 & Children with diarrhea who received oral rehydration and +continuous feeding: \% of children under age 5 with diarrhea: 2009-2016 +& Children sleeping under treated bed nets: \% of children under age 5: +2009-2016 & Children with fever receiving antimalarial drugs: \% of +children under age 5 with fever: 2009-2016 & Tuberculosis: Treatment +success rate: \% of new cases: 2014 & Tuberculosis: Cases detection +rate: \% of new estimated cases: 2015 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & Africa & Algeria & 106.0 & 105.0 & 68.0 & 85.0 & 96.0 & 92.0 & 83.0 +& 68.0 & ... 
& 80.0 & 88.0 & 95.0 & 95.0 & 66.0 & 42.0 & NaN & NaN & +88.0 & 80.0 \\ +1 & Africa & Angola & NaN & NaN & NaN & NaN & 79.0 & 67.0 & 82.0 & 60.0 +& ... & 22.0 & 52.0 & 55.0 & 64.0 & NaN & NaN & 25.9 & 28.3 & 34.0 & +64.0 \\ +2 & Africa & Benin & 83.0 & 73.0 & 50.0 & 37.0 & 55.0 & 31.0 & 41.0 & +18.0 & ... & 7.0 & 20.0 & 75.0 & 79.0 & 23.0 & 33.0 & 72.7 & 25.9 & 89.0 +& 61.0 \\ +3 & Africa & Botswana & 98.0 & 101.0 & 86.0 & 87.0 & 96.0 & 99.0 & 87.0 +& 89.0 & ... & 39.0 & 63.0 & 97.0 & 95.0 & NaN & NaN & NaN & NaN & 77.0 +& 62.0 \\ +5 & Africa & Burundi & 58.0 & 66.0 & 35.0 & 30.0 & 90.0 & 88.0 & 89.0 & +85.0 & ... & 42.0 & 48.0 & 93.0 & 94.0 & 55.0 & 43.0 & 53.8 & 25.4 & +91.0 & 51.0 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} + +\NormalTok{sns.displot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{\textquotesingle{}HIV rate\textquotesingle{}}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ kde }\OperatorTok{=} \VariableTok{True}\NormalTok{, stat }\OperatorTok{=} \StringTok{"density"}\NormalTok{)} + +\NormalTok{plt.title(}\StringTok{"Distribution of HIV rates"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-3-output-1.pdf} + +Notice that the smooth KDE curve is higher when the histogram bins are +taller. You can think of the height of the KDE curve as representing how +``probable'' it is that we randomly sample a datapoint with the +corresponding value. This intuitively makes sense -- if we have already +collected more datapoints with a particular value (resulting in a tall +histogram bin), it is more likely that, if we randomly sample another +datapoint, we will sample one with a similar value (resulting in a high +KDE curve). + +The area under a probability density function should always integrate to +1, representing the fact that the total probability of a distribution +should always sum to 100\%. Hence, a KDE curve will always have an area +under the curve of 1. + +\subsection{Constructing a KDE}\label{constructing-a-kde} + +We perform kernel density estimation using three steps. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Place a kernel at each datapoint. +\item + Normalize the kernels to have a total area of 1 (across all kernels). +\item + Sum the normalized kernels. +\end{enumerate} + +We'll explain what a ``kernel'' is momentarily. + +To make things simpler, let's construct a KDE for a small, artificially +generated dataset of 5 datapoints: \([2.2, 2.8, 3.7, 5.3, 5.7]\). In the +plot below, each vertical bar represents one data point. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{data }\OperatorTok{=}\NormalTok{ [}\FloatTok{2.2}\NormalTok{, }\FloatTok{2.8}\NormalTok{, }\FloatTok{3.7}\NormalTok{, }\FloatTok{5.3}\NormalTok{, }\FloatTok{5.7}\NormalTok{]} + +\NormalTok{sns.rugplot(data, height}\OperatorTok{=}\FloatTok{0.3}\NormalTok{)} + +\NormalTok{plt.xlabel(}\StringTok{"Data"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} +\NormalTok{plt.xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{)} +\NormalTok{plt.ylim(}\DecValTok{0}\NormalTok{, }\FloatTok{0.5}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-4-output-1.pdf} + +Our goal is to create the following KDE curve, which was generated +automatically by \texttt{sns.kdeplot}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.kdeplot(data)} + +\NormalTok{plt.xlabel(}\StringTok{"Data"}\NormalTok{)} +\NormalTok{plt.xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{)} +\NormalTok{plt.ylim(}\DecValTok{0}\NormalTok{, }\FloatTok{0.5}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-5-output-1.pdf} + +\subsubsection{Step 1: Place a Kernel at Each Data +Point}\label{step-1-place-a-kernel-at-each-data-point} + +To begin generating a density curve, we need to choose a \textbf{kernel} +and \textbf{bandwidth value (\(\alpha\))}. What are these exactly? + +A \textbf{kernel} is a density curve. It is the mathematical function +that attempts to capture the randomness of each data point in our +sampled data. To explain what this means, consider just \emph{one} of +the datapoints in our dataset: \(2.2\). We obtained this datapoint by +randomly sampling some information out in the real world (you can +imagine \(2.2\) as representing a single measurement taken in an +experiment, for example). If we were to sample a new datapoint, we may +obtain a slightly different value. It could be higher than \(2.2\); it +could also be lower than \(2.2\). We make the assumption that any future +sampled datapoints will likely be similar in value to the data we've +already drawn. This means that our \emph{kernel} -- our description of +the probability of randomly sampling any new value -- will be greatest +at the datapoint we've already drawn but still have non-zero probability +above and below it. The area under any kernel should integrate to 1, +representing the total probability of drawing a new datapoint. + +A \textbf{bandwidth value}, usually denoted by \(\alpha\), represents +the width of the kernel. A large value of \(\alpha\) will result in a +wide, short kernel function, while a small value with result in a +narrow, tall kernel. + +Below, we place a \textbf{Gaussian kernel}, plotted in orange, over the +datapoint \(2.2\). A Gaussian kernel is simply the normal distribution, +which you may have called a bell curve in Data 8. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ gaussian\_kernel(x, z, a):} + \CommentTok{\# We\textquotesingle{}ll discuss where this mathematical formulation came from later} + \ControlFlowTok{return}\NormalTok{ (}\DecValTok{1}\OperatorTok{/}\NormalTok{np.sqrt(}\DecValTok{2}\OperatorTok{*}\NormalTok{np.pi}\OperatorTok{*}\NormalTok{a}\OperatorTok{**}\DecValTok{2}\NormalTok{)) }\OperatorTok{*}\NormalTok{ np.exp((}\OperatorTok{{-}}\NormalTok{(x }\OperatorTok{{-}}\NormalTok{ z)}\OperatorTok{**}\DecValTok{2} \OperatorTok{/}\NormalTok{ (}\DecValTok{2} \OperatorTok{*}\NormalTok{ a}\OperatorTok{**}\DecValTok{2}\NormalTok{)))} + +\CommentTok{\# Plot our datapoint} +\NormalTok{sns.rugplot([}\FloatTok{2.2}\NormalTok{], height}\OperatorTok{=}\FloatTok{0.3}\NormalTok{)} + +\CommentTok{\# Plot the kernel} +\NormalTok{x }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{, }\DecValTok{1000}\NormalTok{)} +\NormalTok{plt.plot(x, gaussian\_kernel(x, }\FloatTok{2.2}\NormalTok{, }\DecValTok{1}\NormalTok{))} + +\NormalTok{plt.xlabel(}\StringTok{"Data"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} +\NormalTok{plt.xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{)} +\NormalTok{plt.ylim(}\DecValTok{0}\NormalTok{, }\FloatTok{0.5}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-6-output-1.pdf} + +To begin creating our KDE, we place a kernel on \emph{each} datapoint in +our dataset. For our dataset of 5 points, we will have 5 kernels. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# You will work with the functions below in Lab 4} +\KeywordTok{def}\NormalTok{ create\_kde(kernel, pts, a):} + \CommentTok{\# Takes in a kernel, set of points, and alpha} + \CommentTok{\# Returns the KDE as a function} + \KeywordTok{def}\NormalTok{ f(x):} +\NormalTok{ output }\OperatorTok{=} \DecValTok{0} + \ControlFlowTok{for}\NormalTok{ pt }\KeywordTok{in}\NormalTok{ pts:} +\NormalTok{ output }\OperatorTok{+=}\NormalTok{ kernel(x, pt, a)} + \ControlFlowTok{return}\NormalTok{ output }\OperatorTok{/} \BuiltInTok{len}\NormalTok{(pts) }\CommentTok{\# Normalization factor} + \ControlFlowTok{return}\NormalTok{ f} + +\KeywordTok{def}\NormalTok{ plot\_kde(kernel, pts, a):} + \CommentTok{\# Calls create\_kde and plots the corresponding KDE} +\NormalTok{ f }\OperatorTok{=}\NormalTok{ create\_kde(kernel, pts, a)} +\NormalTok{ x }\OperatorTok{=}\NormalTok{ np.linspace(}\BuiltInTok{min}\NormalTok{(pts) }\OperatorTok{{-}} \DecValTok{5}\NormalTok{, }\BuiltInTok{max}\NormalTok{(pts) }\OperatorTok{+} \DecValTok{5}\NormalTok{, }\DecValTok{1000}\NormalTok{)} +\NormalTok{ y }\OperatorTok{=}\NormalTok{ [f(xi) }\ControlFlowTok{for}\NormalTok{ xi }\KeywordTok{in}\NormalTok{ x]} +\NormalTok{ plt.plot(x, y)}\OperatorTok{;} + +\KeywordTok{def}\NormalTok{ plot\_separate\_kernels(kernel, pts, a, norm}\OperatorTok{=}\VariableTok{False}\NormalTok{):} + \CommentTok{\# Plots individual kernels, which are then summed to create the KDE} +\NormalTok{ x }\OperatorTok{=}\NormalTok{ np.linspace(}\BuiltInTok{min}\NormalTok{(pts) }\OperatorTok{{-}} \DecValTok{5}\NormalTok{, }\BuiltInTok{max}\NormalTok{(pts) }\OperatorTok{+} \DecValTok{5}\NormalTok{, }\DecValTok{1000}\NormalTok{)} + \ControlFlowTok{for}\NormalTok{ pt }\KeywordTok{in}\NormalTok{ pts:} +\NormalTok{ y }\OperatorTok{=}\NormalTok{ kernel(x, pt, a)} + \ControlFlowTok{if}\NormalTok{ 
norm:} +\NormalTok{ y }\OperatorTok{/=} \BuiltInTok{len}\NormalTok{(pts)} +\NormalTok{ plt.plot(x, y)} + +\NormalTok{ plt.show()}\OperatorTok{;} + +\NormalTok{plt.xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{)} +\NormalTok{plt.ylim(}\DecValTok{0}\NormalTok{, }\FloatTok{0.5}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Data"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} + +\NormalTok{plot\_separate\_kernels(gaussian\_kernel, data, a }\OperatorTok{=} \DecValTok{1}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-7-output-1.pdf} + +\subsubsection{Step 2: Normalize Kernels to Have a Total Area of +1}\label{step-2-normalize-kernels-to-have-a-total-area-of-1} + +Above, we said that \emph{each} kernel has an area of 1. Earlier, we +also said that our goal is to construct a KDE curve using these kernels +with a \emph{total} area of 1. If we were to directly sum the kernels as +they are, we would produce a KDE curve with an integrated area of (5 +kernels) \(\times\) (area of 1 each) = 5. To avoid this, we will +\textbf{normalize} each of our kernels. This involves multiplying each +kernel by \(\frac{1}{\#\:\text{datapoints}}\). + +In the cell below, we multiply each of our 5 kernels by \(\frac{1}{5}\) +to apply normalization. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{plt.xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{)} +\NormalTok{plt.ylim(}\DecValTok{0}\NormalTok{, }\FloatTok{0.5}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Data"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} + +\CommentTok{\# The \textasciigrave{}norm\textasciigrave{} argument specifies whether or not to normalize the kernels} +\NormalTok{plot\_separate\_kernels(gaussian\_kernel, data, a }\OperatorTok{=} \DecValTok{1}\NormalTok{, norm }\OperatorTok{=} \VariableTok{True}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-8-output-1.pdf} + +\subsubsection{Step 3: Sum the Normalized +Kernels}\label{step-3-sum-the-normalized-kernels} + +Our KDE curve is the sum of the normalized kernels. Notice that the +final curve is identical to the plot generated by \texttt{sns.kdeplot} +we saw earlier! + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{plt.xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{10}\NormalTok{)} +\NormalTok{plt.ylim(}\DecValTok{0}\NormalTok{, }\FloatTok{0.5}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Data"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Density"}\NormalTok{)} + +\NormalTok{plot\_kde(gaussian\_kernel, data, a }\OperatorTok{=} \DecValTok{1}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-9-output-1.pdf} + +\subsection{Kernel Functions and +Bandwidths}\label{kernel-functions-and-bandwidths} + +A general ``KDE formula'' function is given above. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \(K_{\alpha}(x, x_i)\) is the kernel centered on the observation + \texttt{i}. + + \begin{itemize} + \tightlist + \item + Each kernel individually has area 1. + \item + x represents any number on the number line. It is the input to our + function. + \end{itemize} +\item + \(n\) is the number of observed datapoints that we have. 
+ + \begin{itemize} + \tightlist + \item + We multiply by \(\frac{1}{n}\) so that the total area of the KDE is + still 1. + \end{itemize} +\item + Each \(x_i \in \{x_1, x_2, \dots, x_n\}\) represents an observed + datapoint. + + \begin{itemize} + \tightlist + \item + These are what we use to create our KDE by summing multiple shifted + kernels centered at these points. + \end{itemize} +\end{enumerate} + +\begin{itemize} +\tightlist +\item + \(\alpha\) (alpha) is the bandwidth or smoothing parameter. +\end{itemize} + +A \textbf{kernel} (for our purposes) is a valid density function. This +means it: + +\begin{itemize} +\tightlist +\item + Must be non-negative for all inputs. +\item + Must integrate to 1. +\end{itemize} + +\subsubsection{Gaussian Kernel}\label{gaussian-kernel} + +The most common kernel is the \textbf{Gaussian kernel}. The Gaussian +kernel is equivalent to the Gaussian probability density function (the +Normal distribution), centered at the observed value with a standard +deviation of (this is known as the \textbf{bandwidth} parameter). + +\[K_a(x, x_i) = \frac{1}{\sqrt{2\pi\alpha^{2}}}e^{-\frac{(x-x_i)^{2}}{2\alpha^{2}}}\] + +In this formula: + +\begin{itemize} +\tightlist +\item + \(x\) (no subscript) represents any value along the x-axis of our plot +\item + \(x_i\) represents the \(i\) -th datapoint in our dataset. It is one + of the values that we have actually collected in our data sampling + process. In our example earlier, \(x_i=2.2\). Those of you who have + taken a probability class may recognize \(x_i\) as the \textbf{mean} + of the normal distribution. +\item + Each kernel is \textbf{centered} on our observed values, so its + distribution mean is \(x_i\). +\item + \(\alpha\) is the bandwidth parameter, representing the width of our + kernel. More formally, \(\alpha\) is the \textbf{standard deviation} + of the Gaussian curve. + + \begin{itemize} + \tightlist + \item + A large value of \(\alpha\) will produce a kernel that is wider and + shorter -- this leads to a smoother KDE when the kernels are summed + together. + \item + A small value of \(\alpha\) will produce a narrower, taller kernel, + and, with it, a noisier KDE. + \end{itemize} +\end{itemize} + +The details of this (admittedly intimidating) formula are less important +than understanding its role in kernel density estimation -- this +equation gives us the shape of each kernel. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Gaussian Kernel, \(\alpha\) = 0.1 +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Gaussian Kernel, \(\alpha\) = 1 +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +& \\ +\end{longtable} + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +Gaussian Kernel, \(\alpha\) = 2 +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Gaussian Kernel, \(\alpha\) = 10 +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +& \\ +\end{longtable} + +\subsubsection{Boxcar Kernel}\label{boxcar-kernel} + +Another example of a kernel is the \textbf{Boxcar kernel}. 
The boxcar +kernel assigns a uniform density to points within a ``window'' of the +observation, and a density of 0 elsewhere. The equation below is a +boxcar kernel with the center at \(x_i\) and the bandwidth of +\(\alpha\). + +\[K_a(x, x_i) = \begin{cases} + \frac{1}{\alpha}, & |x - x_i| \le \frac{\alpha}{2}\\ + 0, & \text{else } + \end{cases}\] + +The boxcar kernel is seldom used in practice -- we include it here to +demonstrate that a kernel function can take whatever form you would +like, provided it integrates to 1 and does not output negative values. + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ boxcar\_kernel(alpha, x, z):} + \ControlFlowTok{return}\NormalTok{ (((x}\OperatorTok{{-}}\NormalTok{z)}\OperatorTok{\textgreater{}={-}}\NormalTok{alpha}\OperatorTok{/}\DecValTok{2}\NormalTok{)}\OperatorTok{\&}\NormalTok{((x}\OperatorTok{{-}}\NormalTok{z)}\OperatorTok{\textless{}=}\NormalTok{alpha}\OperatorTok{/}\DecValTok{2}\NormalTok{))}\OperatorTok{/}\NormalTok{alpha} + +\NormalTok{xs }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{5}\NormalTok{, }\DecValTok{5}\NormalTok{, }\DecValTok{200}\NormalTok{)} +\NormalTok{alpha}\OperatorTok{=}\DecValTok{1} +\NormalTok{kde\_curve }\OperatorTok{=}\NormalTok{ [boxcar\_kernel(alpha, x, }\DecValTok{0}\NormalTok{) }\ControlFlowTok{for}\NormalTok{ x }\KeywordTok{in}\NormalTok{ xs]} +\NormalTok{plt.plot(xs, kde\_curve)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{figure}[H] + +{\centering \includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-10-output-1.pdf} + +} + +\caption{The Boxcar kernel centered at 0 with bandwidth \(\alpha\) = 1.} + +\end{figure}% + +The diagram on the right is how the density curve for our 5 point +dataset would have looked had we used the Boxcar kernel with bandwidth +\(\alpha\) = 1. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}} + >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +KDE +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Boxcar +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +& \\ +\end{longtable} + +\section{\texorpdfstring{Diving Deeper into +\texttt{displot}}{Diving Deeper into displot}}\label{diving-deeper-into-displot} + +As we saw earlier, we can use \texttt{seaborn}'s \texttt{displot} +function to plot various distributions. In particular, \texttt{displot} +allows you to specify the \texttt{kind} of plot and is a wrapper for +\texttt{histplot}, \texttt{kdeplot}, and \texttt{ecdfplot}. + +Below, we can see a couple of examples of how \texttt{sns.displot} can +be used to plot various distributions. + +First, we can plot a histogram by setting \texttt{kind} to +\texttt{"hist"}. Note that here we've specified +\texttt{stat\ =\ density} to normalize the histogram such that the area +under the histogram is equal to 1. 
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{sns.displot(data}\OperatorTok{=}\NormalTok{wb, }
+\NormalTok{            x}\OperatorTok{=}\StringTok{"gni"}\NormalTok{, }
+\NormalTok{            kind}\OperatorTok{=}\StringTok{"hist"}\NormalTok{, }
+\NormalTok{            stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{) }\CommentTok{\# default: stat=count and density integrates to 1}
+\NormalTok{plt.title(}\StringTok{"Distribution of gross national income per capita"}\NormalTok{)}\OperatorTok{;}
+\end{Highlighting}
+\end{Shaded}
+
+\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-11-output-1.pdf}
+
+Now, what if we want to generate a KDE plot? We can set \texttt{kind}
+to \texttt{"kde"}!
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{sns.displot(data}\OperatorTok{=}\NormalTok{wb, }
+\NormalTok{            x}\OperatorTok{=}\StringTok{"gni"}\NormalTok{, }
+\NormalTok{            kind}\OperatorTok{=}\StringTok{\textquotesingle{}kde\textquotesingle{}}\NormalTok{)}
+\NormalTok{plt.title(}\StringTok{"Distribution of gross national income per capita"}\NormalTok{)}\OperatorTok{;}
+\end{Highlighting}
+\end{Shaded}
+
+\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-12-output-1.pdf}
+
+And finally, if we want to generate an Empirical Cumulative Distribution
+Function (ECDF), we can specify \texttt{kind\ =\ "ecdf"}.
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{sns.displot(data}\OperatorTok{=}\NormalTok{wb, }
+\NormalTok{            x}\OperatorTok{=}\StringTok{"gni"}\NormalTok{, }
+\NormalTok{            kind}\OperatorTok{=}\StringTok{\textquotesingle{}ecdf\textquotesingle{}}\NormalTok{)}
+\NormalTok{plt.title(}\StringTok{"Cumulative Distribution of gross national income per capita"}\NormalTok{)}\OperatorTok{;}
+\end{Highlighting}
+\end{Shaded}
+
+\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-13-output-1.pdf}
+
+\section{Relationships Between Quantitative
+Variables}\label{relationships-between-quantitative-variables}
+
+Up until now, we've discussed how to visualize single-variable
+distributions. Going beyond this, we want to understand the
+relationship between pairs of numerical variables.
+
+\subsubsection{Scatter Plots}\label{scatter-plots}
+
+\textbf{Scatter plots} are one of the most useful tools in representing
+the relationship between \textbf{pairs} of quantitative variables. They
+are particularly important in gauging the strength, or correlation, of
+the relationship between variables. Knowledge of these relationships
+can then motivate decisions in our modeling process.
+
+In \texttt{matplotlib}, we use the function \texttt{plt.scatter} to
+generate a scatter plot. Notice that, unlike our examples of plotting
+single-variable distributions, now we specify sequences of values to be
+plotted along the x-axis \emph{and} the y-axis.
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{plt.scatter(wb[}\StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{], }\OperatorTok{\textbackslash{}} +\NormalTok{ wb[}\StringTok{\textquotesingle{}Adult literacy rate: Female: \% ages 15 and older: 2005{-}14\textquotesingle{}}\NormalTok{])} + +\NormalTok{plt.xlabel(}\StringTok{"}\SpecialCharTok{\% g}\StringTok{rowth per capita"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Female adult literacy rate"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-14-output-1.pdf} + +In \texttt{seaborn}, we call the function \texttt{sns.scatterplot}. We +use the \texttt{x} and \texttt{y} parameters to indicate the values to +be plotted along the x and y axes, respectively. By using the +\texttt{hue} parameter, we can specify a third variable to be used for +coloring each scatter point. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.scatterplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ y }\OperatorTok{=} \StringTok{"Adult literacy rate: Female: \% ages 15 and older: 2005{-}14"}\NormalTok{, } +\NormalTok{ hue }\OperatorTok{=} \StringTok{"Continent"}\NormalTok{)} + +\NormalTok{plt.title(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-15-output-1.pdf} + +\paragraph{Overplotting}\label{overplotting} + +Although the plots above communicate the general relationship between +the two plotted variables, they both suffer a major limitation -- +\textbf{overplotting}. Overplotting occurs when scatter points with +similar values are stacked on top of one another, making it difficult to +see the number of scatter points actually plotted in the visualization. +Notice how in the upper righthand region of the plots, we cannot easily +tell just how many points have been plotted. This makes our +visualizations difficult to interpret. + +We have a few methods to help reduce overplotting: + +\begin{itemize} +\tightlist +\item + Decreasing the size of the scatter point markers can improve + readability. We do this by setting a new value to the size parameter, + \texttt{s}, of \texttt{plt.scatter} or \texttt{sns.scatterplot}. +\item + \textbf{Jittering} is the process of adding a small amount of random + noise to all x and y values to slightly shift the position of each + datapoint. By randomly shifting all the data by some small distance, + we can discern individual points more clearly without modifying the + major trends of the original dataset. +\end{itemize} + +In the cell below, we first jitter the data using +\texttt{np.random.uniform}, then re-plot it with smaller markers. The +resulting plot is much easier to interpret. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Setting a seed ensures that we produce the same plot each time} +\CommentTok{\# This means that the course notes will not change each time you access them} +\NormalTok{np.random.seed(}\DecValTok{150}\NormalTok{)} + +\CommentTok{\# This call to np.random.uniform generates random numbers between {-}1 and 1} +\CommentTok{\# We add these random numbers to the original x data to jitter it slightly} +\NormalTok{x\_noise }\OperatorTok{=}\NormalTok{ np.random.uniform(}\OperatorTok{{-}}\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{, }\BuiltInTok{len}\NormalTok{(wb))} +\NormalTok{jittered\_x }\OperatorTok{=}\NormalTok{ wb[}\StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{] }\OperatorTok{+}\NormalTok{ x\_noise} + +\CommentTok{\# Repeat for y data} +\NormalTok{y\_noise }\OperatorTok{=}\NormalTok{ np.random.uniform(}\OperatorTok{{-}}\DecValTok{5}\NormalTok{, }\DecValTok{5}\NormalTok{, }\BuiltInTok{len}\NormalTok{(wb))} +\NormalTok{jittered\_y }\OperatorTok{=}\NormalTok{ wb[}\StringTok{"Adult literacy rate: Female: \% ages 15 and older: 2005{-}14"}\NormalTok{] }\OperatorTok{+}\NormalTok{ y\_noise} + +\CommentTok{\# Setting the size parameter \textasciigrave{}s\textasciigrave{} changes the size of each point} +\NormalTok{plt.scatter(jittered\_x, jittered\_y, s}\OperatorTok{=}\DecValTok{15}\NormalTok{)} + +\NormalTok{plt.xlabel(}\StringTok{"}\SpecialCharTok{\% g}\StringTok{rowth per capita (jittered)"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Female adult literacy rate (jittered)"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-16-output-1.pdf} + +\subsubsection{\texorpdfstring{\texttt{lmplot} and +\texttt{jointplot}}{lmplot and jointplot}}\label{lmplot-and-jointplot} + +\texttt{seaborn} also includes several built-in functions for creating +more sophisticated scatter plots. Two of the most commonly used examples +are \texttt{sns.lmplot} and \texttt{sns.jointplot}. + +\texttt{sns.lmplot} plots both a scatter plot \emph{and} a linear +regression line, all in one function call. We'll discuss linear +regression in a few lectures. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.lmplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ y }\OperatorTok{=} \StringTok{"Adult literacy rate: Female: \% ages 15 and older: 2005{-}14"}\NormalTok{)} + +\NormalTok{plt.title(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-17-output-1.pdf} + +\texttt{sns.jointplot} creates a visualization with three components: a +scatter plot, a histogram of the distribution of x values, and a +histogram of the distribution of y values. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.jointplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ y }\OperatorTok{=} \StringTok{"Adult literacy rate: Female: \% ages 15 and older: 2005{-}14"}\NormalTok{)} + +\CommentTok{\# plt.suptitle allows us to shift the title up so it does not overlap with the histogram} +\NormalTok{plt.suptitle(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)} +\NormalTok{plt.subplots\_adjust(top}\OperatorTok{=}\FloatTok{0.9}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-18-output-1.pdf} + +\subsubsection{Hex plots}\label{hex-plots} + +For datasets with a very large number of datapoints, jittering is +unlikely to fully resolve the issue of overplotting. In these cases, we +can attempt to visualize our data by its \emph{density}, rather than +displaying each individual datapoint. + +\textbf{Hex plots} can be thought of as two-dimensional histograms that +show the joint distribution between two variables. This is particularly +useful when working with very dense data. In a hex plot, the x-y plane +is binned into hexagons. Hexagons that are darker in color indicate a +greater density of data -- that is, there are more data points that lie +in the region enclosed by the hexagon. + +We can generate a hex plot using \texttt{sns.jointplot} modified with +the \texttt{kind} parameter. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.jointplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ y }\OperatorTok{=} \StringTok{"Adult literacy rate: Female: \% ages 15 and older: 2005{-}14"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ kind }\OperatorTok{=} \StringTok{"hex"}\NormalTok{)} + +\CommentTok{\# plt.suptitle allows us to shift the title up so it does not overlap with the histogram} +\NormalTok{plt.suptitle(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)} +\NormalTok{plt.subplots\_adjust(top}\OperatorTok{=}\FloatTok{0.9}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-19-output-1.pdf} + +\subsubsection{Contour Plots}\label{contour-plots} + +\textbf{Contour plots} are an alternative way of plotting the joint +distribution of two variables. You can think of them as the +2-dimensional versions of KDE plots. A contour plot can be interpreted +in a similar way to a +\href{https://gisgeography.com/contour-lines-topographic-map/}{topographic +map}. Each contour line represents an area that has the same +\emph{density} of datapoints throughout the region. Contours marked with +darker colors contain more datapoints (a higher density) in that region. + +\texttt{sns.kdeplot} will generate a contour plot if we specify both x +and y data. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.kdeplot(data }\OperatorTok{=}\NormalTok{ wb, x }\OperatorTok{=} \StringTok{"per capita: }\SpecialCharTok{\% g}\StringTok{rowth: 2016"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ y }\OperatorTok{=} \StringTok{"Adult literacy rate: Female: \% ages 15 and older: 2005{-}14"}\NormalTok{, }\OperatorTok{\textbackslash{}} +\NormalTok{ fill }\OperatorTok{=} \VariableTok{True}\NormalTok{)} + +\NormalTok{plt.title(}\StringTok{"Female adult literacy against }\SpecialCharTok{\% g}\StringTok{rowth"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-20-output-1.pdf} + +\section{Transformations}\label{transformations} + +We have now covered visualizations in great depth, looking into various +forms of visualizations, plotting libraries, and high-level theory. + +Much of this was done to uncover insights in data, which will prove +necessary when we begin building models of data later in the course. A +strong graphical correlation between two variables hints at an +underlying relationship that we may want to study in greater detail. +However, relying on visual relationships alone is limiting - not all +plots show association. The presence of outliers and other statistical +anomalies makes it hard to interpret data. + +\textbf{Transformations} are the process of manipulating data to find +significant relationships between variables. These are often found by +applying mathematical functions to variables that ``transform'' their +range of possible values and highlight some previously hidden +associations between data. + +To see why we may want to transform data, consider the following plot of +adult literacy rates against gross national income. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Some data cleaning to help with the next example} +\NormalTok{df }\OperatorTok{=}\NormalTok{ pd.DataFrame(index}\OperatorTok{=}\NormalTok{wb.index)} +\NormalTok{df[}\StringTok{\textquotesingle{}lit\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ wb[}\StringTok{\textquotesingle{}Adult literacy rate: Female: \% ages 15 and older: 2005{-}14\textquotesingle{}}\NormalTok{] }\OperatorTok{\textbackslash{}} + \OperatorTok{+}\NormalTok{ wb[}\StringTok{"Adult literacy rate: Male: \% ages 15 and older: 2005{-}14"}\NormalTok{]} +\NormalTok{df[}\StringTok{\textquotesingle{}inc\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ wb[}\StringTok{\textquotesingle{}gni\textquotesingle{}}\NormalTok{]} +\NormalTok{df.dropna(inplace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} + +\NormalTok{plt.scatter(df[}\StringTok{"inc"}\NormalTok{], df[}\StringTok{"lit"}\NormalTok{])} +\NormalTok{plt.xlabel(}\StringTok{"Gross national income per capita"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Adult literacy rate"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Adult literacy rate against GNI per capita"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-21-output-1.pdf} + +This plot is difficult to interpret for two reasons: + +\begin{itemize} +\tightlist +\item + The data shown in the visualization appears almost ``smushed'' -- it + is heavily concentrated in the upper lefthand region of the plot. Even + if we jittered the dataset, we likely would not be able to fully + assess all datapoints in that area. 
+\item
+  It is hard to generalize a clear relationship between the two plotted
+  variables. While adult literacy rate appears to share some positive
+  relationship with gross national income, we are not able to describe
+  the specifics of this trend in much detail.
+\end{itemize}
+
+A transformation would allow us to visualize this data more clearly,
+which, in turn, would enable us to describe the underlying relationship
+between our variables of interest.
+
+We will most commonly apply a transformation to \textbf{linearize a
+relationship} between variables. If we find a transformation to make a
+scatter plot of two variables linear, we can ``backtrack'' to find the
+exact relationship between the variables. This helps us in two major
+ways. Firstly, linear relationships are particularly simple to
+interpret -- we have an intuitive sense of what the slope and intercept
+of a linear trend represent, and how they can help us understand the
+relationship between two variables. Secondly, linear relationships are
+the backbone of linear models. We will begin exploring linear modeling
+in great detail next week. As we'll soon see, linear models become much
+more effective when we are working with linearized data.
+
+In the remainder of this note, we will discuss how to linearize a
+dataset to produce the result below. Notice that the resulting plot
+displays a rough linear relationship between the values plotted on the
+x and y axes.
+
+\subsection{Linearization and Applying
+Transformations}\label{linearization-and-applying-transformations}
+
+To linearize a relationship, begin by asking yourself: what makes the
+data non-linear? It is helpful to repeat this question for each
+variable in your visualization.
+
+Let's start by considering the gross national income variable in our
+plot above. Looking at the x values in the scatter plot, we can see
+that most of the datapoints are clumped together at relatively small
+values of x, while the scale of the horizontal axis is stretched and
+distorted by the few large outlying x values on the right.
+
+If we decreased the size of these outliers relative to the bulk of the
+data, we could reduce the distortion of the horizontal axis. How can we
+do this? We need a transformation that will:
+
+\begin{itemize}
+\tightlist
+\item
+  Decrease the magnitude of large x values by a significant amount.
+\item
+  Not drastically change the magnitude of small x values.
+\end{itemize}
+
+One function that produces this result is the \textbf{log
+transformation}. When we take the logarithm of a large number, the
+original number will decrease in magnitude dramatically. Conversely,
+when we take the logarithm of a small number, the original number does
+not change its value by as significant an amount (to illustrate this,
+consider the difference between \(\log{(100)} = 4.61\) and
+\(\log{(10)} = 2.3\)).
+
+In Data 100 (and most upper-division STEM classes), \(\log\) is used to
+refer to the natural logarithm with base \(e\).
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# np.log takes the logarithm of an array or Series} +\NormalTok{plt.scatter(np.log(df[}\StringTok{"inc"}\NormalTok{]), df[}\StringTok{"lit"}\NormalTok{])} + +\NormalTok{plt.xlabel(}\StringTok{"Log(gross national income per capita)"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Adult literacy rate"}\NormalTok{)} +\NormalTok{plt.title(}\StringTok{"Adult literacy rate against Log(GNI per capita)"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-22-output-1.pdf} + +After taking the logarithm of our x values, our plot appears much more +balanced in its horizontal scale. We no longer have many datapoints +clumped on one end and a few outliers out at extreme values. + +Let's repeat this reasoning for the y values. Considering only the +vertical axis of the plot, notice how there are many datapoints +concentrated at large y values. Only a few datapoints lie at smaller +values of y. + +If we were to ``spread out'' these large values of y more, we would no +longer see the dense concentration in one region of the y-axis. We need +a transformation that will: + +\begin{itemize} +\tightlist +\item + Increase the magnitude of large values of y so these datapoints are + distributed more broadly on the vertical scale, +\item + Not substantially alter the scaling of small values of y (we do not + want to drastically modify the lower end of the y axis, which is + already distributed evenly on the vertical scale). +\end{itemize} + +In this case, it is helpful to apply a \textbf{power transformation} -- +that is, raise our y values to a power. Let's try raising our adult +literacy rate values to the power of 4. Large values raised to the power +of 4 will increase in magnitude proportionally much more than small +values raised to the power of 4 (consider the difference between +\(2^4 = 16\) and \(200^4 = 1600000000\)). + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Apply a log transformation to the x values and a power transformation to the y values} +\NormalTok{plt.scatter(np.log(df[}\StringTok{"inc"}\NormalTok{]), df[}\StringTok{"lit"}\NormalTok{]}\OperatorTok{**}\DecValTok{4}\NormalTok{)} + +\NormalTok{plt.xlabel(}\StringTok{"Log(gross national income per capita)"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Adult literacy rate (4th power)"}\NormalTok{)} +\NormalTok{plt.suptitle(}\StringTok{"Adult literacy rate (4th power) against Log(GNI per capita)"}\NormalTok{)} +\NormalTok{plt.subplots\_adjust(top}\OperatorTok{=}\FloatTok{0.9}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-23-output-1.pdf} + +Our scatter plot is looking a lot better! Now, we are plotting the log +of our original x values on the horizontal axis, and the 4th power of +our original y values on the vertical axis. We start to see an +approximate \emph{linear} relationship between our transformed +variables. + +What can we take away from this? We now know that the log of gross +national income and adult literacy to the power of 4 are roughly +linearly related. If we denote the original, untransformed gross +national income values as \(x\) and the original adult literacy rate +values as \(y\), we can use the standard form of a linear fit to express +this relationship: + +\[y^4 = m(\log{x}) + b\] + +Where \(m\) represents the slope of the linear fit, while \(b\) +represents the intercept. 
+ +The cell below computes \(m\) and \(b\) for our transformed data. We'll +discuss how this code was generated in a future lecture. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# The code below fits a linear regression model. We\textquotesingle{}ll discuss it at length in a future lecture} +\ImportTok{from}\NormalTok{ sklearn.linear\_model }\ImportTok{import}\NormalTok{ LinearRegression} + +\NormalTok{model }\OperatorTok{=}\NormalTok{ LinearRegression()} +\NormalTok{model.fit(np.log(df[[}\StringTok{"inc"}\NormalTok{]]), df[}\StringTok{"lit"}\NormalTok{]}\OperatorTok{**}\DecValTok{4}\NormalTok{)} +\NormalTok{m, b }\OperatorTok{=}\NormalTok{ model.coef\_[}\DecValTok{0}\NormalTok{], model.intercept\_} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"The slope, m, of the transformed data is: }\SpecialCharTok{\{}\NormalTok{m}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"The intercept, b, of the transformed data is: }\SpecialCharTok{\{}\NormalTok{b}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} + +\NormalTok{df }\OperatorTok{=}\NormalTok{ df.sort\_values(}\StringTok{"inc"}\NormalTok{)} +\NormalTok{plt.scatter(np.log(df[}\StringTok{"inc"}\NormalTok{]), df[}\StringTok{"lit"}\NormalTok{]}\OperatorTok{**}\DecValTok{4}\NormalTok{, label}\OperatorTok{=}\StringTok{"Transformed data"}\NormalTok{)} +\NormalTok{plt.plot(np.log(df[}\StringTok{"inc"}\NormalTok{]), m}\OperatorTok{*}\NormalTok{np.log(df[}\StringTok{"inc"}\NormalTok{])}\OperatorTok{+}\NormalTok{b, c}\OperatorTok{=}\StringTok{"red"}\NormalTok{, label}\OperatorTok{=}\StringTok{"Linear regression"}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Log(gross national income per capita)"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Adult literacy rate (4th power)"}\NormalTok{)} +\NormalTok{plt.legend()}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +The slope, m, of the transformed data is: 336400693.43172705 +The intercept, b, of the transformed data is: -1802204836.0479987 +\end{verbatim} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-24-output-2.pdf} + +What if we want to understand the \emph{underlying} relationship between +our original variables, before they were transformed? We can simply +rearrange our linear expression above! + +Recall our linear relationship between the transformed variables +\(\log{x}\) and \(y^4\). + +\[y^4 = m(\log{x}) + b\] + +By rearranging the equation, we find a relationship between the +untransformed variables \(x\) and \(y\). + +\[y = [m(\log{x}) + b]^{(1/4)}\] + +When we plug in the values for \(m\) and \(b\) computed above, something +interesting happens. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Now, plug the values for m and b into the relationship between the untransformed x and y} +\NormalTok{plt.scatter(df[}\StringTok{"inc"}\NormalTok{], df[}\StringTok{"lit"}\NormalTok{], label}\OperatorTok{=}\StringTok{"Untransformed data"}\NormalTok{)} +\NormalTok{plt.plot(df[}\StringTok{"inc"}\NormalTok{], (m}\OperatorTok{*}\NormalTok{np.log(df[}\StringTok{"inc"}\NormalTok{])}\OperatorTok{+}\NormalTok{b)}\OperatorTok{**}\NormalTok{(}\DecValTok{1}\OperatorTok{/}\DecValTok{4}\NormalTok{), c}\OperatorTok{=}\StringTok{"red"}\NormalTok{, label}\OperatorTok{=}\StringTok{"Modeled relationship"}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Gross national income per capita"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Adult literacy rate"}\NormalTok{)} +\NormalTok{plt.legend()}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{visualization_2/visualization_2_files/figure-pdf/cell-25-output-1.pdf} + +We have found a relationship between our original variables -- gross +national income and adult literacy rate! + +Transformations are powerful tools for understanding our data in greater +detail. To summarize what we just achieved: + +\begin{itemize} +\tightlist +\item + We identified appropriate transformations to \textbf{linearize} the + original data. +\item + We used our knowledge of linear curves to compute the slope and + intercept of the transformed data. +\item + We used this slope and intercept information to derive a relationship + in the untransformed data. +\end{itemize} + +Linearization will be an important tool as we begin our work on linear +modeling next week. + +\subsubsection{Tukey-Mosteller Bulge +Diagram}\label{tukey-mosteller-bulge-diagram} + +The \textbf{Tukey-Mosteller Bulge Diagram} is a good guide when +determining possible transformations to achieve linearity. It is a +visual summary of the reasoning we just worked through above. + +How does it work? Each curved ``bulge'' represents a possible shape of +non-linear data. To use the diagram, find which of the four bulges +resembles your dataset the most closely. Then, look at the axes of the +quadrant for this bulge. The horizontal axis will list possible +transformations that could be applied to your x data for linearization. +Similarly, the vertical axis will list possible transformations that +could be applied to your y data. Note that each axis lists two possible +transformations. While \emph{either} of these transformations has the +\emph{potential} to linearize your dataset, note that this is an +iterative process. It's important to try out these transformations and +look at the results to see whether you've actually achieved linearity. +If not, you'll need to continue testing other possible transformations. + +Generally: + +\begin{itemize} +\tightlist +\item + \(\sqrt{}\) and \(\log{}\) will reduce the magnitude of large values. +\item + Powers (\(^2\) and \(^3\)) will increase the spread in magnitude of + large values. +\end{itemize} + +\textbf{Important:} You should still understand the \emph{logic} we +worked through to determine how best to transform the data. The bulge +diagram is just a summary of this same reasoning. You will be expected +to be able to explain why a given transformation is or is not +appropriate for linearization. + +\subsection{Additional Remarks}\label{additional-remarks} + +Visualization requires a lot of thought! + +\begin{itemize} +\tightlist +\item + There are many tools for visualizing distributions. 
+
+  \begin{itemize}
+  \tightlist
+  \item
+    Distribution of a single variable:
+
+    \begin{enumerate}
+    \def\labelenumi{\arabic{enumi}.}
+    \tightlist
+    \item
+      Rugplot
+    \item
+      Histogram
+    \item
+      Density plot
+    \item
+      Box plot
+    \item
+      Violin plot
+    \end{enumerate}
+  \item
+    Joint distribution of two quantitative variables:
+
+    \begin{enumerate}
+    \def\labelenumi{\arabic{enumi}.}
+    \tightlist
+    \item
+      Scatter plot
+    \item
+      Hex plot
+    \item
+      Contour plot
+    \end{enumerate}
+  \end{itemize}
+\end{itemize}
+
+This class primarily uses \texttt{seaborn} and \texttt{matplotlib}, but
+\texttt{pandas} also has basic built-in plotting methods. Many other
+visualization libraries exist, and \texttt{plotly} is one of them.
+
+\begin{itemize}
+\tightlist
+\item
+  \texttt{plotly} makes it very easy to create interactive plots.
+\item
+  \texttt{plotly} will occasionally appear in lecture code, labs, and
+  assignments!
+\end{itemize}
+
+Next, we'll go deeper into the theory behind visualization.
+
+\section{Visualization Theory}\label{visualization-theory}
+
+This section marks a pivot to the second major topic of this lecture -
+visualization theory. We'll discuss the abstract nature of
+visualizations and analyze how they convey information.
+
+Remember, we had two goals for visualizing data. This section is
+particularly important in:
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\tightlist
+\item
+  Helping us understand the data and results,
+\item
+  Communicating our results and conclusions with others.
+\end{enumerate}
+
+\subsection{Information Channels}\label{information-channels}
+
+Visualizations are able to convey information through various
+encodings. In the remainder of this lecture, we'll look at the use of
+color, scale, and depth, to name a few.
+
+\subsubsection{Encodings in Rugplots}\label{encodings-in-rugplots}
+
+One detail that we may have overlooked in our earlier discussion of
+rugplots is the importance of encodings. Rugplots are effective visuals
+because they utilize line thickness to encode frequency. Consider the
+following diagram:
+
+\subsubsection{Multi-Dimensional
+Encodings}\label{multi-dimensional-encodings}
+
+Encodings are also useful for representing multi-dimensional data.
+Notice how the following visual highlights four distinct ``dimensions''
+of data:
+
+\begin{itemize}
+\tightlist
+\item
+  X-axis
+\item
+  Y-axis
+\item
+  Area
+\item
+  Color
+\end{itemize}
+
+The human visual perception system can only really visualize data in
+three dimensions, but as you've seen, we can encode many more channels
+of information.
+
+\subsection{Harnessing the Axes}\label{harnessing-the-axes}
+
+\subsubsection{Consider the Scale of the
+Data}\label{consider-the-scale-of-the-data}
+
+We should be careful not to misrepresent relationships in our data by
+manipulating the scale or axes. The visualization below improperly
+portrays two seemingly independent relationships on the same plot. The
+authors have clearly changed the scale of the y-axis to mislead their
+audience.
+
+Notice how the downwards-facing line segment contains values in the
+millions, while the upwards-trending segment only contains values near
+three hundred thousand. These lines should not be intersecting.
+
+When there is a large difference in the magnitude of the data, it's
+advised to analyze percentages instead of counts. The following
+diagrams correctly display the trends in cancer screening and abortion
+rates.
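+
+To make the percentages point concrete, the sketch below uses made-up
+numbers (the series names and values are hypothetical stand-ins, not
+the data from the diagrams referenced above). Two series whose raw
+counts differ by roughly an order of magnitude are hard to compare on
+one count axis; expressing each as a percent change from its own
+starting value puts them on a comparable scale.
+
+\begin{verbatim}
+import numpy as np
+import matplotlib.pyplot as plt
+
+# Hypothetical counts: series_a is about ten times larger than series_b
+years = np.arange(2010, 2018)
+series_a = np.array([2.00, 1.95, 1.90, 1.86, 1.81, 1.77, 1.72, 1.68]) * 1e6
+series_b = np.array([300, 304, 309, 315, 322, 330, 339, 349]) * 1e3
+
+# On a shared count axis, series_b would look nearly flat.
+# Percent change relative to each series' first year makes the trends comparable.
+plt.plot(years, 100 * (series_a / series_a[0] - 1), label="series_a")
+plt.plot(years, 100 * (series_b / series_b[0] - 1), label="series_b")
+plt.xlabel("Year")
+plt.ylabel("% change since 2010")
+plt.title("Percent change puts very different magnitudes on one scale")
+plt.legend();
+\end{verbatim}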
+
+\subsubsection{Reveal the Data}\label{reveal-the-data}
+
+Great visualizations not only consider the scale of the data but also
+utilize the axes in a way that best conveys information. For example,
+data scientists commonly set certain axis limits to highlight parts of
+the visualization they are most interested in.
+
+The visualization on the right captures the trend in coronavirus cases
+during March of 2020. From only looking at the visualization on the
+left, a viewer may incorrectly believe that coronavirus began to
+skyrocket on March 4\textsuperscript{th}, 2020. However, the second
+illustration tells a different story - cases rose closer to March
+21\textsuperscript{st}, 2020.
+
+\subsection{Harnessing Color}\label{harnessing-color}
+
+Color is another important feature in visualizations that does more
+than what meets the eye.
+
+We already explored using color to encode a categorical variable in our
+scatter plot. Let's now discuss the uses of color in novel
+visualizations like colormaps and heatmaps.
+
+5-8\% of the world is red-green color blind, so we have to be very
+particular about our color scheme. We want to make these as accessible
+as possible. Choosing a set of colors that work together is evidently a
+challenging task!
+
+\subsubsection{Colormaps}\label{colormaps}
+
+Colormaps are mappings from pixel data to color values, and they're
+often used to highlight distinct parts of an image. Let's investigate a
+few properties of colormaps.
+
+\textbf{Jet Colormap}
+
+\textbf{Viridis Colormap}
+
+The jet colormap is infamous for being misleading. While it seems more
+vibrant than viridis, the aggressive colors poorly encode numerical
+data. To understand why, let's analyze the following images.
+
+The diagram on the left compares how a variety of colormaps represent
+pixel data that transitions from a high to low intensity. These include
+the jet colormap (row a) and grayscale (row b). Notice how the
+grayscale images do the best job in smoothly transitioning between
+pixel data. The jet colormap is the worst at this - the four images in
+row (a) look like a conglomeration of individual colors.
+
+The difference is also evident in the images labeled (a) and (b) on the
+left side. The grayscale image is better at preserving finer detail in
+the vertical line strokes. Additionally, grayscale is preferred in
+X-ray scans for being more neutral. The intensity of the dark red color
+in the jet colormap is frightening and indicates something is wrong.
+
+Why is the jet colormap so much worse? The answer lies in how its color
+composition is perceived by the human eye.
+
+\textbf{Jet Colormap Perception}
+
+\textbf{Viridis Colormap Perception}
+
+The jet colormap is largely misleading because it is not perceptually
+uniform. \textbf{Perceptually uniform colormaps} have the property that
+if the pixel data goes from 0.1 to 0.2, the perceptual change is the
+same as when the data goes from 0.8 to 0.9.
+
+Notice how this uniformity is present within the linear trend displayed
+in the viridis colormap. On the other hand, the jet colormap is largely
+non-linear - this is precisely why it's considered a worse colormap.
+
+\subsection{Harnessing Markings}\label{harnessing-markings}
+
+In our earlier discussion of multi-dimensional encodings, we analyzed a
+scatter plot with four pseudo-dimensions: the two axes, area, and
+color. Were these appropriate to use? The following diagram analyzes
+how well the human eye can distinguish between these ``markings''.
+ +There are a few key takeaways from this diagram + +\begin{itemize} +\tightlist +\item + Lengths are easy to discern. Don't use plots with jiggled baselines - + keep everything axis-aligned. +\item + Avoid pie charts! Angle judgments are inaccurate. +\item + Areas and volumes are hard to distinguish (area charts, word clouds, + etc.). +\end{itemize} + +\subsection{Harnessing Conditioning}\label{harnessing-conditioning} + +Conditioning is the process of comparing data that belong to separate +groups. We've seen this before in overlayed distributions, side-by-side +box plots, and scatter plots with categorical encodings. Here, we'll +introduce terminology that formalizes these examples. + +Consider an example where we want to analyze income earnings for males +and females with varying levels of education. There are multiple ways to +compare this data. + +The barplot is an example of \textbf{juxtaposition}: placing multiple +plots side by side, with the same scale. The scatter plot is an example +of \textbf{superposition}: placing multiple density curves and scatter +plots on top of each other. + +Which is better depends on the problem at hand. Here, superposition +makes the precise wage difference very clear from a quick glance. +However, many sophisticated plots convey information that favors the use +of juxtaposition. Below is one example. + +\subsection{Harnessing Context}\label{harnessing-context} + +The last component of a great visualization is perhaps the most critical +- the use of context. Adding informative titles, axis labels, and +descriptive captions are all best practices that we've heard repeatedly +in Data 8. + +A publication-ready plot (and every Data 100 plot) needs: + +\begin{itemize} +\tightlist +\item + Informative title (takeaway, not description), +\item + Axis labels, +\item + Reference lines, markers, etc, +\item + Legends, if appropriate, +\item + Captions that describe data, +\end{itemize} + +Captions should: + +\begin{itemize} +\tightlist +\item + Be comprehensive and self-contained, +\item + Describe what has been graphed, +\item + Draw attention to important features, +\item + Describe conclusions drawn from graphs. +\end{itemize} + +\bookmarksetup{startatroot} + +\chapter{Sampling}\label{sampling} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Understand how to appropriately collect data to help answer a + question. +\end{itemize} + +\end{tcolorbox} + +In data science, understanding characteristics of a population starts +with having quality data to investigate. While it is often impossible to +collect all the data describing a population, we can overcome this by +properly sampling from the population. In this note, we will discuss +appropriate techniques for sampling from populations. 
+
+\begin{figure}[H]
+
+{\centering \includegraphics{sampling/images/data_life_cycle_sampling.png}
+
+}
+
+\caption{Lifecycle diagram}
+
+\end{figure}%
+
+\section{Censuses and Surveys}\label{censuses-and-surveys}
+
+In general: a \textbf{census} is ``a complete count or survey of a
+\textbf{population}, typically recording various details of
+\textbf{individuals}.'' An example is the U.S. Decennial Census, which
+was held in April 2020. It counts \emph{every person} living in all 50
+states, DC, and US territories, not just citizens. Participation is
+required by law (it is mandated by the U.S. Constitution). Important
+uses include the allocation of Federal funds, congressional
+representation, and drawing congressional and state legislative
+districts. The census is composed of a \textbf{survey} mailed to
+different housing addresses in the United States.
+
+A \textbf{survey} is a set of questions. An example is the
+questionnaire census workers use when sampling individuals and
+households. What is asked and how it is asked can affect how the
+respondent answers or even whether or not they answer in the first
+place.
+
+While censuses are great, it is often very difficult and expensive to
+survey everyone in a population. Imagine the amount of resources,
+money, time, and energy the U.S. spent on the 2020 Census. While this
+does give us more accurate information about the population, it's
+often infeasible to execute. Thus, we usually survey a subset of the
+population instead.
+
+A \textbf{sample} is (usually) a subset of the population that is often
+used to make inferences about the population. If our sample is a good
+representation of our population, then we can use it to glean useful
+information at a lower cost. That being said, how the sample is drawn
+will affect the reliability of such inferences. Two common sources of
+error in sampling are \textbf{chance error}, where random samples can
+vary from what is expected in any direction, and \textbf{bias}, which
+is a systematic error in one direction. Biases can be the result of
+many things, for example, our sampling scheme or survey methods.
+
+Let's define some useful vocabulary:
+
+\begin{itemize}
+\tightlist
+\item
+  \textbf{Population}: The group that you want to learn something
+  about.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    \textbf{Individuals} in a population are not always people. Other
+    populations include bacteria in your gut (sampled using DNA
+    sequencing), trees of a certain species, small businesses receiving
+    a microloan, or published results in an academic journal or field.
+  \end{itemize}
+\item
+  \textbf{Sampling Frame}: The list from which the sample is drawn.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    For example, if sampling people, then the sampling frame is the set
+    of all people that could possibly end up in your sample.
+  \end{itemize}
+\item
+  \textbf{Sample}: Who you actually end up sampling. The sample is
+  therefore a subset of your \emph{sampling frame}.
+\end{itemize}
+
+While ideally these three sets would be exactly the same, they usually
+aren't in practice. For example, there may be individuals in your
+sampling frame (and hence, your sample) that are not in your
+population. And generally, sample sizes are much smaller than
+population sizes. The toy sketch below makes this vocabulary concrete.
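+
+As a toy illustration of this vocabulary (the names below are made up
+and are not part of the course materials), the sketch represents a
+population, a sampling frame, and a sample as Python sets and checks
+how they overlap.
+
+\begin{verbatim}
+# Toy example: population, sampling frame, and sample as sets
+population = {"Ana", "Bo", "Cy", "Dee", "Eli", "Fay"}   # who we want to learn about
+sampling_frame = {"Bo", "Cy", "Dee", "Eli", "Gus"}      # who could end up in the sample
+sample = {"Cy", "Eli", "Gus"}                           # who we actually sampled
+
+print(sample <= sampling_frame)     # True: the sample is a subset of the frame
+print(sampling_frame - population)  # {'Gus'}: in the frame but not the population
+print(population - sampling_frame)  # {'Ana', 'Fay'}: can never be sampled
+\end{verbatim}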
+ +\begin{figure}[H] + +{\centering \includegraphics{sampling/images/samplingframe.png} + +} + +\caption{Sampling\_Frames} + +\end{figure}% + +\section{Bias: A Case Study}\label{bias-a-case-study} + +The following case study is adapted from \emph{Statistics} by Freedman, +Pisani, and Purves, W.W. Norton NY, 1978. + +In 1936, President Franklin D. Roosevelt (Democratic) went up for +re-election against Alf Landon (Republican). As is usual, \textbf{polls} +were conducted in the months leading up to the election to try and +predict the outcome. The \emph{Literary Digest} was a magazine that had +successfully predicted the outcome of 5 general elections coming into +1936. In their polling for the 1936 election, they sent out their survey +to 10 million individuals whom they found from phone books, lists of +magazine subscribers, and lists of country club members. Of the roughly +2.4 million people who filled out the survey, only 43\% reported they +would vote for Roosevelt; thus, the \emph{Digest} predicted that Landon +would win. + +On election day, Roosevelt won in a landslide, winning 61\% of the +popular vote of about 45 million voters. How could the \emph{Digest} +have been so wrong with their polling? + +It turns out that the \emph{Literary Digest} sample was not +representative of the population. Their sampling frame of people found +in phone books, lists of magazine subscribers, and lists of country club +members were more affluent and tended to vote Republican. As such, their +sampling frame was inherently skewed in Landon's favor. The +\emph{Literary Digest} completely overlooked the lion's share of voters +who were still suffering through the Great Depression. Furthermore, they +had a dismal response rate (about 24\%); who knows how the other +non-respondents would have polled? The \emph{Digest} folded just 18 +months after this disaster. + +At the same time, George Gallup, a rising statistician, also made +predictions about the 1936 elections. Despite having a smaller sample +size of ``only'' 50,000 (this is still more than necessary; more when we +cover the Central Limit Theorem), his estimate that 56\% of voters would +choose Roosevelt was much closer to the actual result (61\%). Gallup +also predicted the \emph{Digest}'s prediction within 1\% with a sample +size of only 3000 people by anticipating the \emph{Digest}'s affluent +sampling frame and subsampling those individuals. + +So what's the moral of the story? Samples, while convenient, are subject +to chance error and \textbf{bias}. Election polling, in particular, can +involve many sources of bias. To name a few: + +\begin{itemize} +\tightlist +\item + \textbf{Selection bias} systematically excludes (or favors) particular + groups. + + \begin{itemize} + \tightlist + \item + Example: the Literary Digest poll excludes people not in phone + books. + \item + How to avoid: Examine the sampling frame and the method of sampling. + \end{itemize} +\item + \textbf{Response bias} occurs because people don't always respond + truthfully. Survey designers pay special detail to the nature and + wording of questions to avoid this type of bias. + + \begin{itemize} + \tightlist + \item + Example: Illegal immigrants might not answer truthfully when asked + citizenship questions on the census survey. + \item + How to avoid: Examine the nature of questions and the method of + surveying. Randomized response - flip a coin and answer yes if heads + or answer truthfully if tails. 
+  \end{itemize}
+\item
+  \textbf{Non-response bias} occurs because people don't always respond
+  to survey requests, which can skew responses.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    Example: Only 2.4m out of 10m people responded to the
+    \emph{Literary Digest}'s poll.
+  \item
+    How to avoid: Keep surveys short, and be persistent.
+  \end{itemize}
+\end{itemize}
+
+\textbf{Randomized Response}
+
+Suppose you want to ask someone a sensitive question: ``Have you ever
+cheated on an exam?'' An individual may be embarrassed or afraid to
+answer truthfully and might lie or not answer the question. One
+solution is to leverage a randomized response:
+
+First, you can ask the individual to secretly flip a fair coin; you
+(the surveyor) \emph{don't} know the outcome of the coin flip.
+
+Then, you ask them to \textbf{answer ``Yes''} if the coin landed heads
+and to \textbf{answer truthfully} if the coin landed tails.
+
+The surveyor doesn't know if the \textbf{``Yes''} means that the
+\textbf{person cheated} or if it means that the \textbf{coin landed
+heads}. The individual's sensitive information remains secret. However,
+if the response is \textbf{``No''}, then the surveyor knows the
+\textbf{individual didn't cheat}. We assume the individual is
+comfortable revealing this information.
+
+Because the coin lands heads about 50\% of the time, roughly half of
+the people who would truthfully answer ``No'' are instead forced to
+answer ``Yes''; in other words, we only ever observe about half of the
+true ``No'' answers. We can therefore \textbf{double} the observed
+proportion of ``No'' answers to estimate the \textbf{true} fraction of
+``No'' answers (a short simulation of this estimate appears at the end
+of this section).
+
+\textbf{Election Polls}
+
+Today, the \emph{Gallup Poll} is one of the leading polls for election
+results. The many sources of biases -- who responds to polls? Do voters
+tell the truth? How can we predict turnout? -- still remain, but the
+\emph{Gallup Poll} uses several tactics to mitigate them. Within their
+sampling frame of the ``civilian, non-institutionalized population'' of
+adults in telephone households in the continental U.S., they use random
+digit dialing to include both listed and unlisted phone numbers and to
+avoid selection bias. Additionally, they use a within-household
+selection process to randomly select households with one or more
+adults. If no one answers, they re-call multiple times to avoid
+non-response bias.
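+
+Here is the simulation sketch promised above. The true rate of
+non-cheaters (70\%) is a made-up number used purely for illustration;
+the code assumes \texttt{numpy} is available.
+
+\begin{verbatim}
+import numpy as np
+
+rng = np.random.default_rng(42)
+
+n = 100_000
+true_no_rate = 0.7                        # hypothetical: 70% never cheated
+
+cheated = rng.random(n) >= true_no_rate   # True means this person cheated
+heads = rng.random(n) < 0.5               # secret fair coin flip
+
+# Protocol: answer "Yes" if heads; otherwise answer truthfully.
+# So an observed "No" requires tails AND a truthful non-cheater.
+answered_no = (~heads) & (~cheated)
+
+observed_no = answered_no.mean()
+print(f"Observed 'No' proportion:  {observed_no:.3f}")      # roughly 0.35
+print(f"Doubled (estimated truth): {2 * observed_no:.3f}")   # roughly 0.70
+\end{verbatim}
+
+Doubling works because only the tails half of the respondents can ever
+answer ``No'', so the observed ``No'' proportion is about half of the
+true one.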
+
+\section{Probability Samples}\label{probability-samples}
+
+When sampling, it is essential to focus on the quality of the sample
+rather than the quantity of the sample. A huge sample size does not fix
+a bad sampling method. Our main goal is to gather a sample that is
+representative of the population it came from. In this section, we'll
+explore the different types of sampling and their pros and cons.
+
+A \textbf{convenience sample} is whatever you can get ahold of; this
+type of sampling is \emph{non-random}. Note that haphazard sampling is
+not necessarily random sampling; there are many potential sources of
+bias.
+
+In a \textbf{probability sample}, we provide the \textbf{chance} that
+any specified \textbf{set} of individuals will be in the sample
+(individuals in the population can have different chances of being
+selected; they don't all have to be uniform), and we sample at random
+based on this known chance. For this reason, probability samples are
+also called \textbf{random samples}. The randomness provides a few
+benefits:
+
+\begin{itemize}
+\tightlist
+\item
+  Because we know the source probabilities, we can \textbf{measure the
+  errors}.
+\item
+  Sampling at random gives us a more representative sample of the
+  population, which \textbf{reduces bias}. (Note: this is only the case
+  when the probability distribution we're sampling from is accurate.
+  Random samples using ``bad'' or inaccurate distributions can produce
+  biased estimates of population quantities.)
+\item
+  Probability samples allow us to \textbf{estimate} the \textbf{bias}
+  and \textbf{chance error}, which helps us \textbf{quantify
+  uncertainty} (more in a future lecture).
+\end{itemize}
+
+The real world is usually more complicated, and we often don't know the
+initial probabilities. For example, we do not generally know the
+probability that a given bacterium is in a microbiome sample or whether
+people will answer when Gallup calls landlines. That being said, we
+still try to model probability sampling to the best of our ability,
+even when the sampling or measurement process is not fully under our
+control.
+
+A few common random sampling schemes:
+
+\begin{itemize}
+\tightlist
+\item
+  A \textbf{uniform random sample with replacement} is a sample drawn
+  \textbf{uniformly} at random \textbf{with} replacement.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    Random doesn't always mean ``uniformly at random,'' but in this
+    specific context, it does.
+  \item
+    Some individuals in the population might get picked more than once.
+  \end{itemize}
+\item
+  A \textbf{simple random sample (SRS)} is a sample drawn
+  \textbf{uniformly} at random \textbf{without} replacement.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    Every individual (and subset of individuals) has the same chance of
+    being selected from the sampling frame.
+  \item
+    Every pair has the same chance as every other pair.
+  \item
+    Every triple has the same chance as every other triple.
+  \item
+    And so on.
+  \end{itemize}
+\item
+  A \textbf{stratified random sample}, where random sampling is
+  performed on strata (specific groups), and the groups together
+  compose a sample.
+\end{itemize}
+
+\subsection{Example Scheme 1: Probability
+Sample}\label{example-scheme-1-probability-sample}
+
+Suppose we have 3 TAs (\textbf{A}rman, \textbf{B}oyu,
+\textbf{C}harlie): I decide to sample 2 of them as follows:
+
+\begin{itemize}
+\tightlist
+\item
+  I choose A with probability 1.0.
+\item
+  I choose either B or C, each with a probability of 0.5.
+\end{itemize}
+
+We can list all the possible outcomes and their respective
+probabilities in a table:
+
+\begin{longtable}[]{@{}ll@{}}
+\toprule\noalign{}
+Outcome & Probability \\
+\midrule\noalign{}
+\endhead
+\bottomrule\noalign{}
+\endlastfoot
+\{A, B\} & 0.5 \\
+\{A, C\} & 0.5 \\
+\{B, C\} & 0 \\
+\end{longtable}
+
+This is a \textbf{probability sample} (though not a great one). Of the
+3 people in my population, I know the chance of getting each subset.
+Suppose I'm measuring the average distance TAs live from campus.
+
+\begin{itemize}
+\tightlist
+\item
+  This scheme does not see the entire population!
+\item
+  My estimate using the single sample I take has some chance error
+  depending on whether I see AB or AC.
+\item
+  This scheme is biased towards A's response.
+\end{itemize}
+
+\subsection{Example Scheme 2: Simple Random
+Sample}\label{example-scheme-2-simple-random-sample}
+
+Consider the following sampling scheme:
+
+\begin{itemize}
+\tightlist
+\item
+  A class roster has 1100 students listed alphabetically.
+\item
+  Pick one of the first 10 students on the list at random (e.g.~Student
+  8).
+\item
+  To create your sample, take that student and every 10th student
+  listed after that (e.g.~Students 8, 18, 28, 38, etc.).
+\end{itemize}
+
+Is this a probability sample?
+
+Yes. For a sample {[}n, n + 10, n + 20, \ldots, n + 1090{]}, where 1
+\textless= n \textless= 10, the probability of that sample is 1/10.
+Otherwise, the probability is 0.
+
+Only 10 possible samples!
+
+Does each student have the same probability of being selected?
+
+Yes. Each student is chosen with a probability of 1/10.
+
+Is this a simple random sample?
+
+No.~The chance of selecting (8, 18) is 1/10; the chance of selecting
+(8, 9) is 0.
+
+\subsection{Demo: Barbie v.
+Oppenheimer}\label{demo-barbie-v.-oppenheimer}
+
+We are trying to collect a sample from Berkeley residents to predict
+which one of Barbie and Oppenheimer would perform better on their
+opening day, July 21st.
+
+First, let's grab a dataset that has every single resident in Berkeley
+(this is a fake dataset) and which movie they \textbf{actually} watched
+on July 21st.
+
+Let's load in the \texttt{movie.csv} table. We can assume that:
+
+\begin{itemize}
+\tightlist
+\item
+  \texttt{is\_male} is a boolean that indicates if a resident
+  identifies as male.
+\item
+  There are only two movies they can watch on July 21st: Barbie and
+  Oppenheimer.
+\item
+  Every resident watches a movie (either Barbie or Oppenheimer) on July
+  21st.
+\end{itemize}
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt}
+\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np}
+\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd}
+\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns}
+
+\NormalTok{sns.set\_theme(style}\OperatorTok{=}\StringTok{\textquotesingle{}darkgrid\textquotesingle{}}\NormalTok{, font\_scale }\OperatorTok{=} \FloatTok{1.5}\NormalTok{,}
+\NormalTok{              rc}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}figure.figsize\textquotesingle{}}\NormalTok{:(}\DecValTok{7}\NormalTok{,}\DecValTok{5}\NormalTok{)\})}
+
+\NormalTok{rng }\OperatorTok{=}\NormalTok{ np.random.default\_rng()}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{movie }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/movie.csv"}\NormalTok{)}
+
+\CommentTok{\# create a 1/0 int that indicates Barbie vote}
+\NormalTok{movie[}\StringTok{\textquotesingle{}barbie\textquotesingle{}}\NormalTok{] }\OperatorTok{=}\NormalTok{ (movie[}\StringTok{\textquotesingle{}movie\textquotesingle{}}\NormalTok{] }\OperatorTok{==} \StringTok{\textquotesingle{}Barbie\textquotesingle{}}\NormalTok{).astype(}\BuiltInTok{int}\NormalTok{)}
+\NormalTok{movie.head()}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{longtable}[]{@{}lllll@{}}
+\toprule\noalign{}
+& age & is\_male & movie & barbie \\
+\midrule\noalign{}
+\endhead
+\bottomrule\noalign{}
+\endlastfoot
+0 & 35 & False & Barbie & 1 \\
+1 & 42 & True & Oppenheimer & 0 \\
+2 & 55 & False & Barbie & 1 \\
+3 & 77 & True & Oppenheimer & 0 \\
+4 & 31 & False & Barbie & 1 \\
+\end{longtable}
+
+What fraction of Berkeley residents chose Barbie?
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{actual\_barbie }\OperatorTok{=}\NormalTok{ np.mean(movie[}\StringTok{"barbie"}\NormalTok{])}
+\NormalTok{actual\_barbie}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{verbatim}
+np.float64(0.5302792307692308)
+\end{verbatim}
+
+This is the \textbf{actual outcome} of the competition.
+Based on this result, Barbie would win. How did our sample of retirees
+do?
+
+\subsubsection{Convenience Sample:
+Retirees}\label{convenience-sample-retirees}
+
+Let's take a convenience sample of people who have retired
+(\textgreater= 65 years old). What proportion of them went to see
+Barbie instead of Oppenheimer?
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{convenience\_sample }\OperatorTok{=}\NormalTok{ movie[movie[}\StringTok{\textquotesingle{}age\textquotesingle{}}\NormalTok{] }\OperatorTok{\textgreater{}=} \DecValTok{65}\NormalTok{] }\CommentTok{\# take a convenience sample of retirees}
+\NormalTok{np.mean(convenience\_sample[}\StringTok{"barbie"}\NormalTok{]) }\CommentTok{\# what proportion of them saw Barbie? }
+\end{Highlighting}
+\end{Shaded}
+
+\begin{verbatim}
+np.float64(0.3744755089093924)
+\end{verbatim}
+
+Based on this result, we would have predicted that Oppenheimer would
+win! What happened? Is it possible that our sample is too small or
+noisy?
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\CommentTok{\# what\textquotesingle{}s the size of our sample? }
+\BuiltInTok{len}\NormalTok{(convenience\_sample)}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{verbatim}
+359396
+\end{verbatim}
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\CommentTok{\# what proportion of our data is in the convenience sample? }
+\BuiltInTok{len}\NormalTok{(convenience\_sample)}\OperatorTok{/}\BuiltInTok{len}\NormalTok{(movie)}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{verbatim}
+0.27645846153846154
+\end{verbatim}
+
+It seems like our sample is rather large (roughly 360,000 people), so
+the error is likely not due solely to chance.
+
+\subsubsection{Check for Bias}\label{check-for-bias}
+
+Let us aggregate all choices by age and visualize the fraction of
+Barbie views, split by gender.
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{votes\_by\_barbie }\OperatorTok{=}\NormalTok{ movie.groupby([}\StringTok{"age"}\NormalTok{,}\StringTok{"is\_male"}\NormalTok{]).agg(}\StringTok{"mean"}\NormalTok{, numeric\_only}\OperatorTok{=}\VariableTok{True}\NormalTok{).reset\_index()} +\NormalTok{votes\_by\_barbie.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& age & is\_male & barbie \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 18 & False & 0.819594 \\ +1 & 18 & True & 0.667001 \\ +2 & 19 & False & 0.812214 \\ +3 & 19 & True & 0.661252 \\ +4 & 20 & False & 0.805281 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# A common matplotlib/seaborn pattern: create the figure and axes object, pass ax} +\CommentTok{\# to seaborn for drawing into, and later fine{-}tune the figure via ax.} +\NormalTok{fig, ax }\OperatorTok{=}\NormalTok{ plt.subplots()}\OperatorTok{;} + +\NormalTok{red\_blue }\OperatorTok{=}\NormalTok{ [}\StringTok{"\#bf1518"}\NormalTok{, }\StringTok{"\#397eb7"}\NormalTok{]} +\ControlFlowTok{with}\NormalTok{ sns.color\_palette(red\_blue):} +\NormalTok{ sns.pointplot(data}\OperatorTok{=}\NormalTok{votes\_by\_barbie, x }\OperatorTok{=} \StringTok{"age"}\NormalTok{, y }\OperatorTok{=} \StringTok{"barbie"}\NormalTok{, hue }\OperatorTok{=} \StringTok{"is\_male"}\NormalTok{, ax}\OperatorTok{=}\NormalTok{ax)} + +\NormalTok{new\_ticks }\OperatorTok{=}\NormalTok{ [i.get\_text() }\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in}\NormalTok{ ax.get\_xticklabels()]} +\NormalTok{ax.set\_xticks(}\BuiltInTok{range}\NormalTok{(}\DecValTok{0}\NormalTok{, }\BuiltInTok{len}\NormalTok{(new\_ticks), }\DecValTok{10}\NormalTok{), new\_ticks[::}\DecValTok{10}\NormalTok{])} +\NormalTok{ax.set\_title(}\StringTok{"Preferences by Demographics"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{sampling/sampling_files/figure-pdf/cell-9-output-1.pdf} + +\begin{itemize} +\tightlist +\item + We see that retirees (in Berkeley) tend to watch Oppenheimer. +\item + We also see that residents who identify as non-male tend to prefer + Barbie. +\end{itemize} + +\subsubsection{Simple Random Sample}\label{simple-random-sample} + +Suppose we took a simple random sample (SRS) of the same size as our +retiree sample: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{n }\OperatorTok{=} \BuiltInTok{len}\NormalTok{(convenience\_sample)} +\NormalTok{random\_sample }\OperatorTok{=}\NormalTok{ movie.sample(n, replace }\OperatorTok{=} \VariableTok{False}\NormalTok{) }\CommentTok{\#\# By default, replace = False} +\NormalTok{np.mean(random\_sample[}\StringTok{"barbie"}\NormalTok{])} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(0.5299112956182038) +\end{verbatim} + +This is very close to the actual vote of 0.5302792307692308! 
+ +It turns out that we can get similar results with a \textbf{much smaller +sample size}, say, 800: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{n }\OperatorTok{=} \DecValTok{800} +\NormalTok{random\_sample }\OperatorTok{=}\NormalTok{ movie.sample(n, replace }\OperatorTok{=} \VariableTok{False}\NormalTok{)} + +\CommentTok{\# Compute the sample average and the resulting relative error} +\NormalTok{sample\_barbie }\OperatorTok{=}\NormalTok{ np.mean(random\_sample[}\StringTok{"barbie"}\NormalTok{])} +\NormalTok{err }\OperatorTok{=} \BuiltInTok{abs}\NormalTok{(sample\_barbie}\OperatorTok{{-}}\NormalTok{actual\_barbie)}\OperatorTok{/}\NormalTok{actual\_barbie} + +\CommentTok{\# We can print output with Markdown formatting too...} +\ImportTok{from}\NormalTok{ IPython.display }\ImportTok{import}\NormalTok{ Markdown} +\NormalTok{Markdown(}\SpecialStringTok{f"**Actual** = }\SpecialCharTok{\{}\NormalTok{actual\_barbie}\SpecialCharTok{:.4f\}}\SpecialStringTok{, **Sample** = }\SpecialCharTok{\{}\NormalTok{sample\_barbie}\SpecialCharTok{:.4f\}}\SpecialStringTok{, "} + \SpecialStringTok{f"**Err** = }\SpecialCharTok{\{}\DecValTok{100}\OperatorTok{*}\NormalTok{err}\SpecialCharTok{:.2f\}}\SpecialStringTok{\%."}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\textbf{Actual} = 0.5303, \textbf{Sample} = 0.5075, \textbf{Err} = +4.30\%. + +We'll learn how to choose this number when we (re)learn the Central +Limit Theorem later in the semester. + +\subsubsection{Quantifying Chance Error}\label{quantifying-chance-error} + +In our SRS of size 800, what would be our chance error? + +Let's simulate 1000 versions of taking the 800-sized SRS from before: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{nrep }\OperatorTok{=} \DecValTok{1000} \CommentTok{\# number of simulations} +\NormalTok{n }\OperatorTok{=} \DecValTok{800} \CommentTok{\# size of our sample} +\NormalTok{poll\_result }\OperatorTok{=}\NormalTok{ []} +\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in} \BuiltInTok{range}\NormalTok{(}\DecValTok{0}\NormalTok{, nrep):} +\NormalTok{ random\_sample }\OperatorTok{=}\NormalTok{ movie.sample(n, replace }\OperatorTok{=} \VariableTok{False}\NormalTok{)} +\NormalTok{ poll\_result.append(np.mean(random\_sample[}\StringTok{"barbie"}\NormalTok{]))} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{fig, ax }\OperatorTok{=}\NormalTok{ plt.subplots()} +\NormalTok{sns.histplot(poll\_result, stat}\OperatorTok{=}\StringTok{\textquotesingle{}density\textquotesingle{}}\NormalTok{, ax}\OperatorTok{=}\NormalTok{ax)} +\NormalTok{ax.axvline(actual\_barbie, color}\OperatorTok{=}\StringTok{"orange"}\NormalTok{, lw}\OperatorTok{=}\DecValTok{4}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{sampling/sampling_files/figure-pdf/cell-13-output-1.pdf} + +What fraction of these simulated samples would have predicted Barbie? + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{poll\_result }\OperatorTok{=}\NormalTok{ pd.Series(poll\_result)} +\NormalTok{np.}\BuiltInTok{sum}\NormalTok{(poll\_result }\OperatorTok{\textgreater{}} \FloatTok{0.5}\NormalTok{)}\OperatorTok{/}\DecValTok{1000} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +np.float64(0.95) +\end{verbatim} + +You can see the curve looks roughly Gaussian/normal. 
Using KDE: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{sns.histplot(poll\_result, stat}\OperatorTok{=}\StringTok{\textquotesingle{}density\textquotesingle{}}\NormalTok{, kde}\OperatorTok{=}\VariableTok{True}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{sampling/sampling_files/figure-pdf/cell-15-output-1.pdf} + +\section{Summary}\label{summary-1} + +Understanding the sampling process is what lets us go from describing +the data to understanding the world. Without knowing / assuming +something about how the data were collected, there is no connection +between the sample and the population. Ultimately, the dataset doesn't +tell us about the world behind the data. + +\bookmarksetup{startatroot} + +\chapter{Introduction to Modeling}\label{introduction-to-modeling} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Understand what models are and how to carry out the four-step modeling + process. +\item + Define the concept of loss and gain familiarity with \(L_1\) and + \(L_2\) loss. +\item + Fit the Simple Linear Regression model using minimization techniques. +\end{itemize} + +\end{tcolorbox} + +Up until this point in the semester, we've focused on analyzing +datasets. We've looked into the early stages of the data science +lifecycle, focusing on the programming tools, visualization techniques, +and data cleaning methods needed for data analysis. + +This lecture marks a shift in focus. We will move away from examining +datasets to actually \emph{using} our data to better understand the +world. Specifically, the next sequence of lectures will explore +predictive modeling: generating models to make some predictions about +the world around us. In this lecture, we'll introduce the conceptual +framework for setting up a modeling task. In the next few lectures, +we'll put this framework into practice by implementing various kinds of +models. + +\section{What is a Model?}\label{what-is-a-model} + +A model is an \textbf{idealized representation} of a system. A system is +a set of principles or procedures according to which something +functions. We live in a world full of systems: the procedure of turning +on a light happens according to a specific set of rules dictating the +flow of electricity. The truth behind how any event occurs is usually +complex, and many times the specifics are unknown. The workings of the +world can be viewed as its own giant procedure. Models seek to simplify +the world and distill them into workable pieces. + +Example: We model the fall of an object on Earth as subject to a +constant acceleration of \(9.81 m/s^2\) due to gravity. + +\begin{itemize} +\tightlist +\item + While this describes the behavior of our system, it is merely an + approximation. +\item + It doesn't account for the effects of air resistance, local variations + in gravity, etc. +\item + In practice, it's accurate enough to be useful! +\end{itemize} + +\subsection{Reasons for Building +Models}\label{reasons-for-building-models} + +Why do we want to build models? 
As far as data scientists and +statisticians are concerned, there are three reasons, and each implies a +different focus on modeling. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + To explain complex phenomena occurring in the world we live in. + Examples of this might be: + + \begin{itemize} + \tightlist + \item + How are the parents' average height related to their children's + average height? + \item + How does an object's velocity and acceleration impact how far it + travels? (Physics: \(d = d_0 + vt + \frac{1}{2}at^2\)) + \end{itemize} + + In these cases, we care about creating models that are \emph{simple + and interpretable}, allowing us to understand what the relationships + between our variables are. +\item + To make accurate predictions about unseen data. Some examples include: + + \begin{itemize} + \tightlist + \item + Can we predict if an email is spam or not? + \item + Can we generate a one-sentence summary of this 10-page long article? + \end{itemize} + + When making predictions, we care more about making extremely accurate + predictions, at the cost of having an uninterpretable model. These are + sometimes called black-box models and are common in fields like deep + learning. +\item + To measure the causal effects of one event on some other event. For + example, + + \begin{itemize} + \tightlist + \item + Does smoking \emph{cause} lung cancer? + \item + Does a job training program \emph{cause} increases in employment and + wages? + \end{itemize} + + This is a much harder question because most statistical tools are + designed to infer association, not causation. We will not focus on + this task in Data 100, but you can take other advanced classes on + causal inference (e.g., Stat 156, Data 102) if you are intrigued! +\end{enumerate} + +Most of the time, we aim to strike a balance between building +\textbf{interpretable} models and building \textbf{accurate models}. + +\subsection{Common Types of Models}\label{common-types-of-models} + +In general, models can be split into two categories: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Deterministic physical (mechanistic) models: Laws that govern how the + world works. + + \begin{itemize} + \tightlist + \item + \href{https://en.wikipedia.org/wiki/Kepler\%27s_laws_of_planetary_motion\#Third_law}{Kepler's + Third Law of Planetary Motion (1619)}: The ratio of the square of an + object's orbital period with the cube of the semi-major axis of its + orbit is the same for all objects orbiting the same primary. + + \begin{itemize} + \tightlist + \item + \(T^2 \propto R^3\) + \end{itemize} + \item + \href{https://en.wikipedia.org/wiki/Newton\%27s_laws_of_motion}{Newton's + Laws: motion and gravitation (1687)}: Newton's second law of motion + models the relationship between the mass of an object and the force + required to accelerate it. + + \begin{itemize} + \tightlist + \item + \(F = ma\) + \item + \(F_g = G \frac{m_1 m_2}{r^2}\) + \end{itemize} + \end{itemize} +\item + Probabilistic models: Models that attempt to understand how random + processes evolve. These are more general and can be used to describe + many phenomena in the real world. These models commonly make + simplifying assumptions about the nature of the world. 
+ + \begin{itemize} + \tightlist + \item + \href{https://en.wikipedia.org/wiki/Poisson_point_process}{Poisson + Process models}: Used to model random events that happen with some + probability at any point in time and are strictly increasing in + count, such as the arrival of customers at a store. + \end{itemize} +\end{enumerate} + +Note: These specific models are not in the scope of Data 100 and exist +to serve as motivation. + +\section{Simple Linear Regression}\label{simple-linear-regression} + +The \textbf{regression line} is the unique straight line that minimizes +the \textbf{mean squared error} of estimation among all straight lines. +As with any straight line, it can be defined by a slope and a +y-intercept: + +\begin{itemize} +\tightlist +\item + \(\text{slope} = r \cdot \frac{\text{Standard Deviation of } y}{\text{Standard Deviation of }x}\) +\item + \(y\text{-intercept} = \text{average of }y - \text{slope}\cdot\text{average of }x\) +\item + \(\text{regression estimate} = y\text{-intercept} + \text{slope}\cdot\text{}x\) +\item + \(\text{residual} =\text{observed }y - \text{regression estimate}\) +\end{itemize} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\CommentTok{\# Set random seed for consistency } +\NormalTok{np.random.seed(}\DecValTok{43}\NormalTok{)} +\NormalTok{plt.style.use(}\StringTok{\textquotesingle{}default\textquotesingle{}}\NormalTok{) } + +\CommentTok{\#Generate random noise for plotting} +\NormalTok{x }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{100}\NormalTok{)} +\NormalTok{y }\OperatorTok{=}\NormalTok{ x }\OperatorTok{*} \FloatTok{0.5} \OperatorTok{{-}} \DecValTok{1} \OperatorTok{+}\NormalTok{ np.random.randn(}\DecValTok{100}\NormalTok{) }\OperatorTok{*} \FloatTok{0.3} + +\CommentTok{\#plot regression line} +\NormalTok{sns.regplot(x}\OperatorTok{=}\NormalTok{x,y}\OperatorTok{=}\NormalTok{y)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{intro_to_modeling/intro_to_modeling_files/figure-pdf/cell-2-output-1.pdf} + +\subsection{Notations and Definitions}\label{notations-and-definitions} + +For a pair of variables \(x\) and \(y\) representing our data +\(\mathcal{D} = \{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}\), we +denote their means/averages as \(\bar x\) and \(\bar y\) and standard +deviations as \(\sigma_x\) and \(\sigma_y\). + +\subsubsection{Standard Units}\label{standard-units} + +A variable is represented in standard units if the following are true: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + 0 in standard units is equal to the mean (\(\bar{x}\)) in the original + variable's units. +\item + An increase of 1 standard unit is an increase of 1 standard deviation + (\(\sigma_x\)) in the original variable's units. +\end{enumerate} + +To convert a variable \(x_i\) into standard units, we subtract its mean +from it and divide it by its standard deviation. For example, \(x_i\) in +standard units is \(\frac{x_i - \bar x}{\sigma_x}\). + +\subsubsection{Correlation}\label{correlation} + +The correlation (\(r\)) is the average of the product of \(x\) and +\(y\), both measured in \emph{standard units}. 
+ +\[r = \frac{1}{n} \sum_{i=1}^n (\frac{x_i - \bar{x}}{\sigma_x})(\frac{y_i - \bar{y}}{\sigma_y})\] + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Correlation measures the strength of a \textbf{linear association} + between two variables. +\item + Correlations range between -1 and 1: \(|r| \leq 1\), with \(r=1\) + indicating perfect linear association, and \(r=-1\) indicating perfect + negative association. The closer \(r\) is to \(0\), the weaker the + linear association is. +\item + Correlation says nothing about causation and non-linear association. + Correlation does \textbf{not} imply causation. When \(r = 0\), the two + variables are uncorrelated. However, they could still be related + through some non-linear relationship. +\end{enumerate} + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ plot\_and\_get\_corr(ax, x, y, title):} +\NormalTok{ ax.set\_xlim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{)} +\NormalTok{ ax.set\_ylim(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{)} +\NormalTok{ ax.set\_xticks([])} +\NormalTok{ ax.set\_yticks([])} +\NormalTok{ ax.scatter(x, y, alpha }\OperatorTok{=} \FloatTok{0.73}\NormalTok{)} +\NormalTok{ r }\OperatorTok{=}\NormalTok{ np.corrcoef(x, y)[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{]} +\NormalTok{ ax.set\_title(title }\OperatorTok{+} \StringTok{" (corr: }\SpecialCharTok{\{\}}\StringTok{)"}\NormalTok{.}\BuiltInTok{format}\NormalTok{(r.}\BuiltInTok{round}\NormalTok{(}\DecValTok{2}\NormalTok{)))} + \ControlFlowTok{return}\NormalTok{ r} + +\NormalTok{fig, axs }\OperatorTok{=}\NormalTok{ plt.subplots(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, figsize }\OperatorTok{=}\NormalTok{ (}\DecValTok{10}\NormalTok{, }\DecValTok{10}\NormalTok{))} + +\CommentTok{\# Just noise} +\NormalTok{x1, y1 }\OperatorTok{=}\NormalTok{ np.random.randn(}\DecValTok{2}\NormalTok{, }\DecValTok{100}\NormalTok{)} +\NormalTok{corr1 }\OperatorTok{=}\NormalTok{ plot\_and\_get\_corr(axs[}\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{], x1, y1, title }\OperatorTok{=} \StringTok{"noise"}\NormalTok{)} + +\CommentTok{\# Strong linear} +\NormalTok{x2 }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{100}\NormalTok{)} +\NormalTok{y2 }\OperatorTok{=}\NormalTok{ x2 }\OperatorTok{*} \FloatTok{0.5} \OperatorTok{{-}} \DecValTok{1} \OperatorTok{+}\NormalTok{ np.random.randn(}\DecValTok{100}\NormalTok{) }\OperatorTok{*} \FloatTok{0.3} +\NormalTok{corr2 }\OperatorTok{=}\NormalTok{ plot\_and\_get\_corr(axs[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{], x2, y2, title }\OperatorTok{=} \StringTok{"strong linear"}\NormalTok{)} + +\CommentTok{\# Unequal spread} +\NormalTok{x3 }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{100}\NormalTok{)} +\NormalTok{y3 }\OperatorTok{=} \OperatorTok{{-}}\NormalTok{ x3}\OperatorTok{/}\DecValTok{3} \OperatorTok{+}\NormalTok{ np.random.randn(}\DecValTok{100}\NormalTok{)}\OperatorTok{*}\NormalTok{(x3)}\OperatorTok{/}\FloatTok{2.5} +\NormalTok{corr3 }\OperatorTok{=}\NormalTok{ plot\_and\_get\_corr(axs[}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{], x3, y3, title }\OperatorTok{=} \StringTok{"strong linear"}\NormalTok{)} +\NormalTok{extent }\OperatorTok{=}\NormalTok{ axs[}\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{].get\_window\_extent().transformed(fig.dpi\_scale\_trans.inverted())} + 
+\CommentTok{\# Strong non{-}linear} +\NormalTok{x4 }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{3}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{100}\NormalTok{)} +\NormalTok{y4 }\OperatorTok{=} \DecValTok{2}\OperatorTok{*}\NormalTok{np.sin(x3 }\OperatorTok{{-}} \FloatTok{1.5}\NormalTok{) }\OperatorTok{+}\NormalTok{ np.random.randn(}\DecValTok{100}\NormalTok{) }\OperatorTok{*} \FloatTok{0.3} +\NormalTok{corr4 }\OperatorTok{=}\NormalTok{ plot\_and\_get\_corr(axs[}\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{], x4, y4, title }\OperatorTok{=} \StringTok{"strong non{-}linear"}\NormalTok{)} + +\NormalTok{plt.show()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{intro_to_modeling/intro_to_modeling_files/figure-pdf/cell-3-output-1.pdf} + +\subsection{Alternate Form}\label{alternate-form} + +When the variables \(y\) and \(x\) are measured in \emph{standard +units}, the regression line for predicting \(y\) based on \(x\) has +slope \(r\) and passes through the origin. + +\[\hat{y}_{su} = r \cdot x_{su}\] + +\includegraphics{intro_to_modeling/images/reg_line_1.png} + +\begin{itemize} +\tightlist +\item + In the original units, this becomes +\end{itemize} + +\[\frac{\hat{y} - \bar{y}}{\sigma_y} = r \cdot \frac{x - \bar{x}}{\sigma_x}\] + +\includegraphics{intro_to_modeling/images/reg_line_2.png} + +\subsection{Derivation}\label{derivation} + +Starting from the top, we have our claimed form of the regression line, +and we want to show that it is equivalent to the optimal linear +regression line: \(\hat{y} = \hat{a} + \hat{b}x\). + +Recall: + +\begin{itemize} +\tightlist +\item + \(\hat{b} = r \cdot \frac{\text{Standard Deviation of }y}{\text{Standard Deviation of }x}\) +\item + \(\hat{a} = \text{average of }y - \text{slope}\cdot\text{average of }x\) +\end{itemize} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-color-frame, left=2mm, breakable, rightrule=.15mm, bottomrule=.15mm, opacityback=0, toprule=.15mm, leftrule=.75mm, arc=.35mm, colback=white] + +Proof: + +\[\frac{\hat{y} - \bar{y}}{\sigma_y} = r \cdot \frac{x - \bar{x}}{\sigma_x}\] + +Multiply by \(\sigma_y\), and add \(\bar{y}\) on both sides. + +\[\hat{y} = \sigma_y \cdot r \cdot \frac{x - \bar{x}}{\sigma_x} + \bar{y}\] + +Distribute coefficient \(\sigma_{y}\cdot r\) to the +\(\frac{x - \bar{x}}{\sigma_x}\) term + +\[\hat{y} = (\frac{r\sigma_y}{\sigma_x} ) \cdot x + (\bar{y} - (\frac{r\sigma_y}{\sigma_x} ) \bar{x})\] + +We now see that we have a line that matches our claim: + +\begin{itemize} +\tightlist +\item + slope: + \(r\cdot\frac{\text{SD of y}}{\text{SD of x}} = r\cdot\frac{\sigma_y}{\sigma_x}\) +\item + intercept: \(\bar{y} - \text{slope}\cdot \bar{x}\) +\end{itemize} + +Note that the error for the i-th datapoint is: \(e_i = y_i - \hat{y_i}\) + +\end{tcolorbox} + +\section{The Modeling Process}\label{the-modeling-process} + +At a high level, a model is a way of representing a system. In Data 100, +we'll treat a model as some mathematical rule we use to describe the +relationship between variables. + +What variables are we modeling? Typically, we use a subset of the +variables in our sample of collected data to model another variable in +this data. To put this more formally, say we have the following dataset +\(\mathcal{D}\): + +\[\mathcal{D} = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}\] + +Each pair of values \((x_i, y_i)\) represents a datapoint. In a modeling +setting, we call these \textbf{observations}. 
\(y_i\) is the dependent +variable we are trying to model, also called an \textbf{output} or +\textbf{response}. \(x_i\) is the independent variable inputted into the +model to make predictions, also known as a \textbf{feature}. + +Our goal in modeling is to use the observed data \(\mathcal{D}\) to +predict the output variable \(y_i\). We denote each prediction as +\(\hat{y}_i\) (read: ``y hat sub i''). + +How do we generate these predictions? Some examples of models we'll +encounter in the next few lectures are given below: + +\[\hat{y}_i = \theta\] \[\hat{y}_i = \theta_0 + \theta_1 x_i\] + +The examples above are known as \textbf{parametric models}. They relate +the collected data, \(x_i\), to the prediction we make, \(\hat{y}_i\). A +few parameters (\(\theta\), \(\theta_0\), \(\theta_1\)) are used to +describe the relationship between \(x_i\) and \(\hat{y}_i\). + +Notice that we don't immediately know the values of these parameters. +While the features, \(x_i\), are taken from our observed data, we need +to decide what values to give \(\theta\), \(\theta_0\), and \(\theta_1\) +ourselves. This is the heart of parametric modeling: \emph{what +parameter values should we choose so our model makes the best possible +predictions?} + +To choose our model parameters, we'll work through the \textbf{modeling +process}. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Choose a model: how should we represent the world? +\item + Choose a loss function: how do we quantify prediction error? +\item + Fit the model: how do we choose the best parameters of our model given + our data? +\item + Evaluate model performance: how do we evaluate whether this process + gave rise to a good model? +\end{enumerate} + +\section{Choosing a Model}\label{choosing-a-model} + +Our first step is choosing a model: defining the mathematical rule that +describes the relationship between the features, \(x_i\), and +predictions \(\hat{y}_i\). + +In +\href{https://inferentialthinking.com/chapters/15/4/Least_Squares_Regression.html}{Data +8}, you learned about the \textbf{Simple Linear Regression (SLR) model}. +You learned that the model takes the form: \[\hat{y}_i = a + bx_i\] + +In Data 100, we'll use slightly different notation: we will replace +\(a\) with \(\theta_0\) and \(b\) with \(\theta_1\). This will allow us +to use the same notation when we explore more complex models later on in +the course. + +\[\hat{y}_i = \theta_0 + \theta_1 x_i\] + +The parameters of the SLR model are \(\theta_0\), also called the +intercept term, and \(\theta_1\), also called the slope term. To create +an effective model, we want to choose values for \(\theta_0\) and +\(\theta_1\) that most accurately predict the output variable. The +``best'' fitting model parameters are given the special names: +\(\hat{\theta}_0\) and \(\hat{\theta}_1\); they are the specific +parameter values that allow our model to generate the best possible +predictions. 
+ +In Data 8, you learned that the best SLR model parameters are: +\[\hat{\theta}_0 = \bar{y} - \hat{\theta}_1\bar{x} \qquad \qquad \hat{\theta}_1 = r \frac{\sigma_y}{\sigma_x}\] + +A quick reminder on notation: + +\begin{itemize} +\tightlist +\item + \(\bar{y}\) and \(\bar{x}\) indicate the mean value of \(y\) and + \(x\), respectively +\item + \(\sigma_y\) and \(\sigma_x\) indicate the standard deviations of + \(y\) and \(x\) +\item + \(r\) is the + \href{https://inferentialthinking.com/chapters/15/1/Correlation.html\#the-correlation-coefficient}{correlation + coefficient}, defined as the average of the product of \(x\) and \(y\) + measured in standard units: + \(\frac{1}{n} \sum_{i=1}^n (\frac{x_i-\bar{x}}{\sigma_x})(\frac{y_i-\bar{y}}{\sigma_y})\) +\end{itemize} + +In Data 100, we want to understand \emph{how} to derive these best model +coefficients. To do so, we'll introduce the concept of a loss function. + +\section{Choosing a Loss Function}\label{choosing-a-loss-function} + +We've talked about the idea of creating the ``best'' possible +predictions. This begs the question: how do we decide how ``good'' or +``bad'' our model's predictions are? + +A \textbf{loss function} characterizes the cost, error, or fit resulting +from a particular choice of model or model parameters. This function, +\(L(y, \hat{y})\), quantifies how ``bad'' or ``far off'' a single +prediction by our model is from a true, observed value in our collected +data. + +The choice of loss function for a particular model will affect the +accuracy and computational cost of estimation, and it'll also depend on +the estimation task at hand. For example, + +\begin{itemize} +\tightlist +\item + Are outputs quantitative or qualitative? +\item + Do outliers matter? +\item + Are all errors equally costly? (e.g., a false negative on a cancer + test is arguably more dangerous than a false positive) +\end{itemize} + +Regardless of the specific function used, a loss function should follow +two basic principles: + +\begin{itemize} +\tightlist +\item + If the prediction \(\hat{y}_i\) is \emph{close} to the actual value + \(y_i\), loss should be low. +\item + If the prediction \(\hat{y}_i\) is \emph{far} from the actual value + \(y_i\), loss should be high. +\end{itemize} + +Two common choices of loss function are squared loss and absolute loss. + +\textbf{Squared loss}, also known as \textbf{L2 loss}, computes loss as +the square of the difference between the observed \(y_i\) and predicted +\(\hat{y}_i\): \[L(y_i, \hat{y}_i) = (y_i - \hat{y}_i)^2\] + +\textbf{Absolute loss}, also known as \textbf{L1 loss}, computes loss as +the absolute difference between the observed \(y_i\) and predicted +\(\hat{y}_i\): \[L(y_i, \hat{y}_i) = |y_i - \hat{y}_i|\] + +L1 and L2 loss give us a tool for quantifying our model's performance on +a single data point. This is a good start, but ideally, we want to +understand how our model performs across our \emph{entire} dataset. A +natural way to do this is to compute the average loss across all data +points in the dataset. This is known as the \textbf{cost function}, +\(\hat{R}(\theta)\): +\[\hat{R}(\theta) = \frac{1}{n} \sum^n_{i=1} L(y_i, \hat{y}_i)\] + +The cost function has many names in the statistics literature. You may +also encounter the terms: + +\begin{itemize} +\tightlist +\item + Empirical risk (this is why we give the cost function the name \(R\)) +\item + Error function +\item + Average loss +\end{itemize} + +We can substitute our L1 and L2 loss into the cost function definition. 

The \textbf{Mean Squared Error (MSE)} is the average squared loss across
a dataset: \[\text{MSE} = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2\]

The \textbf{Mean Absolute Error (MAE)} is the average absolute loss
across a dataset:
\[\text{MAE}= \frac{1}{n} \sum_{i=1}^n |y_i - \hat{y}_i|\]

\section{Fitting the Model}\label{fitting-the-model}

Now that we've established the concept of a loss function, we can return
to our original goal of choosing model parameters. Specifically, we want
to choose the best set of model parameters that will minimize the
model's cost on our dataset. This process is called fitting the model.

We know from calculus that a function is minimized when (1) its first
derivative is equal to zero and (2) its second derivative is positive.
We often call the function being minimized the \textbf{objective
function} (our objective is to find its minimum).

To find the optimal model parameter, we:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Take the derivative of the cost function with respect to that
  parameter
\item
  Set the derivative equal to 0
\item
  Solve for the parameter
\end{enumerate}

We repeat this process for each parameter present in the model. For now,
we'll disregard the second derivative condition.

To help us make sense of this process, let's put it into action by
deriving the optimal model parameters for simple linear regression using
the mean squared error as our cost function. Remember: although the
notation may look tricky, all we are doing is following the three steps
above!

Step 1: take the derivative of the cost function with respect to each
model parameter. We substitute the SLR model,
\(\hat{y}_i = \theta_0+\theta_1 x_i\), into the definition of MSE above
and differentiate with respect to \(\theta_0\) and \(\theta_1\).
\[\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \frac{1}{n} \sum_{i=1}^{n} (y_i - \theta_0 - \theta_1 x_i)^2\]

\[\frac{\partial}{\partial \theta_0} \text{MSE} = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \theta_0 - \theta_1 x_i)\]

\[\frac{\partial}{\partial \theta_1} \text{MSE} = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \theta_0 - \theta_1 x_i)x_i\]

Let's walk through these derivations in more depth, starting with the
derivative of MSE with respect to \(\theta_0\).

Given our MSE above, we know that:
\[\frac{\partial}{\partial \theta_0} \text{MSE} = \frac{\partial}{\partial \theta_0} \frac{1}{n} \sum_{i=1}^{n} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

Noting that the derivative of a sum is equivalent to the sum of the
derivatives, this then becomes:
\[ = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta_0} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

We can then apply the chain rule.

\[ = \frac{1}{n} \sum_{i=1}^{n} 2 \cdot {(y_i - \theta_0 - \theta_1 x_i)} \cdot (-1)\]

Finally, we can simplify the constants, leaving us with our answer.

\[\frac{\partial}{\partial \theta_0} \text{MSE} = \frac{-2}{n} \sum_{i=1}^{n}{(y_i - \theta_0 - \theta_1 x_i)}\]

Following the same procedure, we can take the derivative of MSE with
respect to \(\theta_1\).

\[\frac{\partial}{\partial \theta_1} \text{MSE} = \frac{\partial}{\partial \theta_1} \frac{1}{n} \sum_{i=1}^{n} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

\[ = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta_1} {(y_i - \theta_0 - \theta_1 x_i)}^{2}\]

\[ = \frac{1}{n} \sum_{i=1}^{n} 2 \cdot {(y_i - \theta_0 - \theta_1 x_i)} \cdot (-x_i)\]

\[= \frac{-2}{n} \sum_{i=1}^{n} {(y_i - \theta_0 - \theta_1 x_i)}x_i\]

Step 2: set the derivatives equal to 0. After simplifying terms, this
produces two \textbf{estimating equations}. The best set of model
parameters \((\hat{\theta}_0, \hat{\theta}_1)\) \emph{must} satisfy
these two optimality conditions.
\[0 = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \hat{\theta}_0 - \hat{\theta}_1 x_i) \Longleftrightarrow \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i) = 0\]
\[0 = \frac{-2}{n} \sum_{i=1}^{n} (y_i - \hat{\theta}_0 - \hat{\theta}_1 x_i)x_i \Longleftrightarrow \frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)x_i = 0\]

Step 3: solve the estimating equations to compute estimates for
\(\hat{\theta}_0\) and \(\hat{\theta}_1\).

Taking the first equation gives the estimate of \(\hat{\theta}_0\):
\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{\theta}_0 - \hat{\theta}_1 x_i) = 0 \]

\[\left(\frac{1}{n} \sum_{i=1}^n y_i \right) - \hat{\theta}_0 - \hat{\theta}_1\left(\frac{1}{n} \sum_{i=1}^n x_i \right) = 0\]

\[ \hat{\theta}_0 = \bar{y} - \hat{\theta}_1 \bar{x}\]

With a bit more maneuvering, the second equation gives the estimate of
\(\hat{\theta}_1\). Start by multiplying the first estimating equation
by \(\bar{x}\), then subtracting the result from the second estimating
equation.

\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)x_i - \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)\bar{x} = 0 \]

\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)(x_i - \bar{x}) = 0 \]

Next, plug in
\(\hat{y}_i = \hat{\theta}_0 + \hat{\theta}_1 x_i = \bar{y} + \hat{\theta}_1(x_i - \bar{x})\):

\[\frac{1}{n} \sum_{i=1}^n (y_i - \bar{y} - \hat{\theta}_1(x_i - \bar{x}))(x_i - \bar{x}) = 0 \]

\[\frac{1}{n} \sum_{i=1}^n (y_i - \bar{y})(x_i - \bar{x}) = \hat{\theta}_1 \times \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2\]

By using the definition of correlation
\(\left(r = \frac{1}{n} \sum_{i=1}^n (\frac{x_i-\bar{x}}{\sigma_x})(\frac{y_i-\bar{y}}{\sigma_y}) \right)\)
and standard deviation
\(\left(\sigma_x = \sqrt{\frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2} \right)\),
we can conclude:
\[r \sigma_x \sigma_y = \hat{\theta}_1 \times \sigma_x^2\]
\[\hat{\theta}_1 = r \frac{\sigma_y}{\sigma_x}\]

Just as was given in Data 8!

Remember, this derivation found the optimal model parameters for SLR
when using the MSE cost function. If we had used a different model or
different loss function, we likely would have found different values for
the best model parameters. However, regardless of the model and loss
used, we can \emph{always} follow these three steps to fit the model.
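
To make the derivation concrete, here is a minimal sketch (not part of
the lecture code) that computes \(\hat{\theta}_0\) and
\(\hat{\theta}_1\) from the formulas above on a small synthetic dataset
and cross-checks them against \texttt{np.polyfit}; the data and variable
names are illustrative assumptions only.

\begin{verbatim}
import numpy as np

# Synthetic data for illustration; any (x, y) sample would work
rng = np.random.default_rng(42)
x = np.linspace(-3, 3, 100)
y = 0.5 * x - 1 + rng.normal(scale=0.3, size=100)

# Closed-form SLR estimates derived above
r = np.corrcoef(x, y)[0, 1]                        # correlation coefficient
theta1_hat = r * np.std(y) / np.std(x)             # slope: r * sigma_y / sigma_x
theta0_hat = np.mean(y) - theta1_hat * np.mean(x)  # intercept: y_bar - slope * x_bar

# Cross-check against numpy's least-squares line fit
slope_np, intercept_np = np.polyfit(x, y, deg=1)
print(theta0_hat, theta1_hat)
print(intercept_np, slope_np)
assert np.isclose(theta0_hat, intercept_np) and np.isclose(theta1_hat, slope_np)
\end{verbatim}

Both routes land on the same fitted line, which is exactly what the
estimating equations guarantee for the MSE cost.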

\bookmarksetup{startatroot}

\chapter{Constant Model, Loss, and
Transformations}\label{constant-model-loss-and-transformations}

\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]

\begin{itemize}
\tightlist
\item
  Derive the optimal model parameters for the constant model under MSE
  and MAE cost functions.
\item
  Evaluate the differences between MSE and MAE risk.
\item
  Understand the need for linearization of variables and apply the
  Tukey-Mosteller bulge diagram for transformations.
\end{itemize}

\end{tcolorbox}

Last time, we introduced the modeling process. We set up a framework to
predict target variables as functions of our features, following a set
workflow:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Choose a model - how should we represent the world?
\item
  Choose a loss function - how do we quantify prediction error?
\item
  Fit the model - how do we choose the best parameters of our model
  given our data?
\item
  Evaluate model performance - how do we evaluate whether this process
  gave rise to a good model?
\end{enumerate}

To illustrate this process, we derived the optimal model parameters
under simple linear regression (SLR) with mean squared error (MSE) as
the cost function. A summary of the SLR modeling process is shown below:

In this lecture, we'll dive deeper into step 4 - evaluating model
performance - using SLR as an example. We'll then continue practicing
the modeling process by finding the best model parameters under a new
model, the constant model, and we'll test out two different loss
functions to understand how our choice of loss influences model design.
Later on, we'll consider what happens when a linear model isn't the best
choice to capture trends in our data and what solutions there are to
create better models.

Before we get into Step 4, let's quickly review some important
terminology.

\subsection{Prediction vs.~Estimation}\label{prediction-vs.-estimation}

The terms prediction and estimation are often used somewhat
interchangeably, but there is a subtle difference between them.
\textbf{Estimation} is the task of using data to calculate model
parameters. \textbf{Prediction} is the task of using a model to predict
outputs for unseen data. In our simple linear regression model,

\[\hat{y} = \hat{\theta_0} + \hat{\theta_1}x\]

we \textbf{estimate} the parameters by minimizing average loss; then, we
\textbf{predict} using these estimates. \textbf{Least Squares
Estimation} is when we choose the parameters that minimize MSE.

\section{Step 4: Evaluating the SLR
Model}\label{step-4-evaluating-the-slr-model}

Now that we've explored the mathematics behind (1) choosing a model, (2)
choosing a loss function, and (3) fitting the model, we're left with one
final question -- how ``good'' are the predictions made by this ``best''
fitted model?
To determine this, we can: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Visualize data and compute statistics: + + \begin{itemize} + \tightlist + \item + Plot the original data. + \item + Compute each column's mean and standard deviation. If the mean and + standard deviation of our predictions are close to those of the + original observed \(y_i\)'s, we might be inclined to say that our + model has done well. + \item + (If we're fitting a linear model) Compute the correlation \(r\). A + large magnitude for the correlation coefficient between the feature + and response variables could also indicate that our model has done + well. + \end{itemize} +\item + Performance metrics: + + \begin{itemize} + \tightlist + \item + We can take the \textbf{Root Mean Squared Error (RMSE)}. + + \begin{itemize} + \tightlist + \item + It's the square root of the mean squared error (MSE), which is the + average loss that we've been minimizing to determine optimal model + parameters. + \item + RMSE is in the same units as \(y\). + \item + A lower RMSE indicates more ``accurate'' predictions, as we have a + lower ``average loss'' across the data. + \end{itemize} + \end{itemize} + + \[\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2}\] +\item + Visualization: + + \begin{itemize} + \tightlist + \item + Look at the residual plot of \(e_i = y_i - \hat{y_i}\) to visualize + the difference between actual and predicted values. The good + residual plot should not show any pattern between input/features + \(x_i\) and residual values \(e_i\). + \end{itemize} +\end{enumerate} + +To illustrate this process, let's take a look at \textbf{Anscombe's +quartet}. + +\subsection{Four Mysterious Datasets (Anscombe's +quartet)}\label{four-mysterious-datasets-anscombes-quartet} + +Let's take a look at four different datasets. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} +\OperatorTok{\%}\NormalTok{matplotlib inline} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ itertools} +\ImportTok{from}\NormalTok{ mpl\_toolkits.mplot3d }\ImportTok{import}\NormalTok{ Axes3D} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Big font helper} +\KeywordTok{def}\NormalTok{ adjust\_fontsize(size}\OperatorTok{=}\VariableTok{None}\NormalTok{):} +\NormalTok{ SMALL\_SIZE }\OperatorTok{=} \DecValTok{8} +\NormalTok{ MEDIUM\_SIZE }\OperatorTok{=} \DecValTok{10} +\NormalTok{ BIGGER\_SIZE }\OperatorTok{=} \DecValTok{12} + \ControlFlowTok{if}\NormalTok{ size }\OperatorTok{!=} \VariableTok{None}\NormalTok{:} +\NormalTok{ SMALL\_SIZE }\OperatorTok{=}\NormalTok{ MEDIUM\_SIZE }\OperatorTok{=}\NormalTok{ BIGGER\_SIZE }\OperatorTok{=}\NormalTok{ size} + +\NormalTok{ plt.rc(}\StringTok{"font"}\NormalTok{, size}\OperatorTok{=}\NormalTok{SMALL\_SIZE) }\CommentTok{\# controls default text sizes} +\NormalTok{ plt.rc(}\StringTok{"axes"}\NormalTok{, titlesize}\OperatorTok{=}\NormalTok{SMALL\_SIZE) }\CommentTok{\# fontsize of the axes title} +\NormalTok{ plt.rc(}\StringTok{"axes"}\NormalTok{, labelsize}\OperatorTok{=}\NormalTok{MEDIUM\_SIZE) }\CommentTok{\# fontsize of the x and y labels} +\NormalTok{ plt.rc(}\StringTok{"xtick"}\NormalTok{, labelsize}\OperatorTok{=}\NormalTok{SMALL\_SIZE) }\CommentTok{\# fontsize of the tick labels} +\NormalTok{ plt.rc(}\StringTok{"ytick"}\NormalTok{, labelsize}\OperatorTok{=}\NormalTok{SMALL\_SIZE) }\CommentTok{\# fontsize of the tick labels} +\NormalTok{ plt.rc(}\StringTok{"legend"}\NormalTok{, fontsize}\OperatorTok{=}\NormalTok{SMALL\_SIZE) }\CommentTok{\# legend fontsize} +\NormalTok{ plt.rc(}\StringTok{"figure"}\NormalTok{, titlesize}\OperatorTok{=}\NormalTok{BIGGER\_SIZE) }\CommentTok{\# fontsize of the figure title} + + +\CommentTok{\# Helper functions} +\KeywordTok{def}\NormalTok{ standard\_units(x):} + \ControlFlowTok{return}\NormalTok{ (x }\OperatorTok{{-}}\NormalTok{ np.mean(x)) }\OperatorTok{/}\NormalTok{ np.std(x)} + + +\KeywordTok{def}\NormalTok{ correlation(x, y):} + \ControlFlowTok{return}\NormalTok{ np.mean(standard\_units(x) }\OperatorTok{*}\NormalTok{ standard\_units(y))} + + +\KeywordTok{def}\NormalTok{ slope(x, y):} + \ControlFlowTok{return}\NormalTok{ correlation(x, y) }\OperatorTok{*}\NormalTok{ np.std(y) }\OperatorTok{/}\NormalTok{ np.std(x)} + + +\KeywordTok{def}\NormalTok{ intercept(x, y):} + \ControlFlowTok{return}\NormalTok{ np.mean(y) }\OperatorTok{{-}}\NormalTok{ slope(x, y) }\OperatorTok{*}\NormalTok{ np.mean(x)} + + +\KeywordTok{def}\NormalTok{ fit\_least\_squares(x, y):} +\NormalTok{ theta\_0 }\OperatorTok{=}\NormalTok{ intercept(x, y)} +\NormalTok{ theta\_1 }\OperatorTok{=}\NormalTok{ slope(x, y)} + \ControlFlowTok{return}\NormalTok{ theta\_0, theta\_1} + + +\KeywordTok{def}\NormalTok{ predict(x, theta\_0, theta\_1):} + \ControlFlowTok{return}\NormalTok{ theta\_0 }\OperatorTok{+}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ x} + + +\KeywordTok{def}\NormalTok{ compute\_mse(y, yhat):} + \ControlFlowTok{return}\NormalTok{ np.mean((y }\OperatorTok{{-}}\NormalTok{ yhat) }\OperatorTok{**} \DecValTok{2}\NormalTok{)} + + +\NormalTok{plt.style.use(}\StringTok{"default"}\NormalTok{) 
}\CommentTok{\# Revert style to default mpl} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{plt.style.use(}\StringTok{"default"}\NormalTok{) }\CommentTok{\# Revert style to default mpl} +\NormalTok{NO\_VIZ, RESID, RESID\_SCATTER }\OperatorTok{=} \BuiltInTok{range}\NormalTok{(}\DecValTok{3}\NormalTok{)} + + +\KeywordTok{def}\NormalTok{ least\_squares\_evaluation(x, y, visualize}\OperatorTok{=}\NormalTok{NO\_VIZ):} + \CommentTok{\# statistics} + \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"x\_mean : }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{mean(x)}\SpecialCharTok{:.2f\}}\SpecialStringTok{, y\_mean : }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{mean(y)}\SpecialCharTok{:.2f\}}\SpecialStringTok{"}\NormalTok{)} + \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"x\_stdev: }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{std(x)}\SpecialCharTok{:.2f\}}\SpecialStringTok{, y\_stdev: }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{std(y)}\SpecialCharTok{:.2f\}}\SpecialStringTok{"}\NormalTok{)} + \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"r = Correlation(x, y): }\SpecialCharTok{\{}\NormalTok{correlation(x, y)}\SpecialCharTok{:.3f\}}\SpecialStringTok{"}\NormalTok{)} + + \CommentTok{\# Performance metrics} +\NormalTok{ ahat, bhat }\OperatorTok{=}\NormalTok{ fit\_least\_squares(x, y)} +\NormalTok{ yhat }\OperatorTok{=}\NormalTok{ predict(x, ahat, bhat)} + \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"}\CharTok{\textbackslash{}t}\SpecialStringTok{heta\_0: }\SpecialCharTok{\{}\NormalTok{ahat}\SpecialCharTok{:.2f\}}\SpecialStringTok{, }\CharTok{\textbackslash{}t}\SpecialStringTok{heta\_1: }\SpecialCharTok{\{}\NormalTok{bhat}\SpecialCharTok{:.2f\}}\SpecialStringTok{"}\NormalTok{)} + \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"RMSE: }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{sqrt(compute\_mse(y, yhat))}\SpecialCharTok{:.3f\}}\SpecialStringTok{"}\NormalTok{)} + + \CommentTok{\# visualization} +\NormalTok{ fig, ax\_resid }\OperatorTok{=} \VariableTok{None}\NormalTok{, }\VariableTok{None} + \ControlFlowTok{if}\NormalTok{ visualize }\OperatorTok{==}\NormalTok{ RESID\_SCATTER:} +\NormalTok{ fig, axs }\OperatorTok{=}\NormalTok{ plt.subplots(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{8}\NormalTok{, }\DecValTok{3}\NormalTok{))} +\NormalTok{ axs[}\DecValTok{0}\NormalTok{].scatter(x, y)} +\NormalTok{ axs[}\DecValTok{0}\NormalTok{].plot(x, yhat)} +\NormalTok{ axs[}\DecValTok{0}\NormalTok{].set\_title(}\StringTok{"LS fit"}\NormalTok{)} +\NormalTok{ ax\_resid }\OperatorTok{=}\NormalTok{ axs[}\DecValTok{1}\NormalTok{]} + \ControlFlowTok{elif}\NormalTok{ visualize }\OperatorTok{==}\NormalTok{ RESID:} +\NormalTok{ fig }\OperatorTok{=}\NormalTok{ plt.figure(figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{3}\NormalTok{))} +\NormalTok{ ax\_resid }\OperatorTok{=}\NormalTok{ plt.gca()} + + \ControlFlowTok{if}\NormalTok{ ax\_resid }\KeywordTok{is} \KeywordTok{not} \VariableTok{None}\NormalTok{:} +\NormalTok{ ax\_resid.scatter(x, y }\OperatorTok{{-}}\NormalTok{ yhat, color}\OperatorTok{=}\StringTok{"red"}\NormalTok{)} +\NormalTok{ ax\_resid.plot([}\DecValTok{4}\NormalTok{, }\DecValTok{14}\NormalTok{], [}\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{], color}\OperatorTok{=}\StringTok{"black"}\NormalTok{)} +\NormalTok{ ax\_resid.set\_title(}\StringTok{"Residuals"}\NormalTok{)} + + 
\ControlFlowTok{return}\NormalTok{ fig} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Load in four different datasets: I, II, III, IV} +\NormalTok{x }\OperatorTok{=}\NormalTok{ [}\DecValTok{10}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{13}\NormalTok{, }\DecValTok{9}\NormalTok{, }\DecValTok{11}\NormalTok{, }\DecValTok{14}\NormalTok{, }\DecValTok{6}\NormalTok{, }\DecValTok{4}\NormalTok{, }\DecValTok{12}\NormalTok{, }\DecValTok{7}\NormalTok{, }\DecValTok{5}\NormalTok{]} +\NormalTok{y1 }\OperatorTok{=}\NormalTok{ [}\FloatTok{8.04}\NormalTok{, }\FloatTok{6.95}\NormalTok{, }\FloatTok{7.58}\NormalTok{, }\FloatTok{8.81}\NormalTok{, }\FloatTok{8.33}\NormalTok{, }\FloatTok{9.96}\NormalTok{, }\FloatTok{7.24}\NormalTok{, }\FloatTok{4.26}\NormalTok{, }\FloatTok{10.84}\NormalTok{, }\FloatTok{4.82}\NormalTok{, }\FloatTok{5.68}\NormalTok{]} +\NormalTok{y2 }\OperatorTok{=}\NormalTok{ [}\FloatTok{9.14}\NormalTok{, }\FloatTok{8.14}\NormalTok{, }\FloatTok{8.74}\NormalTok{, }\FloatTok{8.77}\NormalTok{, }\FloatTok{9.26}\NormalTok{, }\FloatTok{8.10}\NormalTok{, }\FloatTok{6.13}\NormalTok{, }\FloatTok{3.10}\NormalTok{, }\FloatTok{9.13}\NormalTok{, }\FloatTok{7.26}\NormalTok{, }\FloatTok{4.74}\NormalTok{]} +\NormalTok{y3 }\OperatorTok{=}\NormalTok{ [}\FloatTok{7.46}\NormalTok{, }\FloatTok{6.77}\NormalTok{, }\FloatTok{12.74}\NormalTok{, }\FloatTok{7.11}\NormalTok{, }\FloatTok{7.81}\NormalTok{, }\FloatTok{8.84}\NormalTok{, }\FloatTok{6.08}\NormalTok{, }\FloatTok{5.39}\NormalTok{, }\FloatTok{8.15}\NormalTok{, }\FloatTok{6.42}\NormalTok{, }\FloatTok{5.73}\NormalTok{]} +\NormalTok{x4 }\OperatorTok{=}\NormalTok{ [}\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{19}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{, }\DecValTok{8}\NormalTok{]} +\NormalTok{y4 }\OperatorTok{=}\NormalTok{ [}\FloatTok{6.58}\NormalTok{, }\FloatTok{5.76}\NormalTok{, }\FloatTok{7.71}\NormalTok{, }\FloatTok{8.84}\NormalTok{, }\FloatTok{8.47}\NormalTok{, }\FloatTok{7.04}\NormalTok{, }\FloatTok{5.25}\NormalTok{, }\FloatTok{12.50}\NormalTok{, }\FloatTok{5.56}\NormalTok{, }\FloatTok{7.91}\NormalTok{, }\FloatTok{6.89}\NormalTok{]} + +\NormalTok{anscombe }\OperatorTok{=}\NormalTok{ \{} + \StringTok{"I"}\NormalTok{: pd.DataFrame(}\BuiltInTok{list}\NormalTok{(}\BuiltInTok{zip}\NormalTok{(x, y1)), columns}\OperatorTok{=}\NormalTok{[}\StringTok{"x"}\NormalTok{, }\StringTok{"y"}\NormalTok{]),} + \StringTok{"II"}\NormalTok{: pd.DataFrame(}\BuiltInTok{list}\NormalTok{(}\BuiltInTok{zip}\NormalTok{(x, y2)), columns}\OperatorTok{=}\NormalTok{[}\StringTok{"x"}\NormalTok{, }\StringTok{"y"}\NormalTok{]),} + \StringTok{"III"}\NormalTok{: pd.DataFrame(}\BuiltInTok{list}\NormalTok{(}\BuiltInTok{zip}\NormalTok{(x, y3)), columns}\OperatorTok{=}\NormalTok{[}\StringTok{"x"}\NormalTok{, }\StringTok{"y"}\NormalTok{]),} + \StringTok{"IV"}\NormalTok{: pd.DataFrame(}\BuiltInTok{list}\NormalTok{(}\BuiltInTok{zip}\NormalTok{(x4, y4)), columns}\OperatorTok{=}\NormalTok{[}\StringTok{"x"}\NormalTok{, }\StringTok{"y"}\NormalTok{]),} +\NormalTok{\}} + +\CommentTok{\# Plot the scatter plot and line of best fit} +\NormalTok{fig, axs }\OperatorTok{=}\NormalTok{ plt.subplots(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{10}\NormalTok{))} + +\ControlFlowTok{for}\NormalTok{ i, dataset 
}\KeywordTok{in} \BuiltInTok{enumerate}\NormalTok{([}\StringTok{"I"}\NormalTok{, }\StringTok{"II"}\NormalTok{, }\StringTok{"III"}\NormalTok{, }\StringTok{"IV"}\NormalTok{]):} +\NormalTok{ ans }\OperatorTok{=}\NormalTok{ anscombe[dataset]} +\NormalTok{ x, y }\OperatorTok{=}\NormalTok{ ans[}\StringTok{"x"}\NormalTok{], ans[}\StringTok{"y"}\NormalTok{]} +\NormalTok{ ahat, bhat }\OperatorTok{=}\NormalTok{ fit\_least\_squares(x, y)} +\NormalTok{ yhat }\OperatorTok{=}\NormalTok{ predict(x, ahat, bhat)} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].scatter(x, y, alpha}\OperatorTok{=}\FloatTok{0.6}\NormalTok{, color}\OperatorTok{=}\StringTok{"red"}\NormalTok{) }\CommentTok{\# plot the x, y points} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].plot(x, yhat) }\CommentTok{\# plot the line of best fit} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].set\_xlabel(}\SpecialStringTok{f"$x\_}\SpecialCharTok{\{}\NormalTok{i}\OperatorTok{+}\DecValTok{1}\SpecialCharTok{\}}\SpecialStringTok{$"}\NormalTok{)} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].set\_ylabel(}\SpecialStringTok{f"$y\_}\SpecialCharTok{\{}\NormalTok{i}\OperatorTok{+}\DecValTok{1}\SpecialCharTok{\}}\SpecialStringTok{$"}\NormalTok{)} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].set\_title(}\SpecialStringTok{f"Dataset }\SpecialCharTok{\{}\NormalTok{dataset}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} + +\NormalTok{plt.show()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-5-output-1.pdf} + +While these four sets of datapoints look very different, they actually +all have identical means \(\bar x\), \(\bar y\), standard deviations +\(\sigma_x\), \(\sigma_y\), correlation \(r\), and RMSE! If we only look +at these statistics, we would probably be inclined to say that these +datasets are similar. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ControlFlowTok{for}\NormalTok{ dataset }\KeywordTok{in}\NormalTok{ [}\StringTok{"I"}\NormalTok{, }\StringTok{"II"}\NormalTok{, }\StringTok{"III"}\NormalTok{, }\StringTok{"IV"}\NormalTok{]:} + \BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"\textgreater{}\textgreater{}\textgreater{} Dataset }\SpecialCharTok{\{}\NormalTok{dataset}\SpecialCharTok{\}}\SpecialStringTok{:"}\NormalTok{)} +\NormalTok{ ans }\OperatorTok{=}\NormalTok{ anscombe[dataset]} +\NormalTok{ fig }\OperatorTok{=}\NormalTok{ least\_squares\_evaluation(ans[}\StringTok{"x"}\NormalTok{], ans[}\StringTok{"y"}\NormalTok{], visualize}\OperatorTok{=}\NormalTok{NO\_VIZ)} + \BuiltInTok{print}\NormalTok{()} + \BuiltInTok{print}\NormalTok{()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +>>> Dataset I: +x_mean : 9.00, y_mean : 7.50 +x_stdev: 3.16, y_stdev: 1.94 +r = Correlation(x, y): 0.816 + heta_0: 3.00, heta_1: 0.50 +RMSE: 1.119 + + +>>> Dataset II: +x_mean : 9.00, y_mean : 7.50 +x_stdev: 3.16, y_stdev: 1.94 +r = Correlation(x, y): 0.816 + heta_0: 3.00, heta_1: 0.50 +RMSE: 1.119 + + +>>> Dataset III: +x_mean : 9.00, y_mean : 7.50 +x_stdev: 3.16, y_stdev: 1.94 +r = Correlation(x, y): 0.816 + heta_0: 3.00, heta_1: 0.50 +RMSE: 1.118 + + +>>> Dataset IV: +x_mean : 9.00, y_mean : 7.50 +x_stdev: 3.16, y_stdev: 1.94 +r = Correlation(x, y): 0.817 + heta_0: 3.00, heta_1: 0.50 +RMSE: 1.118 + +\end{verbatim} + +We may also wish to visualize the model's \textbf{residuals}, defined as +the difference between the observed and predicted \(y_i\) value +(\(e_i = y_i - \hat{y}_i\)). This gives a high-level view of how ``off'' +each prediction is from the true observed value. Recall that you +explored this concept in +\href{https://inferentialthinking.com/chapters/15/5/Visual_Diagnostics.html?highlight=heteroscedasticity\#detecting-heteroscedasticity}{Data +8}: a good regression fit should display no clear pattern in its plot of +residuals. The residual plots for Anscombe's quartet are displayed +below. Note how only the first plot shows no clear pattern to the +magnitude of residuals. This is an indication that SLR is not the best +choice of model for the remaining three sets of points. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Residual visualization} +\NormalTok{fig, axs }\OperatorTok{=}\NormalTok{ plt.subplots(}\DecValTok{2}\NormalTok{, }\DecValTok{2}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{10}\NormalTok{, }\DecValTok{10}\NormalTok{))} + +\ControlFlowTok{for}\NormalTok{ i, dataset }\KeywordTok{in} \BuiltInTok{enumerate}\NormalTok{([}\StringTok{"I"}\NormalTok{, }\StringTok{"II"}\NormalTok{, }\StringTok{"III"}\NormalTok{, }\StringTok{"IV"}\NormalTok{]):} +\NormalTok{ ans }\OperatorTok{=}\NormalTok{ anscombe[dataset]} +\NormalTok{ x, y }\OperatorTok{=}\NormalTok{ ans[}\StringTok{"x"}\NormalTok{], ans[}\StringTok{"y"}\NormalTok{]} +\NormalTok{ ahat, bhat }\OperatorTok{=}\NormalTok{ fit\_least\_squares(x, y)} +\NormalTok{ yhat }\OperatorTok{=}\NormalTok{ predict(x, ahat, bhat)} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].scatter(} +\NormalTok{ x, y }\OperatorTok{{-}}\NormalTok{ yhat, alpha}\OperatorTok{=}\FloatTok{0.6}\NormalTok{, color}\OperatorTok{=}\StringTok{"red"} +\NormalTok{ ) }\CommentTok{\# plot the x, y points} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].plot(} +\NormalTok{ x, np.zeros\_like(x), color}\OperatorTok{=}\StringTok{"black"} +\NormalTok{ ) }\CommentTok{\# plot the residual line} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].set\_xlabel(}\SpecialStringTok{f"$x\_}\SpecialCharTok{\{}\NormalTok{i}\OperatorTok{+}\DecValTok{1}\SpecialCharTok{\}}\SpecialStringTok{$"}\NormalTok{)} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].set\_ylabel(}\SpecialStringTok{f"$e\_}\SpecialCharTok{\{}\NormalTok{i}\OperatorTok{+}\DecValTok{1}\SpecialCharTok{\}}\SpecialStringTok{$"}\NormalTok{)} +\NormalTok{ axs[i }\OperatorTok{//} \DecValTok{2}\NormalTok{, i }\OperatorTok{\%} \DecValTok{2}\NormalTok{].set\_title(}\SpecialStringTok{f"Dataset }\SpecialCharTok{\{}\NormalTok{dataset}\SpecialCharTok{\}}\SpecialStringTok{ Residuals"}\NormalTok{)} + +\NormalTok{plt.show()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-1.pdf} + +\section{Constant Model + MSE}\label{constant-model-mse} + +Now, we'll shift from the SLR model to the \textbf{constant model}, also +known as a summary statistic. The constant model is slightly different +from the simple linear regression model we've explored previously. +Rather than generating predictions from an inputted feature variable, +the constant model always \emph{predicts the same constant number}. This +ignores any relationships between variables. For example, let's say we +want to predict the number of drinks a boba shop sells in a day. Boba +tea sales likely depend on the time of year, the weather, how the +customers feel, whether school is in session, etc., but the constant +model ignores these factors in favor of a simpler model. In other words, +the constant model employs a \textbf{simplifying assumption}. + +It is also a parametric, statistical model: + +\[\hat{y} = \theta_0\] + +\(\theta_0\) is the parameter of the constant model, just as +\(\theta_0\) and \(\theta_1\) were the parameters in SLR. Since our +parameter \(\theta_0\) is 1-dimensional (\(\theta_0 \in \mathbb{R}\)), +we now have no input to our model and will always predict +\(\hat{y} = \theta_0\). 
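
As a quick illustration (a hedged sketch, not taken from the lecture
code), the constant model ignores any features and returns the same
number for every prediction, so we can already compute its MSE for any
candidate \(\theta_0\) on a toy sample; the numbers below are made up
for demonstration.

\begin{verbatim}
import numpy as np

# Toy sample of observed y values (illustrative only)
y = np.array([20.0, 21.0, 22.0, 29.0, 33.0])

def constant_model_predictions(theta0, n):
    """The constant model predicts the same value theta0 for all n points."""
    return np.full(n, theta0)

def mse(y, y_hat):
    """Mean squared error between observations and predictions."""
    return np.mean((y - y_hat) ** 2)

# MSE for a few candidate values of theta0
for theta0 in [20.0, 25.0, np.mean(y), 30.0]:
    y_hat = constant_model_predictions(theta0, len(y))
    print(f"theta0 = {theta0:.1f} -> MSE = {mse(y, y_hat):.2f}")
\end{verbatim}

The next subsection derives analytically which choice of \(\theta_0\)
minimizes this quantity.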
+ 

\subsection{\texorpdfstring{Deriving the optimal
\(\theta_0\)}{Deriving the optimal \textbackslash theta\_0}}\label{deriving-the-optimal-theta_0}

Our task now is to determine what value of \(\theta_0\) best represents
the optimal model -- in other words, what number should we guess each
time to have the lowest possible \textbf{average loss} on our data?

Like before, we'll use Mean Squared Error (MSE). Recall that the MSE is
average squared loss (L2 loss) over the data
\(D = \{y_1, y_2, ..., y_n\}\).

\[\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \hat{y_i})^2 \]

Our modeling process now looks like this:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Choose a model: constant model
\item
  Choose a loss function: L2 loss
\item
  Fit the model
\item
  Evaluate model performance
\end{enumerate}

Given the \textbf{constant model} \(\hat{y} = \theta_0\), we can rewrite
the MSE equation as

\[\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2 \]

We can fit \textbf{the model} by finding the optimal \(\hat{\theta_0}\)
that minimizes the MSE using a calculus approach.

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Differentiate with respect to \(\theta_0\):
\end{enumerate}

\[
\begin{align}
\frac{d}{d\theta_0}\hat{R}(\theta) & = \frac{d}{d\theta_0}\left(\frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2\right)
\\ &= \frac{1}{n}\sum^{n}_{i=1} \frac{d}{d\theta_0} (y_i - \theta_0)^2 \quad \quad \text{a derivative of sums is a sum of derivatives}
\\ &= \frac{1}{n}\sum^{n}_{i=1} 2 (y_i - \theta_0) (-1) \quad \quad \text{chain rule}
\\ &= {\frac{-2}{n}}\sum^{n}_{i=1} (y_i - \theta_0) \quad \quad \text{simplify constants}
\end{align}
\]

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\setcounter{enumi}{1}
\item
  Set the derivative equation equal to 0:

  \[
  0 = {\frac{-2}{n}}\sum^{n}_{i=1} (y_i - \hat{\theta_0})
  \]
\item
  Solve for \(\hat{\theta_0}\)
\end{enumerate}

\[
\begin{align}
0 &= {\frac{-2}{n}}\sum^{n}_{i=1} (y_i - \hat{\theta_0})
\\ &= \sum^{n}_{i=1} (y_i - \hat{\theta_0}) \quad \quad \text{divide both sides by } \frac{-2}{n}
\\ &= \left(\sum^{n}_{i=1} y_i\right) - \left(\sum^{n}_{i=1} \hat{\theta_0}\right) \quad \quad \text{separate sums}
\\ &= \left(\sum^{n}_{i=1} y_i\right) - (n \cdot \hat{\theta_0}) \quad \quad \text{c + c + … + c = nc}
\\ n \cdot \hat{\theta_0} &= \sum^{n}_{i=1} y_i
\\ \hat{\theta_0} &= \frac{1}{n} \sum^{n}_{i=1} y_i
\\ \hat{\theta_0} &= \bar{y}
\end{align}
\]

Let's take a moment to interpret this result.
\(\hat{\theta_0} = \bar{y}\) is the optimal parameter for constant model
+ MSE. It holds true regardless of what data sample you have, and it
provides some formal reasoning as to why the mean is such a common
summary statistic.

Our optimal model parameter is the value of the parameter that minimizes
the cost function. This minimum value of the cost function can be
expressed as:

\[R(\hat{\theta_0}) = \min_{\theta_0} R(\theta_0)\]

To restate the above in plain English: we are looking at the value of
the cost function when it takes the best parameter as input. This
optimal model parameter, \(\hat{\theta_0}\), is the value of
\(\theta_0\) that minimizes the cost \(R\).

For modeling purposes, we care less about the minimum value of cost,
\(R(\hat{\theta_0})\), and more about the \emph{value of \(\theta\)}
that results in this lowest average loss. 
In other words, we concern
ourselves with finding the best parameter value such that:

\[\hat{\theta} = \underset{\theta}{\operatorname{\arg\min}}\:R(\theta)\]

That is, we want to find the \textbf{arg}ument \(\theta\) that
\textbf{min}imizes the cost function.

\subsection{Comparing Two Different Models, Both Fit with
MSE}\label{comparing-two-different-models-both-fit-with-mse}

Now that we've explored the constant model with an L2 loss, we can
compare it to the SLR model that we learned last lecture. Consider the
dataset below, which contains information about the ages and lengths of
dugongs. Suppose we want to predict dugong ages:

\begin{longtable}[]{@{}
  >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.1333}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3879}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.4788}}@{}}
\toprule\noalign{}
\begin{minipage}[b]{\linewidth}\raggedright
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Constant Model
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
Simple Linear Regression
\end{minipage} \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
model & \(\hat{y} = \theta_0\) & \(\hat{y} = \theta_0 + \theta_1 x\) \\
data & sample of ages \(D = \{y_1, y_2, ..., y_n\}\) & sample of lengths and ages
\(D = \{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\}\) \\
dimensions & \(\hat{\theta_0}\) is 1-D &
\(\hat{\theta} = [\hat{\theta_0}, \hat{\theta_1}]\) is 2-D \\
loss surface & 2-D
\includegraphics{constant_model_loss_transformations/images/constant_loss_surface.png}
& 3-D
\includegraphics{constant_model_loss_transformations/images/slr_loss_surface.png} \\
average loss &
\(\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2\) &
\(\hat{R}(\theta_0, \theta_1) = \frac{1}{n}\sum^{n}_{i=1} (y_i - (\theta_0 + \theta_1 x_i))^2\) \\
RMSE & 7.72 & 4.31 \\
predictions visualized & rug plot
\includegraphics{constant_model_loss_transformations/images/dugong_rug.png}
& scatter plot
\includegraphics{constant_model_loss_transformations/images/dugong_scatter.png} \\
\end{longtable}

(Notice that the points in our SLR scatter plot do not follow a tight
linear trend -- the fit is visibly poor. We'll come back to this.)

The code for generating the graphs and models is included below, but we
won't go over it in too much depth. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{dugongs }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/dugongs.csv"}\NormalTok{)} +\NormalTok{data\_constant }\OperatorTok{=}\NormalTok{ dugongs[}\StringTok{"Age"}\NormalTok{]} +\NormalTok{data\_linear }\OperatorTok{=}\NormalTok{ dugongs[[}\StringTok{"Length"}\NormalTok{, }\StringTok{"Age"}\NormalTok{]]} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Constant Model + MSE} +\NormalTok{plt.style.use(}\StringTok{\textquotesingle{}default\textquotesingle{}}\NormalTok{) }\CommentTok{\# Revert style to default mpl} +\NormalTok{adjust\_fontsize(size}\OperatorTok{=}\DecValTok{16}\NormalTok{)} +\OperatorTok{\%}\NormalTok{matplotlib inline} + +\KeywordTok{def}\NormalTok{ mse\_constant(theta, data):} + \ControlFlowTok{return}\NormalTok{ np.mean(np.array([(y\_obs }\OperatorTok{{-}}\NormalTok{ theta) }\OperatorTok{**} \DecValTok{2} \ControlFlowTok{for}\NormalTok{ y\_obs }\KeywordTok{in}\NormalTok{ data]), axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)} + +\NormalTok{thetas }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{20}\NormalTok{, }\DecValTok{42}\NormalTok{, }\DecValTok{1000}\NormalTok{)} +\NormalTok{l2\_loss\_thetas }\OperatorTok{=}\NormalTok{ mse\_constant(thetas, data\_constant)} + +\CommentTok{\# Plotting the loss surface} +\NormalTok{plt.plot(thetas, l2\_loss\_thetas)} +\NormalTok{plt.xlabel(}\VerbatimStringTok{r\textquotesingle{}$\textbackslash{}theta\_0$\textquotesingle{}}\NormalTok{)} +\NormalTok{plt.ylabel(}\VerbatimStringTok{r\textquotesingle{}MSE\textquotesingle{}}\NormalTok{)} + +\CommentTok{\# Optimal point} +\NormalTok{thetahat }\OperatorTok{=}\NormalTok{ np.mean(data\_constant)} +\NormalTok{plt.scatter([thetahat], [mse\_constant(thetahat, data\_constant)], s}\OperatorTok{=}\DecValTok{50}\NormalTok{, label }\OperatorTok{=} \VerbatimStringTok{r"$\textbackslash{}hat\{\textbackslash{}theta\}\_0$"}\NormalTok{)} +\NormalTok{plt.legend()}\OperatorTok{;} +\CommentTok{\# plt.show()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-9-output-1.pdf} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# SLR + MSE} +\KeywordTok{def}\NormalTok{ mse\_linear(theta\_0, theta\_1, data\_linear):} +\NormalTok{ data\_x, data\_y }\OperatorTok{=}\NormalTok{ data\_linear.iloc[:, }\DecValTok{0}\NormalTok{], data\_linear.iloc[:, }\DecValTok{1}\NormalTok{]} + \ControlFlowTok{return}\NormalTok{ np.mean(} +\NormalTok{ np.array([(y }\OperatorTok{{-}}\NormalTok{ (theta\_0 }\OperatorTok{+}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ x)) }\OperatorTok{**} \DecValTok{2} \ControlFlowTok{for}\NormalTok{ x, y }\KeywordTok{in} \BuiltInTok{zip}\NormalTok{(data\_x, data\_y)]),} +\NormalTok{ axis}\OperatorTok{=}\DecValTok{0}\NormalTok{,} +\NormalTok{ )} + + +\CommentTok{\# plotting the loss surface} +\NormalTok{theta\_0\_values }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{80}\NormalTok{, }\DecValTok{20}\NormalTok{, }\DecValTok{80}\NormalTok{)} +\NormalTok{theta\_1\_values }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{10}\NormalTok{, }\DecValTok{30}\NormalTok{, }\DecValTok{80}\NormalTok{)} +\NormalTok{mse\_values }\OperatorTok{=}\NormalTok{ np.array(} +\NormalTok{ [[mse\_linear(x, y, data\_linear) }\ControlFlowTok{for}\NormalTok{ x }\KeywordTok{in}\NormalTok{ theta\_0\_values] }\ControlFlowTok{for}\NormalTok{ y }\KeywordTok{in}\NormalTok{ theta\_1\_values]} 
+\NormalTok{)} + +\CommentTok{\# Optimal point} +\NormalTok{data\_x, data\_y }\OperatorTok{=}\NormalTok{ data\_linear.iloc[:, }\DecValTok{0}\NormalTok{], data\_linear.iloc[:, }\DecValTok{1}\NormalTok{]} +\NormalTok{theta\_1\_hat }\OperatorTok{=}\NormalTok{ np.corrcoef(data\_x, data\_y)[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{] }\OperatorTok{*}\NormalTok{ np.std(data\_y) }\OperatorTok{/}\NormalTok{ np.std(data\_x)} +\NormalTok{theta\_0\_hat }\OperatorTok{=}\NormalTok{ np.mean(data\_y) }\OperatorTok{{-}}\NormalTok{ theta\_1\_hat }\OperatorTok{*}\NormalTok{ np.mean(data\_x)} + +\CommentTok{\# Create the 3D plot} +\NormalTok{fig }\OperatorTok{=}\NormalTok{ plt.figure(figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{7}\NormalTok{, }\DecValTok{5}\NormalTok{))} +\NormalTok{ax }\OperatorTok{=}\NormalTok{ fig.add\_subplot(}\DecValTok{111}\NormalTok{, projection}\OperatorTok{=}\StringTok{"3d"}\NormalTok{)} + +\NormalTok{X, Y }\OperatorTok{=}\NormalTok{ np.meshgrid(theta\_0\_values, theta\_1\_values)} +\NormalTok{surf }\OperatorTok{=}\NormalTok{ ax.plot\_surface(} +\NormalTok{ X, Y, mse\_values, cmap}\OperatorTok{=}\StringTok{"viridis"}\NormalTok{, alpha}\OperatorTok{=}\FloatTok{0.6} +\NormalTok{) }\CommentTok{\# Use alpha to make it slightly transparent} + +\CommentTok{\# Scatter point using matplotlib} +\NormalTok{sc }\OperatorTok{=}\NormalTok{ ax.scatter(} +\NormalTok{ [theta\_0\_hat],} +\NormalTok{ [theta\_1\_hat],} +\NormalTok{ [mse\_linear(theta\_0\_hat, theta\_1\_hat, data\_linear)],} +\NormalTok{ marker}\OperatorTok{=}\StringTok{"o"}\NormalTok{,} +\NormalTok{ color}\OperatorTok{=}\StringTok{"red"}\NormalTok{,} +\NormalTok{ s}\OperatorTok{=}\DecValTok{100}\NormalTok{,} +\NormalTok{ label}\OperatorTok{=}\StringTok{"theta hat"}\NormalTok{,} +\NormalTok{)} + +\CommentTok{\# Create a colorbar} +\NormalTok{cbar }\OperatorTok{=}\NormalTok{ fig.colorbar(surf, ax}\OperatorTok{=}\NormalTok{ax, shrink}\OperatorTok{=}\FloatTok{0.5}\NormalTok{, aspect}\OperatorTok{=}\DecValTok{10}\NormalTok{)} +\NormalTok{cbar.set\_label(}\StringTok{"Cost Value"}\NormalTok{)} + +\NormalTok{ax.set\_title(}\StringTok{"MSE for different $}\CharTok{\textbackslash{}\textbackslash{}}\StringTok{theta\_0, }\CharTok{\textbackslash{}\textbackslash{}}\StringTok{theta\_1$"}\NormalTok{)} +\NormalTok{ax.set\_xlabel(}\StringTok{"$}\CharTok{\textbackslash{}\textbackslash{}}\StringTok{theta\_0$"}\NormalTok{)} +\NormalTok{ax.set\_ylabel(}\StringTok{"$}\CharTok{\textbackslash{}\textbackslash{}}\StringTok{theta\_1$"}\NormalTok{)} +\NormalTok{ax.set\_zlabel(}\StringTok{"MSE"}\NormalTok{)} + +\CommentTok{\# plt.show()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Text(0.5, 0, 'MSE') +\end{verbatim} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-10-output-2.pdf} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Predictions} +\NormalTok{yobs }\OperatorTok{=}\NormalTok{ data\_linear[}\StringTok{"Age"}\NormalTok{] }\CommentTok{\# The true observations y} +\NormalTok{xs }\OperatorTok{=}\NormalTok{ data\_linear[}\StringTok{"Length"}\NormalTok{] }\CommentTok{\# Needed for linear predictions} +\NormalTok{n }\OperatorTok{=} \BuiltInTok{len}\NormalTok{(yobs) }\CommentTok{\# Predictions} + +\NormalTok{yhats\_constant }\OperatorTok{=}\NormalTok{ [thetahat }\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in} \BuiltInTok{range}\NormalTok{(n)] }\CommentTok{\# Not used, but food for thought} +\NormalTok{yhats\_linear }\OperatorTok{=}\NormalTok{ [theta\_0\_hat 
}\OperatorTok{+}\NormalTok{ theta\_1\_hat }\OperatorTok{*}\NormalTok{ x }\ControlFlowTok{for}\NormalTok{ x }\KeywordTok{in}\NormalTok{ xs]} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Constant Model Rug Plot} +\CommentTok{\# In case we\textquotesingle{}re in a weird style state} +\NormalTok{sns.set\_theme()} +\NormalTok{adjust\_fontsize(size}\OperatorTok{=}\DecValTok{16}\NormalTok{)} +\OperatorTok{\%}\NormalTok{matplotlib inline} + +\NormalTok{fig }\OperatorTok{=}\NormalTok{ plt.figure(figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{8}\NormalTok{, }\FloatTok{1.5}\NormalTok{))} +\NormalTok{sns.rugplot(yobs, height}\OperatorTok{=}\FloatTok{0.25}\NormalTok{, lw}\OperatorTok{=}\DecValTok{2}\NormalTok{) }\OperatorTok{;} +\NormalTok{plt.axvline(thetahat, color}\OperatorTok{=}\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{, lw}\OperatorTok{=}\DecValTok{4}\NormalTok{, label}\OperatorTok{=}\VerbatimStringTok{r"$\textbackslash{}hat\{\textbackslash{}theta\}\_0$"}\NormalTok{)}\OperatorTok{;} +\NormalTok{plt.legend()} +\NormalTok{plt.yticks([])}\OperatorTok{;} +\CommentTok{\# plt.show()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-12-output-1.pdf} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# SLR model scatter plot } +\CommentTok{\# In case we\textquotesingle{}re in a weird style state} +\NormalTok{sns.set\_theme()} +\NormalTok{adjust\_fontsize(size}\OperatorTok{=}\DecValTok{16}\NormalTok{)} +\OperatorTok{\%}\NormalTok{matplotlib inline} + +\NormalTok{sns.scatterplot(x}\OperatorTok{=}\NormalTok{xs, y}\OperatorTok{=}\NormalTok{yobs)} +\NormalTok{plt.plot(xs, yhats\_linear, color}\OperatorTok{=}\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{, lw}\OperatorTok{=}\DecValTok{4}\NormalTok{)}\OperatorTok{;} +\CommentTok{\# plt.savefig(\textquotesingle{}dugong\_line.png\textquotesingle{}, bbox\_inches = \textquotesingle{}tight\textquotesingle{});} +\CommentTok{\# plt.show()} +\end{Highlighting} +\end{Shaded} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf} + +Interpreting the RMSE (Root Mean Squared Error): + +\begin{itemize} +\tightlist +\item + Because the constant error is \textbf{HIGHER} than the linear error, +\item + The constant model is \textbf{WORSE} than the linear model (at least + for this metric). +\end{itemize} + +\section{Constant Model + MAE}\label{constant-model-mae} + +We see now that changing the model used for prediction leads to a wildly +different result for the optimal model parameter. What happens if we +instead change the loss function used in model evaluation? + +This time, we will consider the constant model with L1 (absolute loss) +as the loss function. This means that the average loss will be expressed +as the \textbf{Mean Absolute Error (MAE)}. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Choose a model: constant model +\item + Choose a loss function: L1 loss +\item + Fit the model +\item + Evaluate model performance +\end{enumerate} + +\subsection{\texorpdfstring{Deriving the optimal +\(\theta_0\)}{Deriving the optimal \textbackslash theta\_0}}\label{deriving-the-optimal-theta_0-1} + +Recall that the MAE is average \textbf{absolute} loss (L1 loss) over the +data \(D = \{y_1, y_2, ..., y_n\}\). 
+ +\[\hat{R}(\theta_0) = \frac{1}{n}\sum^{n}_{i=1} |y_i - \hat{y_i}| \] + +Given the constant model \(\hat{y} = \theta_0\), we can write the MAE +as: + +\[\hat{R}(\theta_0) = \frac{1}{n}\sum^{n}_{i=1} |y_i - \theta_0| \] + +To fit the model, we find the optimal parameter value \(\hat{\theta_0}\) +that minimizes the MAE by differentiating using a calculus approach: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Differentiate with respect to \(\hat{\theta_0}\): +\end{enumerate} + +\[ +\begin{align} +\hat{R}(\theta_0) &= \frac{1}{n}\sum^{n}_{i=1} |y_i - \theta_0| \\ +\frac{d}{d\theta_0} R(\theta_0) &= \frac{d}{d\theta_0} \left(\frac{1}{n} \sum^{n}_{i=1} |y_i - \theta_0| \right) \\ +&= \frac{1}{n} \sum^{n}_{i=1} \frac{d}{d\theta_0} |y_i - \theta_0| +\end{align} +\] + +\begin{itemize} +\tightlist +\item + Here, we seem to have run into a problem: the derivative of an + absolute value is undefined when the argument is 0 (i.e.~when + \(y_i = \theta_0\)). For now, we'll ignore this issue. It turns out + that disregarding this case doesn't influence our final result. +\item + To perform the derivative, consider two cases. When \(\theta_0\) is + \emph{less than or equal to} \(y_i\), the term \(y_i - \theta_0\) will + be positive and the absolute value has no impact. When \(\theta_0\) is + \emph{greater than} \(y_i\), the term \(y_i - \theta_0\) will be + negative. Applying the absolute value will convert this to a positive + value, which we can express by saying + \(-(y_i - \theta_0) = \theta_0 - y_i\). +\end{itemize} + +\[|y_i - \theta_0| = \begin{cases} y_i - \theta_0 \quad \text{ if } \theta_0 \le y_i \\ \theta_0 - y_i \quad \text{if }\theta_0 > y_i \end{cases}\] + +\begin{itemize} +\tightlist +\item + Taking derivatives: +\end{itemize} + +\[\frac{d}{d\theta_0} |y_i - \theta_0| = \begin{cases} \frac{d}{d\theta_0} (y_i - \theta_0) = -1 \quad \text{if }\theta_0 < y_i \\ \frac{d}{d\theta_0} (\theta_0 - y_i) = 1 \quad \text{if }\theta_0 > y_i \end{cases}\] + +\begin{itemize} +\tightlist +\item + This means that we obtain a different value for the derivative for + data points where \(\theta_0 < y_i\) and where \(\theta_0 > y_i\). 
We + can summarize this by saying: +\end{itemize} + +\[ +\frac{d}{d\theta_0} R(\theta_0) = \frac{1}{n} \sum^{n}_{i=1} \frac{d}{d\theta_0} |y_i - \theta_0| \\ += \frac{1}{n} \left[\sum_{\theta_0 < y_i} (-1) + \sum_{\theta_0 > y_i} (+1) \right] +\] + +\begin{itemize} +\tightlist +\item + In other words, we take the sum of values for \(i = 1, 2, ..., n\): + + \begin{itemize} + \tightlist + \item + \(-1\) if our observation \(y_i\) is \emph{greater than} our + prediction \(\hat{\theta_0}\) + \item + \(+1\) if our observation \(y_i\) is \emph{smaller than} our + prediction \(\hat{\theta_0}\) + \end{itemize} +\end{itemize} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\setcounter{enumi}{1} +\item + Set the derivative equation equal to 0: + \[ 0 = \frac{1}{n}\sum_{\hat{\theta_0} < y_i} (-1) + \frac{1}{n}\sum_{\hat{\theta_0} > y_i} (+1) \] +\item + Solve for \(\hat{\theta_0}\): + \[ 0 = -\frac{1}{n}\sum_{\hat{\theta_0} < y_i} (1) + \frac{1}{n}\sum_{\hat{\theta_0} > y_i} (1)\] +\end{enumerate} + +\[\sum_{\hat{\theta_0} < y_i} (1) = \sum_{\hat{\theta_0} > y_i} (1) \] + +Thus, the constant model parameter \(\theta = \hat{\theta_0}\) that +minimizes MAE must satisfy: + +\[ \sum_{\hat{\theta_0} < y_i} (1) = \sum_{\hat{\theta_0} > y_i} (1) \] + +In other words, the number of observations greater than \(\theta_0\) +must be equal to the number of observations less than \(\theta_0\); +there must be an equal number of points on the left and right sides of +the equation. This is the definition of median, so our optimal value is +\[ \hat{\theta_0} = median(y) \] + +\section{Summary: Loss Optimization, Calculus, and Critical +Points}\label{summary-loss-optimization-calculus-and-critical-points} + +First, define the \textbf{objective function} as average loss. + +\begin{itemize} +\tightlist +\item + Plug in L1 or L2 loss. +\item + Plug in the model so that the resulting expression is a function of + \(\theta\). +\end{itemize} + +Then, find the minimum of the objective function: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Differentiate with respect to \(\theta\). +\item + Set equal to 0. +\item + Solve for \(\hat{\theta}\). +\item + (If we have multiple parameters) repeat steps 1-3 with partial + derivatives. +\end{enumerate} + +Recall critical points from calculus: \(R(\hat{\theta})\) could be a +minimum, maximum, or saddle point! + +\begin{itemize} +\tightlist +\item + We should technically also perform the second derivative test, i.e., + show \(R''(\hat{\theta}) > 0\). +\item + MSE has a property---\textbf{convexity}---that guarantees that + \(R(\hat{\theta})\) is a global minimum. +\item + The proof of convexity for MAE is beyond this course. +\end{itemize} + +\section{Comparing Loss Functions}\label{comparing-loss-functions} + +We've now tried our hand at fitting a model under both MSE and MAE cost +functions. How do the two results compare? + +Let's consider a dataset where each entry represents the number of +drinks sold at a bubble tea store each day. We'll fit a constant model +to predict the number of drinks that will be sold tomorrow. 
+ 

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{drinks }\OperatorTok{=}\NormalTok{ np.array([}\DecValTok{20}\NormalTok{, }\DecValTok{21}\NormalTok{, }\DecValTok{22}\NormalTok{, }\DecValTok{29}\NormalTok{, }\DecValTok{33}\NormalTok{])}
\NormalTok{drinks}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
array([20, 21, 22, 29, 33])
\end{verbatim}

From our derivations above, we know that the optimal model parameter
under MSE cost is the mean of the dataset. Under MAE cost, the optimal
parameter is the median of the dataset.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{np.mean(drinks), np.median(drinks)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
(np.float64(25.0), np.float64(22.0))
\end{verbatim}

If we plot each empirical risk function across several possible values
of \(\theta\), we find that each \(\hat{\theta}\) does indeed correspond
to the lowest value of error.

Notice that the MSE above is a \textbf{smooth} function -- it is
differentiable at all points, making it easy to minimize using numerical
methods. The MAE, in contrast, is not differentiable at each of its
``kinks.'' We'll explore how the smoothness of the cost function can
impact our ability to apply numerical optimization in a few weeks.

How do outliers affect each cost function? Imagine we add an extreme
outlying value, 1033, to the dataset. The mean of the data increases
substantially, while the median is nearly unaffected.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{drinks\_with\_outlier }\OperatorTok{=}\NormalTok{ np.append(drinks, }\DecValTok{1033}\NormalTok{)}
\NormalTok{display(drinks\_with\_outlier)}
\NormalTok{np.mean(drinks\_with\_outlier), np.median(drinks\_with\_outlier)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
array([  20,   21,   22,   29,   33, 1033])
\end{verbatim}

\begin{verbatim}
(np.float64(193.0), np.float64(25.5))
\end{verbatim}

This means that under the MSE, the optimal model parameter
\(\hat{\theta}\) is strongly affected by the presence of outliers. Under
the MAE, the optimal parameter is not as influenced by outlying data. We
can generalize this by saying that the MSE is \textbf{sensitive} to
outliers, while the MAE is \textbf{robust} to outliers.

Let's try another experiment. This time, we'll add an additional,
non-outlying datapoint to the data.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{drinks\_with\_additional\_observation }\OperatorTok{=}\NormalTok{ np.append(drinks, }\DecValTok{35}\NormalTok{)}
\NormalTok{drinks\_with\_additional\_observation}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
array([20, 21, 22, 29, 33, 35])
\end{verbatim}

When we again visualize the cost functions, we find that the MAE is now
flat between 22 and 29. This means that there are \emph{infinitely} many
optimal values for the model parameter: any value
\(\hat{\theta} \in [22, 29]\) will minimize the MAE. In contrast, the
MSE still has a single best value for \(\hat{\theta}\). In other words,
the MSE has a \textbf{unique} solution for \(\hat{\theta}\); the MAE is
not guaranteed to have a single unique solution. 
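
As a sanity check on these claims, we can evaluate both cost functions on a
grid of candidate \(\theta\) values. The sketch below re-creates the
six-observation dataset from above; the grid resolution and use of
\texttt{np.isclose} are arbitrary implementation choices.

\begin{verbatim}
# Sketch: brute-force check of which thetas minimize MSE vs. MAE
import numpy as np

drinks_plus = np.array([20, 21, 22, 29, 33, 35])
thetas = np.linspace(15, 40, 2501)

mse = np.array([np.mean((drinks_plus - t) ** 2) for t in thetas])
mae = np.array([np.mean(np.abs(drinks_plus - t)) for t in thetas])

print(thetas[np.argmin(mse)])   # a single minimizer, close to the mean (about 26.7)
flat = thetas[np.isclose(mae, mae.min())]
print(flat.min(), flat.max())   # an entire flat interval, from 22.0 to 29.0
\end{verbatim}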
+ +To summarize our example, + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.1333}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3879}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.4788}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +MSE (Mean Squared Loss) +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +MAE (Mean Absolute Loss) +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Loss Function & +\(\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} (y_i - \theta_0)^2\) & +\(\hat{R}(\theta) = \frac{1}{n}\sum^{n}_{i=1} |y_i - \theta_0|\) \\ +Optimal \(\hat{\theta_0}\) & \(\hat{\theta_0} = mean(y) = \bar{y}\) & +\(\hat{\theta_0} = median(y)\) \\ +Loss Surface & & \\ +Shape & \textbf{Smooth} - easy to minimize using numerical methods (in a +few weeks) & \textbf{Piecewise} - at each of the ``kinks,'' it's not +differentiable. Harder to minimize. \\ +Outliers & \textbf{Sensitive} to outliers (since they change mean +substantially). Sensitivity also depends on the dataset size. & +\textbf{More robust} to outliers. \\ +\(\hat{\theta_0}\) Uniqueness & \textbf{Unique} \(\hat{\theta_0}\) & +\textbf{Infinitely many} \(\hat{\theta_0}\)s \\ +\end{longtable} + +\section{Transformations to fit Linear +Models}\label{transformations-to-fit-linear-models} + +At this point, we have an effective method of fitting models to predict +linear relationships. Given a feature variable and target, we can apply +our four-step process to find the optimal model parameters. + +A key word above is \emph{linear}. When we computed parameter estimates +earlier, we assumed that \(x_i\) and \(y_i\) shared a roughly linear +relationship. Data in the real world isn't always so straightforward, +but we can transform the data to try and obtain linearity. + +The \textbf{Tukey-Mosteller Bulge Diagram} is a useful tool for +summarizing what transformations can linearize the relationship between +two variables. To determine what transformations might be appropriate, +trace the shape of the ``bulge'' made by your data. Find the quadrant of +the diagram that matches this bulge. The transformations shown on the +vertical and horizontal axes of this quadrant can help improve the fit +between the variables. + +Note that: + +\begin{itemize} +\tightlist +\item + There are multiple solutions. Some will fit better than others. +\item + sqrt and log make a value ``smaller.'' +\item + Raising to a power makes a value ``bigger.'' +\item + Each of these transformations equates to increasing or decreasing the + scale of an axis. +\end{itemize} + +Other goals in addition to linearity are possible, for example, making +data appear more symmetric. Linearity allows us to fit lines to the +transformed data. + +Let's revisit our dugongs example. 
The lengths and ages are plotted +below: + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# \textasciigrave{}corrcoef\textasciigrave{} computes the correlation coefficient between two variables} +\CommentTok{\# \textasciigrave{}std\textasciigrave{} finds the standard deviation} +\NormalTok{x }\OperatorTok{=}\NormalTok{ dugongs[}\StringTok{"Length"}\NormalTok{]} +\NormalTok{y }\OperatorTok{=}\NormalTok{ dugongs[}\StringTok{"Age"}\NormalTok{]} +\NormalTok{r }\OperatorTok{=}\NormalTok{ np.corrcoef(x, y)[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{]} +\NormalTok{theta\_1 }\OperatorTok{=}\NormalTok{ r }\OperatorTok{*}\NormalTok{ np.std(y) }\OperatorTok{/}\NormalTok{ np.std(x)} +\NormalTok{theta\_0 }\OperatorTok{=}\NormalTok{ np.mean(y) }\OperatorTok{{-}}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ np.mean(x)} + +\NormalTok{fig, ax }\OperatorTok{=}\NormalTok{ plt.subplots(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, dpi}\OperatorTok{=}\DecValTok{200}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{8}\NormalTok{, }\DecValTok{3}\NormalTok{))} +\NormalTok{ax[}\DecValTok{0}\NormalTok{].scatter(x, y)} +\NormalTok{ax[}\DecValTok{0}\NormalTok{].set\_xlabel(}\StringTok{"Length"}\NormalTok{)} +\NormalTok{ax[}\DecValTok{0}\NormalTok{].set\_ylabel(}\StringTok{"Age"}\NormalTok{)} + +\NormalTok{ax[}\DecValTok{1}\NormalTok{].scatter(x, y)} +\NormalTok{ax[}\DecValTok{1}\NormalTok{].plot(x, theta\_0 }\OperatorTok{+}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ x, }\StringTok{"tab:red"}\NormalTok{)} +\NormalTok{ax[}\DecValTok{1}\NormalTok{].set\_xlabel(}\StringTok{"Length"}\NormalTok{)} +\NormalTok{ax[}\DecValTok{1}\NormalTok{].set\_ylabel(}\StringTok{"Age"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Text(0, 0.5, 'Age') +\end{verbatim} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-18-output-2.pdf} + +Looking at the plot on the left, we see that there is a slight curvature +to the data points. Plotting the SLR curve on the right results in a +poor fit. + +For SLR to perform well, we'd like there to be a rough linear trend +relating \texttt{"Age"} and \texttt{"Length"}. What is making the raw +data deviate from a linear relationship? Notice that the data points +with \texttt{"Length"} greater than 2.6 have disproportionately high +values of \texttt{"Age"} relative to the rest of the data. If we could +manipulate these data points to have lower \texttt{"Age"} values, we'd +``shift'' these points downwards and reduce the curvature in the data. +Applying a logarithmic transformation to \(y_i\) (that is, taking +\(\log(\) \texttt{"Age"} \()\) ) would achieve just that. + +An important word on \(\log\): in Data 100 (and most upper-division STEM +courses), \(\log\) denotes the natural logarithm with base \(e\). The +base-10 logarithm, where relevant, is indicated by \(\log_{10}\). 
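
The same convention carries over to the NumPy calls used in this section:
\texttt{np.log} computes the natural logarithm, while \texttt{np.log10}
computes the base-10 logarithm.

\begin{verbatim}
import numpy as np

np.log(np.e)      # 1.0 -- np.log is the natural (base-e) logarithm
np.log10(100.0)   # 2.0 -- np.log10 is the base-10 logarithm
\end{verbatim}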
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{z }\OperatorTok{=}\NormalTok{ np.log(y)} + +\NormalTok{r }\OperatorTok{=}\NormalTok{ np.corrcoef(x, z)[}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{]} +\NormalTok{theta\_1 }\OperatorTok{=}\NormalTok{ r }\OperatorTok{*}\NormalTok{ np.std(z) }\OperatorTok{/}\NormalTok{ np.std(x)} +\NormalTok{theta\_0 }\OperatorTok{=}\NormalTok{ np.mean(z) }\OperatorTok{{-}}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ np.mean(x)} + +\NormalTok{fig, ax }\OperatorTok{=}\NormalTok{ plt.subplots(}\DecValTok{1}\NormalTok{, }\DecValTok{2}\NormalTok{, dpi}\OperatorTok{=}\DecValTok{200}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{8}\NormalTok{, }\DecValTok{3}\NormalTok{))} +\NormalTok{ax[}\DecValTok{0}\NormalTok{].scatter(x, z)} +\NormalTok{ax[}\DecValTok{0}\NormalTok{].set\_xlabel(}\StringTok{"Length"}\NormalTok{)} +\NormalTok{ax[}\DecValTok{0}\NormalTok{].set\_ylabel(}\VerbatimStringTok{r"$\textbackslash{}log\{(Age)\}$"}\NormalTok{)} + +\NormalTok{ax[}\DecValTok{1}\NormalTok{].scatter(x, z)} +\NormalTok{ax[}\DecValTok{1}\NormalTok{].plot(x, theta\_0 }\OperatorTok{+}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ x, }\StringTok{"tab:red"}\NormalTok{)} +\NormalTok{ax[}\DecValTok{1}\NormalTok{].set\_xlabel(}\StringTok{"Length"}\NormalTok{)} +\NormalTok{ax[}\DecValTok{1}\NormalTok{].set\_ylabel(}\VerbatimStringTok{r"$\textbackslash{}log\{(Age)\}$"}\NormalTok{)} + +\NormalTok{plt.subplots\_adjust(wspace}\OperatorTok{=}\FloatTok{0.3}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-19-output-1.pdf} + +Our SLR fit looks a lot better! We now have a new target variable: the +SLR model is now trying to predict the \emph{log} of \texttt{"Age"}, +rather than the untransformed \texttt{"Age"}. In other words, we are +applying the transformation \(z_i = \log{(y_i)}\). Notice that the +resulting model is still \textbf{linear in the parameters} +\(\theta = [\theta_0, \theta_1]\). The SLR model becomes: + +\[\hat{\log{y}} = \theta_0 + \theta_1 x\] +\[\hat{z} = \theta_0 + \theta_1 x\] + +It turns out that this linearized relationship can help us understand +the underlying relationship between \(x\) and \(y\). If we rearrange the +relationship above, we find: + +\[\log{(y)} = \theta_0 + \theta_1 x\] \[y = e^{\theta_0 + \theta_1 x}\] +\[y = (e^{\theta_0})e^{\theta_1 x}\] \[y_i = C e^{k x}\] + +For some constants \(C\) and \(k\). + +\(y\) is an \emph{exponential} function of \(x\). Applying an +exponential fit to the untransformed variables corroborates this +finding. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{plt.figure(dpi}\OperatorTok{=}\DecValTok{120}\NormalTok{, figsize}\OperatorTok{=}\NormalTok{(}\DecValTok{4}\NormalTok{, }\DecValTok{3}\NormalTok{))} + +\NormalTok{plt.scatter(x, y)} +\NormalTok{plt.plot(x, np.exp(theta\_0) }\OperatorTok{*}\NormalTok{ np.exp(theta\_1 }\OperatorTok{*}\NormalTok{ x), }\StringTok{"tab:red"}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"Length"}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{"Age"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Text(0, 0.5, 'Age') +\end{verbatim} + +\includegraphics{constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-20-output-2.pdf} + +You may wonder: why did we choose to apply a log transformation +specifically? Why not some other function to linearize the data? 
+ 

Practically, many other mathematical operations that modify the relative
scales of \texttt{"Age"} and \texttt{"Length"} could have worked here.

\section{Multiple Linear Regression}\label{multiple-linear-regression}

Multiple linear regression is an extension of simple linear regression
that adds additional features to the model. The multiple linear
regression model takes the form:

\[\hat{y} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:...\:+\:\theta_p x_{p}\]

Our predicted value of \(y\), \(\hat{y}\), is a linear combination of
the single \textbf{observations} (features), \(x_i\), and the
parameters, \(\theta_i\).

We'll dive deeper into Multiple Linear Regression in the next lecture.

\section{Bonus: Calculating Constant Model MSE Using an Algebraic
Trick}\label{bonus-calculating-constant-model-mse-using-an-algebraic-trick}

Earlier, we calculated the constant model MSE using calculus. It turns
out that there is a much more elegant way of performing this same
minimization algebraically, without using calculus at all.

In this calculation, we use the fact that the \textbf{sum of deviations
from the mean is \(0\)} or that \(\sum_{i=1}^{n} (y_i - \bar{y}) = 0\).

Let's quickly walk through the proof for this: \[
\begin{align}
\sum_{i=1}^{n} (y_i - \bar{y}) &= \sum_{i=1}^{n} y_i - \sum_{i=1}^{n} \bar{y} \\
 &= \sum_{i=1}^{n} y_i - n\bar{y} \\
 &= \sum_{i=1}^{n} y_i - n\frac{1}{n}\sum_{i=1}^{n}y_i \\
 &= \sum_{i=1}^{n} y_i - \sum_{i=1}^{n}y_i \\
 & = 0
\end{align}
\]

In our calculations, we'll also be using the definition of the sample
variance. As a refresher:

\[\sigma_y^2 = \frac{1}{n}\sum_{i=1}^{n} (y_i - \bar{y})^2\]

Getting into our calculation for MSE minimization:

\[
\begin{align}
R(\theta) &= {\frac{1}{n}}\sum^{n}_{i=1} (y_i - \theta)^2
\\ &= \frac{1}{n}\sum^{n}_{i=1} [(y_i - \bar{y}) + (\bar{y} - \theta)]^2\quad \quad \text{using the trick that a-b can be written as (a-c) + (c-b) } \\
&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \space \space \text{where a, b, and c are any numbers}
\\ &= \frac{1}{n}\sum^{n}_{i=1} [(y_i - \bar{y})^2 + 2(y_i - \bar{y})(\bar{y} - \theta) + (\bar{y} - \theta)^2]
\\ &= \frac{1}{n}[\sum^{n}_{i=1}(y_i - \bar{y})^2 + 2(\bar{y} - \theta)\sum^{n}_{i=1}(y_i - \bar{y}) + n(\bar{y} - \theta)^2] \quad \quad \text{distribute sum to individual terms}
\\ &= \frac{1}{n}\sum^{n}_{i=1}(y_i - \bar{y})^2 + \frac{2}{n}(\bar{y} - \theta)\cdot0 + (\bar{y} - \theta)^2 \quad \quad \text{sum of deviations from mean is 0}
\\ &= \sigma_y^2 + (\bar{y} - \theta)^2
\end{align}
\]

Since variance can't be negative, we know that our first term,
\(\sigma_y^2\), is greater than or equal to \(0\). Also note that
\textbf{the first term doesn't involve \(\theta\) at all}, meaning
changing our model won't change this value. For the purposes of
determining \(\hat{\theta}\), we can then essentially ignore this term.

Looking at the second term, \((\bar{y} - \theta)^2\), since it is
squared, we know it must be greater than or equal to \(0\). As this term
does involve \(\theta\), picking the value of \(\theta\) that minimizes
this term will allow us to minimize our average loss. For the second
term to equal \(0\), \(\theta = \bar{y}\), or in other words,
\(\hat{\theta} = \bar{y} = mean(y)\). 
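
Before moving on, here is a quick numerical sanity check of the identity
\(R(\theta) = \sigma_y^2 + (\bar{y} - \theta)^2\) on a small made-up
sample. Note that \texttt{np.var} uses the \(\frac{1}{n}\) convention by
default, which matches the definition of \(\sigma_y^2\) above.

\begin{verbatim}
# Sketch: verifying R(theta) = var(y) + (ybar - theta)^2 on toy data
import numpy as np

y = np.array([2.0, 4.0, 7.0, 11.0])   # arbitrary small sample
theta = 5.0                           # any candidate constant prediction

lhs = np.mean((y - theta) ** 2)                  # R(theta), the constant model's MSE
rhs = np.var(y) + (np.mean(y) - theta) ** 2      # variance + squared bias
print(lhs, rhs, np.isclose(lhs, rhs))            # 12.5 12.5 True
\end{verbatim}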
+ +\paragraph{Note}\label{note} + +In the derivation above, we decompose the expected loss, \(R(\theta)\), +into two key components: the variance of the data, \(\sigma_y^2\), and +the square of the bias, \((\bar{y} - \theta)^2\). This decomposition is +insightful for understanding the behavior of estimators in statistical +models. + +\begin{itemize} +\item + \textbf{Variance, \(\sigma_y^2\)}: This term represents the spread of + the data points around their mean, \(\bar{y}\), and is a measure of + the data's inherent variability. Importantly, it does not depend on + the choice of \(\theta\), meaning it's a fixed property of the data. + Variance serves as an indicator of the data's dispersion and is + crucial in understanding the dataset's structure, but it remains + constant regardless of how we adjust our model parameter \(\theta\). +\item + \textbf{Bias Squared, \((\bar{y} - \theta)^2\)}: This term captures + the bias of the estimator, defined as the square of the difference + between the mean of the data points, \(\bar{y}\), and the parameter + \(\theta\). The bias quantifies the systematic error introduced when + estimating \(\theta\). Minimizing this term is essential for improving + the accuracy of the estimator. When \(\theta = \bar{y}\), the bias is + \(0\), indicating that the estimator is unbiased for the parameter it + estimates. This highlights a critical principle in statistical + estimation: choosing \(\theta\) to be the sample mean, \(\bar{y}\), + minimizes the average loss, rendering the estimator both efficient and + unbiased for the population mean. +\end{itemize} + +\bookmarksetup{startatroot} + +\chapter{Ordinary Least Squares}\label{ordinary-least-squares} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Define linearity with respect to a vector of parameters \(\theta\). +\item + Understand the use of matrix notation to express multiple linear + regression. +\item + Interpret ordinary least squares as the minimization of the norm of + the residual vector. +\item + Compute performance metrics for multiple linear regression. +\end{itemize} + +\end{tcolorbox} + +We've now spent a number of lectures exploring how to build effective +models -- we introduced the SLR and constant models, selected cost +functions to suit our modeling task, and applied transformations to +improve the linear fit. + +Throughout all of this, we considered models of one feature +(\(\hat{y}_i = \theta_0 + \theta_1 x_i\)) or zero features +(\(\hat{y}_i = \theta_0\)). As data scientists, we usually have access +to datasets containing \emph{many} features. To make the best models we +can, it will be beneficial to consider all of the variables available to +us as inputs to a model, rather than just one. In today's lecture, we'll +introduce \textbf{multiple linear regression} as a framework to +incorporate multiple features into a model. We will also learn how to +accelerate the modeling process -- specifically, we'll see how linear +algebra offers us a powerful set of tools for understanding model +performance. 
+ +\section{OLS Problem Formulation}\label{ols-problem-formulation} + +\subsection{Multiple Linear +Regression}\label{multiple-linear-regression-1} + +Multiple linear regression is an extension of simple linear regression +that adds additional features to the model. The multiple linear +regression model takes the form: + +\[\hat{y} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:...\:+\:\theta_p x_{p}\] + +Our predicted value of \(y\), \(\hat{y}\), is a linear combination of +the single \textbf{observations} (features), \(x_i\), and the +parameters, \(\theta_i\). + +We can explore this idea further by looking at a dataset containing +aggregate per-player data from the 2018-19 NBA season, downloaded from +\href{https://www.kaggle.com/schmadam97/nba-regular-season-stats-20182019}{Kaggle}. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\NormalTok{nba }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{\textquotesingle{}data/nba18{-}19.csv\textquotesingle{}}\NormalTok{, index\_col}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\NormalTok{nba.index.name }\OperatorTok{=} \VariableTok{None} \CommentTok{\# Drops name of index (players are ordered by rank)} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{nba.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllllllllllllllll@{}} +\toprule\noalign{} +& Player & Pos & Age & Tm & G & GS & MP & FG & FGA & FG\% & ... & FT\% & +ORB & DRB & TRB & AST & STL & BLK & TOV & PF & PTS \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +1 & Álex Abrines\textbackslash abrinal01 & SG & 25 & OKC & 31 & 2 & 19.0 +& 1.8 & 5.1 & 0.357 & ... & 0.923 & 0.2 & 1.4 & 1.5 & 0.6 & 0.5 & 0.2 & +0.5 & 1.7 & 5.3 \\ +2 & Quincy Acy\textbackslash acyqu01 & PF & 28 & PHO & 10 & 0 & 12.3 & +0.4 & 1.8 & 0.222 & ... & 0.700 & 0.3 & 2.2 & 2.5 & 0.8 & 0.1 & 0.4 & +0.4 & 2.4 & 1.7 \\ +3 & Jaylen Adams\textbackslash adamsja01 & PG & 22 & ATL & 34 & 1 & 12.6 +& 1.1 & 3.2 & 0.345 & ... & 0.778 & 0.3 & 1.4 & 1.8 & 1.9 & 0.4 & 0.1 & +0.8 & 1.3 & 3.2 \\ +4 & Steven Adams\textbackslash adamsst01 & C & 25 & OKC & 80 & 80 & 33.4 +& 6.0 & 10.1 & 0.595 & ... & 0.500 & 4.9 & 4.6 & 9.5 & 1.6 & 1.5 & 1.0 & +1.7 & 2.6 & 13.9 \\ +5 & Bam Adebayo\textbackslash adebaba01 & C & 21 & MIA & 82 & 28 & 23.3 +& 3.4 & 5.9 & 0.576 & ... & 0.735 & 2.0 & 5.3 & 7.3 & 2.2 & 0.9 & 0.8 & +1.5 & 2.5 & 8.9 \\ +\end{longtable} + +Let's say we are interested in predicting the number of points +(\texttt{PTS}) an athlete will score in a basketball game this season. + +Suppose we want to fit a linear model by using some characteristics, or +\textbf{features} of a player. Specifically, we'll focus on field goals, +assists, and 3-point attempts. 
+ 

\begin{itemize}
\tightlist
\item
  \texttt{FG}, the average number of (2-point) field goals per game
\item
  \texttt{AST}, the average number of assists per game
\item
  \texttt{3PA}, the average number of 3-point field goals attempted per
  game
\end{itemize}

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{nba[[}\StringTok{\textquotesingle{}FG\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}AST\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}3PA\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}PTS\textquotesingle{}}\NormalTok{]].head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}lllll@{}}
\toprule\noalign{}
& FG & AST & 3PA & PTS \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
1 & 1.8 & 0.6 & 4.1 & 5.3 \\
2 & 0.4 & 0.8 & 1.5 & 1.7 \\
3 & 1.1 & 1.9 & 2.2 & 3.2 \\
4 & 6.0 & 1.6 & 0.0 & 13.9 \\
5 & 3.4 & 2.2 & 0.2 & 8.9 \\
\end{longtable}

Because we are now dealing with many parameter values, we've collected
them all into a \textbf{parameter vector} with dimensions
\((p+1) \times 1\) to keep things tidy. Remember that \(p\) represents
the number of features we have (in this case, 3).

\[\theta = \begin{bmatrix}
           \theta_{0} \\
           \theta_{1} \\
           \vdots \\
           \theta_{p}
        \end{bmatrix}\]

We are working with two vectors here: a row vector representing the
observed data, and a column vector containing the model parameters. The
multiple linear regression model is \textbf{equivalent to the dot
(scalar) product of the observation vector and parameter vector}.

\[[1,\:x_{1},\:x_{2},\:x_{3},\:...,\:x_{p}] \theta = [1,\:x_{1},\:x_{2},\:x_{3},\:...,\:x_{p}] \begin{bmatrix}
           \theta_{0} \\
           \theta_{1} \\
           \vdots \\
           \theta_{p}
         \end{bmatrix} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:...\:+\:\theta_p x_{p}\]

Notice that we have inserted 1 as the first value in the observation
vector. When the dot product is computed, this 1 will be multiplied with
\(\theta_0\) to give the intercept of the regression model. We call this
1 entry the \textbf{intercept} or \textbf{bias} term.

Given that we have three features here, we can express this model as:
\[\hat{y} = \theta_0\:+\:\theta_1x_{1}\:+\:\theta_2 x_{2}\:+\:\theta_3 x_{3}\]

Our features are represented by \(x_1\) (\texttt{FG}), \(x_2\)
(\texttt{AST}), and \(x_3\) (\texttt{3PA}), each with a corresponding
parameter: \(\theta_1\), \(\theta_2\), and \(\theta_3\).

In statistics, this model + loss is called \textbf{Ordinary Least
Squares (OLS)}. The solution to OLS is the parameter vector
\(\hat{\theta}\) that minimizes this loss, also called the \textbf{least
squares estimate}. 
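
As a small illustration of this dot product view, the sketch below computes
one prediction for the first player in the table above. The \(\theta\)
values here are made up for illustration; they are not the fitted least
squares estimates.

\begin{verbatim}
# Sketch: a single multiple regression prediction as a dot product
import numpy as np

theta = np.array([0.5, 2.0, 1.5, 0.9])   # [theta_0, theta_1, theta_2, theta_3], hypothetical
x_row = np.array([1.0, 1.8, 0.6, 4.1])   # [1, FG, AST, 3PA] for the first player above

y_hat = x_row @ theta   # theta_0 + theta_1*FG + theta_2*AST + theta_3*3PA
print(y_hat)            # approximately 8.69
\end{verbatim}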
+ +\subsection{Linear Algebra Approach}\label{linear-algebra-approach} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra Review: Vector Dot Product}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +The \textbf{dot product (or inner product)} is a vector operation that: + +\begin{itemize} +\tightlist +\item + Can only be carried out on two vectors of the \textbf{same length} +\item + Sums up the products of the corresponding entries of the two vectors +\item + Returns a single number +\end{itemize} + +For example, let \[ +\begin{align} +\vec{u} = \begin{bmatrix}1 \\ 2 \\ 3\end{bmatrix}, \vec{v} = \begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix} +\end{align} +\] + +The dot product between \(\vec{u}\) and \(\vec{v}\) is \[ +\begin{align} +\vec{u} \cdot \vec{v} &= \vec{u}^T \vec{v} = \vec{v}^T \vec{u} \\ + &= 1 \cdot 1 + 2 \cdot 1 + 3 \cdot 1 \\ + &= 6 +\end{align} +\] + +While not in scope, note that we can also interpret the dot product +geometrically: + +\begin{itemize} +\tightlist +\item + It is the product of three things: the \textbf{magnitude} of both + vectors, and the \textbf{cosine} of the angles between them: + \[\vec{u} \cdot \vec{v} = ||\vec{u}|| \cdot ||\vec{v}|| \cdot {cos \theta}\] +\end{itemize} + +\end{tcolorbox} + +We now know how to generate a single prediction from multiple observed +features. Data scientists usually work at scale -- that is, they want to +build models that can produce many predictions, all at once. The vector +notation we introduced above gives us a hint on how we can expedite +multiple linear regression. We want to use the tools of linear algebra. + +Let's think about how we can apply what we did above. To accommodate for +the fact that we're considering several feature variables, we'll adjust +our notation slightly. Each observation can now be thought of as a row +vector with an entry for each of \(p\) features. + +To make a prediction from the \emph{first} observation in the data, we +take the dot product of the parameter vector and \emph{first} +observation vector. To make a prediction from the \emph{second} +observation, we would repeat this process to find the dot product of the +parameter vector and the \emph{second} observation vector. If we wanted +to find the model predictions for each observation in the dataset, we'd +repeat this process for all \(n\) observations in the data. + +\[\hat{y}_1 = \theta_0 + \theta_1 x_{11} + \theta_2 x_{12} + ... + \theta_p x_{1p} = [1,\:x_{11},\:x_{12},\:x_{13},\:...,\:x_{1p}] \theta\] +\[\hat{y}_2 = \theta_0 + \theta_1 x_{21} + \theta_2 x_{22} + ... + \theta_p x_{2p} = [1,\:x_{21},\:x_{22},\:x_{23},\:...,\:x_{2p}] \theta\] +\[\vdots\] +\[\hat{y}_n = \theta_0 + \theta_1 x_{n1} + \theta_2 x_{n2} + ... + \theta_p x_{np} = [1,\:x_{n1},\:x_{n2},\:x_{n3},\:...,\:x_{np}] \theta\] + +Our observed data is represented by \(n\) row vectors, each with +dimension \((p+1)\). We can collect them all into a single matrix, which +we call \(\mathbb{X}\). + +The matrix \(\mathbb{X}\) is known as the \textbf{design matrix}. It +contains all observed data for each of our \(p\) features, where each +\textbf{row} corresponds to one \textbf{observation}, and each +\textbf{column} corresponds to a \textbf{feature}. 
It often (but not +always) contains an additional column of all ones to represent the +\textbf{intercept} or \textbf{bias column}. + +To review what is happening in the design matrix: each row represents a +single observation. For example, a student in Data 100. Each column +represents a feature. For example, the ages of students in Data 100. +This convention allows us to easily transfer our previous work in +DataFrames over to this new linear algebra perspective. + +The multiple linear regression model can then be restated in terms of +matrices: \[ +\Large +\mathbb{\hat{Y}} = \mathbb{X} \theta +\] + +Here, \(\mathbb{\hat{Y}}\) is the \textbf{prediction vector} with \(n\) +elements (\(\mathbb{\hat{Y}} \in \mathbb{R}^{n}\)); it contains the +prediction made by the model for each of the \(n\) input observations. +\(\mathbb{X}\) is the \textbf{design matrix} with dimensions +\(\mathbb{X} \in \mathbb{R}^{n \times (p + 1)}\), and \(\theta\) is the +\textbf{parameter vector} with dimensions +\(\theta \in \mathbb{R}^{(p + 1)}\). Note that our \textbf{true output} +\(\mathbb{Y}\) is also a vector with \(n\) elements +(\(\mathbb{Y} \in \mathbb{R}^{n}\)). + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra Review: Linearity}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +An expression is \textbf{linear in \(\theta\)} (a set of parameters) if +it is a linear combination of the elements of the set. Checking if an +expression can separate into a matrix product of two terms -- a +\textbf{vector of \(\theta\)} s, and a matrix/vector \textbf{not +involving \(\theta\)} -- is a good indicator of linearity. + +For example, consider the vector +\(\theta = [\theta_0, \theta_1, \theta_2]\) + +\begin{itemize} +\tightlist +\item + \(\hat{y} = \theta_0 + 2\theta_1 + 3\theta_2\) is linear in theta, and + we can separate it into a matrix product of two terms: +\end{itemize} + +\[\hat{y} = \begin{bmatrix} 1 \space 2 \space 3 \end{bmatrix} \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \end{bmatrix}\] + +\begin{itemize} +\tightlist +\item + \(\hat{y} = \theta_0\theta_1 + 2\theta_1^2 + 3log(\theta_2)\) is + \emph{not} linear in theta, as the \(\theta_1\) term is squared, and + the \(\theta_2\) term is logged. We cannot separate it into a matrix + product of two terms. +\end{itemize} + +\end{tcolorbox} + +\subsection{Mean Squared Error}\label{mean-squared-error} + +We now have a new approach to understanding models in terms of vectors +and matrices. To accompany this new convention, we should update our +understanding of risk functions and model fitting. + +Recall our definition of MSE: +\[R(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2\] + +At its heart, the MSE is a measure of \emph{distance} -- it gives an +indication of how ``far away'' the predictions are from the true values, +on average. 
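
To tie the matrix notation back to this idea of distance, here is a sketch
that builds the design matrix for the NBA features, generates all
predictions at once, and computes the MSE. The parameter values are
placeholders rather than fitted estimates.

\begin{verbatim}
# Sketch: design matrix, vectorized predictions, and MSE for the NBA example
import numpy as np
import pandas as pd

nba = pd.read_csv("data/nba18-19.csv", index_col=0)
X = nba[["FG", "AST", "3PA"]].copy()
X.insert(0, "Bias", 1)            # intercept (bias) column of all ones
X = X.to_numpy()                  # design matrix, shape (n, p + 1)
Y = nba["PTS"].to_numpy()         # true outputs, shape (n,)

theta = np.array([0.5, 2.0, 1.5, 0.9])   # placeholder parameter vector
Y_hat = X @ theta                        # one prediction per observation
mse = np.mean((Y - Y_hat) ** 2)          # average squared distance between Y and Y_hat
\end{verbatim}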
+ +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra: L2 Norm}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +When working with vectors, this idea of ``distance'' or the vector's +\textbf{size/length} is represented by the \textbf{norm}. More +precisely, the distance between two vectors \(\vec{a}\) and \(\vec{b}\) +can be expressed as: +\[||\vec{a} - \vec{b}||_2 = \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \ldots + (a_n - b_n)^2} = \sqrt{\sum_{i=1}^n (a_i - b_i)^2}\] + +The double bars are mathematical notation for the norm. The subscript 2 +indicates that we are computing the L2, or squared norm. + +The two norms we need to know for Data 100 are the L1 and L2 norms +(sound familiar?). In this note, we'll focus on L2 norm. We'll dive into +L1 norm in future lectures. + +For the n-dimensional vector +\[\vec{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}\] +its \textbf{L2 vector norm} is + +\[||\vec{x}||_2 = \sqrt{(x_1)^2 + (x_2)^2 + \ldots + (x_n)^2} = \sqrt{\sum_{i=1}^n (x_i)^2}\] + +The L2 vector norm is a generalization of the Pythagorean theorem in +\(n\) dimensions. Thus, it can be used as a measure of the +\textbf{length} of a vector or even as a measure of the +\textbf{distance} between two vectors. + +\end{tcolorbox} + +We can express the MSE as a squared L2 norm if we rewrite it in terms of +the prediction vector, \(\hat{\mathbb{Y}}\), and true target vector, +\(\mathbb{Y}\): + +\[R(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 = \frac{1}{n} (||\mathbb{Y} - \hat{\mathbb{Y}}||_2)^2\] + +Here, the superscript 2 outside of the parentheses means that we are +\emph{squaring} the norm. If we plug in our linear model +\(\hat{\mathbb{Y}} = \mathbb{X} \theta\), we find the MSE cost function +in vector notation: + +\[R(\theta) = \frac{1}{n} (||\mathbb{Y} - \mathbb{X} \theta||_2)^2\] + +Under the linear algebra perspective, our new task is to fit the optimal +parameter vector \(\theta\) such that the cost function is minimized. +Equivalently, we wish to minimize the norm +\[||\mathbb{Y} - \mathbb{X} \theta||_2 = ||\mathbb{Y} - \hat{\mathbb{Y}}||_2.\] + +We can restate this goal in two ways: + +\begin{itemize} +\tightlist +\item + Minimize the \textbf{distance} between the vector of true values, + \(\mathbb{Y}\), and the vector of predicted values, + \(\mathbb{\hat{Y}}\) +\item + Minimize the \textbf{length} of the \textbf{residual vector}, defined + as: \[e = \mathbb{Y} - \mathbb{\hat{Y}} = \begin{bmatrix} + y_1 - \hat{y}_1 \\ + y_2 - \hat{y}_2 \\ + \vdots \\ + y_n - \hat{y}_n + \end{bmatrix}\] +\end{itemize} + +\subsection{A Note on Terminology for Multiple Linear +Regression}\label{a-note-on-terminology-for-multiple-linear-regression} + +There are several equivalent terms in the context of regression. The +ones we use most often for this course are bolded. 
+ +\begin{itemize} +\tightlist +\item + \(x\) can be called a + + \begin{itemize} + \tightlist + \item + \textbf{Feature(s)} + \item + Covariate(s) + \item + \textbf{Independent variable(s)} + \item + Explanatory variable(s) + \item + Predictor(s) + \item + Input(s) + \item + Regressor(s) + \end{itemize} +\item + \(y\) can be called an + + \begin{itemize} + \tightlist + \item + \textbf{Output} + \item + Outcome + \item + \textbf{Response} + \item + Dependent variable + \end{itemize} +\item + \(\hat{y}\) can be called a + + \begin{itemize} + \tightlist + \item + \textbf{Prediction} + \item + Predicted response + \item + Estimated value + \end{itemize} +\item + \(\theta\) can be called a + + \begin{itemize} + \tightlist + \item + \textbf{Weight(s)} + \item + \textbf{Parameter(s)} + \item + Coefficient(s) + \end{itemize} +\item + \(\hat{\theta}\) can be called a + + \begin{itemize} + \tightlist + \item + \textbf{Estimator(s)} + \item + \textbf{Optimal parameter(s)} + \end{itemize} +\item + A datapoint \((x, y)\) is also called an observation. +\end{itemize} + +\section{Geometric Derivation}\label{geometric-derivation} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra: Span}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Recall that the \textbf{span} or \textbf{column space} of a matrix +\(\mathbb{X}\) (denoted \(span(\mathbb{X})\)) is the set of all possible +linear combinations of the matrix's columns. In other words, the span +represents every point in space that could possibly be reached by adding +and scaling some combination of the matrix columns. Additionally, if +each column of \(\mathbb{X}\) has length \(n\), \(span(\mathbb{X})\) is +a subspace of \(\mathbb{R}^{n}\). + +\end{tcolorbox} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra: Matrix-Vector Multiplication}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +There are 2 ways we can think about matrix-vector multiplication + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + So far, we've thought of our model as horizontally stacked predictions + per datapoint +\item + However, it is helpful sometimes to think of matrix-vector + multiplication as performed by columns. We can also think of + \(\mathbb{Y}\) as a \emph{linear combination of feature vectors}, + scaled by \emph{parameters}. +\end{enumerate} + +\end{tcolorbox} + +Up until now, we've mostly thought of our model as a scalar product +between horizontally stacked observations and the parameter vector. We +can also think of \(\hat{\mathbb{Y}}\) as a \textbf{linear combination +of feature vectors}, scaled by the \textbf{parameters}. We use the +notation \(\mathbb{X}_{:, i}\) to denote the \(i\)th column of the +design matrix. You can think of this as following the same convention as +used when calling \texttt{.iloc} and \texttt{.loc}. ``:'' means that we +are taking all entries in the \(i\)th column. 
+ +\[ +\hat{\mathbb{Y}} = +\theta_0 \begin{bmatrix} + 1 \\ + 1 \\ + \vdots \\ + 1 + \end{bmatrix} + \theta_1 \begin{bmatrix} + x_{11} \\ + x_{21} \\ + \vdots \\ + x_{n1} + \end{bmatrix} + \ldots + \theta_p \begin{bmatrix} + x_{1p} \\ + x_{2p} \\ + \vdots \\ + x_{np} + \end{bmatrix} + = \theta_0 \mathbb{X}_{:,\:1} + \theta_1 \mathbb{X}_{:,\:2} + \ldots + \theta_p \mathbb{X}_{:,\:p+1}\] + +This new approach is useful because it allows us to take advantage of +the properties of linear combinations. + +Because the prediction vector, \(\hat{\mathbb{Y}} = \mathbb{X} \theta\), +is a \textbf{linear combination} of the columns of \(\mathbb{X}\), we +know that the \textbf{predictions are contained in the span of +\(\mathbb{X}\)}. That is, we know that +\(\mathbb{\hat{Y}} \in \text{Span}(\mathbb{X})\). + +The diagram below is a simplified view of \(\text{Span}(\mathbb{X})\), +assuming that each column of \(\mathbb{X}\) has length \(n\). Notice +that the columns of \(\mathbb{X}\) define a subspace of +\(\mathbb{R}^n\), where each point in the subspace can be reached by a +linear combination of \(\mathbb{X}\)'s columns. The prediction vector +\(\mathbb{\hat{Y}}\) lies somewhere in this subspace. + +Examining this diagram, we find a problem. The vector of true values, +\(\mathbb{Y}\), could theoretically lie \emph{anywhere} in +\(\mathbb{R}^n\) space -- its exact location depends on the data we +collect out in the real world. However, our multiple linear regression +model can only make predictions in the subspace of \(\mathbb{R}^n\) +spanned by \(\mathbb{X}\). Remember the model fitting goal we +established in the previous section: we want to generate predictions +such that the distance between the vector of true values, +\(\mathbb{Y}\), and the vector of predicted values, +\(\mathbb{\hat{Y}}\), is minimized. This means that \textbf{we want +\(\mathbb{\hat{Y}}\) to be the vector in \(\text{Span}(\mathbb{X})\) +that is closest to \(\mathbb{Y}\)}. + +Another way of rephrasing this goal is to say that we wish to minimize +the length of the residual vector \(e\), as measured by its \(L_2\) +norm. + +The vector in \(\text{Span}(\mathbb{X})\) that is closest to +\(\mathbb{Y}\) is always the \textbf{orthogonal projection} of +\(\mathbb{Y}\) onto \(\text{Span}(\mathbb{X}).\) Thus, we should choose +the parameter vector \(\theta\) that makes the \textbf{residual vector +orthogonal to any vector in \(\text{Span}(\mathbb{X})\)}. You can +visualize this as the vector created by dropping a perpendicular line +from \(\mathbb{Y}\) onto the span of \(\mathbb{X}\). + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra: Orthogonality}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Recall that two vectors \(\vec{a}\) and \(\vec{b}\) are orthogonal if +their dot product is zero: \(\vec{a}^{T}\vec{b} = 0\). + +A vector \(v\) is \textbf{orthogonal} to the span of a matrix \(M\) if +and only if \(v\) is orthogonal to \textbf{each column} in \(M\). Put +together, a vector \(v\) is orthogonal to \(\text{Span}(M)\) if: + +\[M^Tv = \vec{0}\] + +Note that \(\vec{0}\) represents the \textbf{zero vector}, a +\(d\)-length vector full of 0s. 
+ +\end{tcolorbox} + +Remember our goal is to find \(\hat{\theta}\) such that we minimize the +objective function \(R(\theta)\). Equivalently, this is the +\(\hat{\theta}\) such that the residual vector +\(e = \mathbb{Y} - \mathbb{X} \hat{\theta}\) is orthogonal to +\(\text{Span}(\mathbb{X})\). + +Looking at the definition of orthogonality of +\(\mathbb{Y} - \mathbb{X}\hat{\theta}\) to \(span(\mathbb{X})\), we can +write: \[\mathbb{X}^T (\mathbb{Y} - \mathbb{X}\hat{\theta}) = \vec{0}\] + +Let's then rearrange the terms: +\[\mathbb{X}^T \mathbb{Y} - \mathbb{X}^T \mathbb{X} \hat{\theta} = \vec{0}\] + +And finally, we end up with the \textbf{normal equation}: +\[\mathbb{X}^T \mathbb{X} \hat{\theta} = \mathbb{X}^T \mathbb{Y}\] + +Any vector \(\theta\) that minimizes MSE on a dataset must satisfy this +equation. + +If \(\mathbb{X}^T \mathbb{X}\) is invertible, we can conclude: +\[\hat{\theta} = (\mathbb{X}^T \mathbb{X})^{-1} \mathbb{X}^T \mathbb{Y}\] + +This is called the \textbf{least squares estimate} of \(\theta\): it is +the value of \(\theta\) that minimizes the squared loss. + +Note that the least squares estimate was derived under the assumption +that \(\mathbb{X}^T \mathbb{X}\) is \emph{invertible}. This condition +holds true when \(\mathbb{X}^T \mathbb{X}\) is full column rank, which, +in turn, happens when \(\mathbb{X}\) is full column rank. The proof for +why \(\mathbb{X}\) needs to be full column rank is optional and in the +Bonus section at the end. + +\section{Evaluating Model +Performance}\label{evaluating-model-performance} + +Our geometric view of multiple linear regression has taken us far! We +have identified the optimal set of parameter values to minimize MSE in a +model of multiple features. Now, we want to understand how well our +fitted model performs. + +\subsection{RMSE}\label{rmse} + +One measure of model performance is the \textbf{Root Mean Squared +Error}, or RMSE. The RMSE is simply the square root of MSE. Taking the +square root converts the value back into the original, non-squared units +of \(y_i\), which is useful for understanding the model's performance. A +low RMSE indicates more ``accurate'' predictions -- that there is a +lower average loss across the dataset. + +\[\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2}\] + +\subsection{Residual Plots}\label{residual-plots} + +When working with SLR, we generated plots of the residuals against a +single feature to understand the behavior of residuals. When working +with several features in multiple linear regression, it no longer makes +sense to consider a single feature in our residual plots. Instead, +multiple linear regression is evaluated by making plots of the residuals +against the predicted values. As was the case with SLR, a multiple +linear model performs well if its residual plot shows no patterns. + +\subsection{\texorpdfstring{Multiple +\(R^2\)}{Multiple R\^{}2}}\label{multiple-r2} + +For SLR, we used the correlation coefficient to capture the association +between the target variable and a single feature variable. In a multiple +linear model setting, we will need a performance metric that can account +for multiple features at once. \textbf{Multiple \(R^2\)}, also called +the \textbf{coefficient of determination}, is the \textbf{proportion of +variance} of our \textbf{fitted values} (predictions) \(\hat{y}_i\) to +our true values \(y_i\). It ranges from 0 to 1 and is effectively the +\emph{proportion} of variance in the observations that the \textbf{model +explains}. 
+ +\[R^2 = \frac{\text{variance of } \hat{y}_i}{\text{variance of } y_i} = \frac{\sigma^2_{\hat{y}}}{\sigma^2_y}\] + +Note that for OLS with an intercept term, for example +\(\hat{y} = \theta_0 + \theta_1x_1 + \theta_2x_2 + \cdots + \theta_px_p\), +\(R^2\) is equal to the square of the correlation between \(y\) and +\(\hat{y}\). On the other hand for SLR, \(R^2\) is equal to \(r^2\), the +correlation between \(x\) and \(y\). The proof of these last two +properties is out of scope for this course. + +Additionally, as we add more features, our fitted values tend to become +closer and closer to our actual values. Thus, \(R^2\) increases. + +Adding more features doesn't always mean our model is better though! +We'll see why later in the course. + +\section{OLS Properties}\label{ols-properties} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + When using the optimal parameter vector, our residuals + \(e = \mathbb{Y} - \hat{\mathbb{Y}}\) are orthogonal to + \(span(\mathbb{X})\). +\end{enumerate} + +\[\mathbb{X}^Te = 0 \] + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-color-frame, left=2mm, breakable, rightrule=.15mm, bottomrule=.15mm, opacityback=0, toprule=.15mm, leftrule=.75mm, arc=.35mm, colback=white] + +Proof: + +\begin{itemize} +\tightlist +\item + The optimal parameter vector, \(\hat{\theta}\), solves the normal + equations + \(\implies \hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\) +\end{itemize} + +\[\mathbb{X}^Te = \mathbb{X}^T (\mathbb{Y} - \mathbb{\hat{Y}}) \] + +\[\mathbb{X}^T (\mathbb{Y} - \mathbb{X}\hat{\theta}) = \mathbb{X}^T\mathbb{Y} - \mathbb{X}^T\mathbb{X}\hat{\theta}\] + +\begin{itemize} +\tightlist +\item + Any matrix multiplied with its own inverse is the identity matrix + \(\mathbb{I}\) +\end{itemize} + +\[\mathbb{X}^T\mathbb{Y} - (\mathbb{X}^T\mathbb{X})(\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y} = \mathbb{X}^T\mathbb{Y} - \mathbb{X}^T\mathbb{Y} = 0\] + +\end{tcolorbox} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\setcounter{enumi}{1} +\tightlist +\item + For all linear models with an \textbf{intercept term}, the \textbf{sum + of residuals is zero}. +\end{enumerate} + +\[\sum_i^n e_i = 0\] + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-color-frame, left=2mm, breakable, rightrule=.15mm, bottomrule=.15mm, opacityback=0, toprule=.15mm, leftrule=.75mm, arc=.35mm, colback=white] + +Proof: + +\begin{itemize} +\tightlist +\item + For all linear models with an \textbf{intercept term}, the average of + the predicted \(y\) values is equal to the average of the true \(y\) + values. \[\bar{y} = \bar{\hat{y}}\] +\item + Rewriting the sum of residuals as two separate sums, + \[\sum_i^n e_i = \sum_i^n y_i - \sum_i^n\hat{y}_i\] +\item + Each respective sum is a multiple of the average of the sum. 
+ \[\sum_i^n e_i = n\bar{y} - n\bar{y} = n(\bar{y} - \bar{y}) = 0\] +\end{itemize} + +\end{tcolorbox} + +To summarize: + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Model +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Estimate +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Unique? +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Constant Model + MSE & \(\hat{y} = \theta_0\) & +\(\hat{\theta}_0 = mean(y) = \bar{y}\) & \textbf{Yes}. Any set of values +has a unique mean. \\ +Constant Model + MAE & \(\hat{y} = \theta_0\) & +\(\hat{\theta}_0 = median(y)\) & \textbf{Yes}, if odd. \textbf{No}, if +even. Return the average of the middle 2 values. \\ +Simple Linear Regression + MSE & \(\hat{y} = \theta_0 + \theta_1x\) & +\(\hat{\theta}_0 = \bar{y} - \hat{\theta}_1\bar{x}\) +\(\hat{\theta}_1 = r\frac{\sigma_y}{\sigma_x}\) & \textbf{Yes}. Any set +of non-constant* values has a unique mean, SD, and correlation +coefficient. \\ +\textbf{OLS} (Linear Model + MSE) & +\(\mathbb{\hat{Y}} = \mathbb{X}\mathbb{\theta}\) & +\(\hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\) & +\textbf{Yes}, if \(\mathbb{X}\) is full column rank (all columns are +linearly independent, \# of datapoints +\textgreater\textgreater\textgreater{} \# of features). \\ +\end{longtable} + +\section{Bonus: Uniqueness of the +Solution}\label{bonus-uniqueness-of-the-solution} + +The Least Squares estimate \(\hat{\theta}\) is \textbf{unique} if and +only if \(\mathbb{X}\) is \textbf{full column rank}. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-color-frame, left=2mm, breakable, rightrule=.15mm, bottomrule=.15mm, opacityback=0, toprule=.15mm, leftrule=.75mm, arc=.35mm, colback=white] + +Proof: + +\begin{itemize} +\tightlist +\item + We know the solution to the normal equation + \(\mathbb{X}^T\mathbb{X}\hat{\theta} = \mathbb{X}^T\mathbb{Y}\) is the + least square estimate that minimizes the squared loss. +\item + \(\hat{\theta}\) has a \textbf{unique} solution \(\iff\) the square + matrix \(\mathbb{X}^T\mathbb{X}\) is \textbf{invertible} \(\iff\) + \(\mathbb{X}^T\mathbb{X}\) is full rank. + + \begin{itemize} + \tightlist + \item + The \textbf{column} rank of a square matrix is the max number of + linearly independent columns it contains. + \item + An \(n\) x \(n\) square matrix is deemed full column rank when all + of its columns are linearly independent. That is, its rank would be + equal to \(n\). + \item + \(\mathbb{X}^T\mathbb{X}\) has shape \((p + 1) \times (p + 1)\), and + therefore has max rank \(p + 1\). + \end{itemize} +\item + \(rank(\mathbb{X}^T\mathbb{X})\) = \(rank(\mathbb{X})\) (proof out of + scope). +\item + Therefore, \(\mathbb{X}^T\mathbb{X}\) has rank \(p + 1\) \(\iff\) + \(\mathbb{X}\) has rank \(p + 1\) \(\iff \mathbb{X}\) is full column + rank. +\end{itemize} + +\end{tcolorbox} + +Therefore, if \(\mathbb{X}\) is not full column rank, we will not have +unique estimates. This can happen for two major reasons. 
+ +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + If our design matrix \(\mathbb{X}\) is ``\textbf{wide}'': + + \begin{itemize} + \tightlist + \item + If n \textless{} p, then we have way more features (columns) than + observations (rows). + \item + Then \(rank(\mathbb{X})\) = min(n, p+1) \textless{} p+1, so + \(\hat{\theta}\) is not unique. + \item + Typically we have n \textgreater\textgreater{} p so this is less of + an issue. + \end{itemize} +\item + If our design matrix \(\mathbb{X}\) has features that are + \textbf{linear combinations} of other features: + + \begin{itemize} + \tightlist + \item + By definition, rank of \(\mathbb{X}\) is number of linearly + independent columns in \(\mathbb{X}\). + \item + Example: If ``Width'', ``Height'', and ``Perimeter'' are all + columns, + + \begin{itemize} + \tightlist + \item + Perimeter = 2 * Width + 2 * Height \(\rightarrow\) \(\mathbb{X}\) + is not full rank. + \end{itemize} + \item + Important with one-hot encoding (to discuss later). + \end{itemize} +\end{enumerate} + +\bookmarksetup{startatroot} + +\chapter{sklearn and Gradient +Descent}\label{sklearn-and-gradient-descent} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Apply the \texttt{sklearn} library for model creation and training +\item + Optimizing complex models +\item + Identifying cases where straight calculus or geometric arguments won't + help solve the loss function +\item + Applying gradient descent for numerical optimization +\end{itemize} + +\end{tcolorbox} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ plotly.express }\ImportTok{as}\NormalTok{ px} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{from}\NormalTok{ sklearn.linear\_model }\ImportTok{import}\NormalTok{ LinearRegression} +\NormalTok{pd.options.mode.chained\_assignment }\OperatorTok{=} \VariableTok{None} \CommentTok{\# default=\textquotesingle{}warn\textquotesingle{}} +\end{Highlighting} +\end{Shaded} + +\section{\texorpdfstring{\texttt{sklearn}}{sklearn}}\label{sklearn} + +\subsection{Implementing Derived Formulas in +Code}\label{implementing-derived-formulas-in-code} + +Throughout this lecture, we'll refer to the \texttt{penguins} dataset. 
\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd}
\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns}
\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np}

\NormalTok{penguins }\OperatorTok{=}\NormalTok{ sns.load\_dataset(}\StringTok{"penguins"}\NormalTok{)}
\NormalTok{penguins }\OperatorTok{=}\NormalTok{ penguins[penguins[}\StringTok{"species"}\NormalTok{] }\OperatorTok{==} \StringTok{"Adelie"}\NormalTok{].dropna()}
\NormalTok{penguins.head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllllll@{}}
\toprule\noalign{}
& species & island & bill\_length\_mm & bill\_depth\_mm &
flipper\_length\_mm & body\_mass\_g & sex \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & Adelie & Torgersen & 39.1 & 18.7 & 181.0 & 3750.0 & Male \\
1 & Adelie & Torgersen & 39.5 & 17.4 & 186.0 & 3800.0 & Female \\
2 & Adelie & Torgersen & 40.3 & 18.0 & 195.0 & 3250.0 & Female \\
4 & Adelie & Torgersen & 36.7 & 19.3 & 193.0 & 3450.0 & Female \\
5 & Adelie & Torgersen & 39.3 & 20.6 & 190.0 & 3650.0 & Male \\
\end{longtable}

Our goal will be to predict the value of the \texttt{"bill\_depth\_mm"}
for a particular penguin given its \texttt{"flipper\_length\_mm"} and
\texttt{"body\_mass\_g"}. We'll also add a bias column of all ones to
represent the intercept term of our models.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Add a bias column of all ones to \textasciigrave{}penguins\textasciigrave{}}
\NormalTok{penguins[}\StringTok{"bias"}\NormalTok{] }\OperatorTok{=}\NormalTok{ np.ones(}\BuiltInTok{len}\NormalTok{(penguins), dtype}\OperatorTok{=}\BuiltInTok{int}\NormalTok{) }

\CommentTok{\# Define the design matrix, X...}
\CommentTok{\# Note that we use .to\_numpy() to convert our DataFrame into a NumPy array so it is in Matrix form}
\NormalTok{X }\OperatorTok{=}\NormalTok{ penguins[[}\StringTok{"bias"}\NormalTok{, }\StringTok{"flipper\_length\_mm"}\NormalTok{, }\StringTok{"body\_mass\_g"}\NormalTok{]].to\_numpy()}

\CommentTok{\# ...as well as the target variable, Y}
\CommentTok{\# Again, we use .to\_numpy() to convert our DataFrame into a NumPy array so it is in Matrix form}
\NormalTok{Y }\OperatorTok{=}\NormalTok{ penguins[[}\StringTok{"bill\_depth\_mm"}\NormalTok{]].to\_numpy()}
\end{Highlighting}
\end{Shaded}

In the lecture on ordinary least squares, we expressed multiple linear
regression using matrix notation.

\[\hat{\mathbb{Y}} = \mathbb{X}\theta\]

We used a geometric approach to derive the following expression for the
optimal model parameters:

\[\hat{\theta} = (\mathbb{X}^T \mathbb{X})^{-1}\mathbb{X}^T \mathbb{Y}\]

That's a whole lot of matrix manipulation. How do we implement it in
\texttt{python}?

There are three operations we need to perform here: multiplying
matrices, taking transposes, and finding inverses.

\begin{itemize}
\tightlist
\item
  To perform matrix multiplication, use the \texttt{@} operator
\item
  To take a transpose, call the \texttt{.T} attribute of a
  \texttt{NumPy} array or \texttt{DataFrame}
\item
  To compute an inverse, use \texttt{NumPy}'s built-in function
  \texttt{np.linalg.inv}
\end{itemize}

Putting this all together, we can compute the OLS estimate for the
optimal model parameters, stored in the array \texttt{theta\_hat}.
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{theta\_hat }\OperatorTok{=}\NormalTok{ np.linalg.inv(X.T }\OperatorTok{@}\NormalTok{ X) }\OperatorTok{@}\NormalTok{ X.T }\OperatorTok{@}\NormalTok{ Y} +\NormalTok{theta\_hat} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +array([[1.10029953e+01], + [9.82848689e-03], + [1.47749591e-03]]) +\end{verbatim} + +To make predictions using our optimized parameter values, we +matrix-multiply the design matrix with the parameter vector: + +\[\hat{\mathbb{Y}} = \mathbb{X}\theta\] + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{Y\_hat }\OperatorTok{=}\NormalTok{ X }\OperatorTok{@}\NormalTok{ theta\_hat} +\NormalTok{pd.DataFrame(Y\_hat).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& 0 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 18.322561 \\ +1 & 18.445578 \\ +2 & 17.721412 \\ +3 & 17.997254 \\ +4 & 18.263268 \\ +\end{longtable} + +\subsection{\texorpdfstring{The \texttt{sklearn} +Workflow}{The sklearn Workflow}}\label{the-sklearn-workflow} + +We've already saved a lot of time (and avoided tedious calculations) by +translating our derived formulas into code. However, we still had to go +through the process of writing out the linear algebra ourselves. + +To make life \emph{even easier}, we can turn to the \texttt{sklearn} +\href{https://scikit-learn.org/stable/}{\texttt{python} library}. +\texttt{sklearn} is a robust library of machine learning tools used +extensively in research and industry. It is the standard for simple +machine learning tasks and gives us a wide variety of in-built modeling +frameworks and methods, so we'll keep returning to \texttt{sklearn} +techniques as we progress through Data 100. + +Regardless of the specific type of model being implemented, +\texttt{sklearn} follows a standard set of steps for creating a model: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Import the \texttt{LinearRegression} model from \texttt{sklearn} + +\begin{verbatim} +from sklearn.linear_model import LinearRegression +\end{verbatim} +\item + Create a model object. This generates a new instance of the model + class. You can think of it as making a new ``copy'' of a standard + ``template'' for a model. In code, this looks like: + +\begin{verbatim} +my_model = LinearRegression() +\end{verbatim} +\item + Fit the model to the \texttt{X} design matrix and \texttt{Y} target + vector. This calculates the optimal model parameters ``behind the + scenes'' without us explicitly working through the calculations + ourselves. The fitted parameters are then stored within the model for + use in future predictions: + +\begin{verbatim} +my_model.fit(X, Y) +\end{verbatim} +\item + Use the fitted model to make predictions on the \texttt{X} input data + using \texttt{.predict}. + +\begin{verbatim} +my_model.predict(X) +\end{verbatim} +\end{enumerate} + +To extract the fitted parameters, we can use: + +\begin{verbatim} +my_model.coef_ + +my_model.intercept_ +\end{verbatim} + +Let's put this into action with our multiple regression task! + +\textbf{1. Initialize an instance of the model class} + +\texttt{sklearn} stores ``templates'' of useful models for machine +learning. We begin the modeling process by making a ``copy'' of one of +these templates for our own use. Model initialization looks like +\texttt{ModelClass()}, where \texttt{ModelClass} is the type of model we +wish to create. 
For now, let's create a linear regression model using
\texttt{LinearRegression}.

\texttt{my\_model} is now an instance of the \texttt{LinearRegression}
class. You can think of it as the ``idea'' of a linear regression model.
We haven't trained it yet, so it doesn't know any model parameters and
cannot be used to make predictions. In fact, we haven't even told it
what data to use for modeling! It simply waits for further instructions.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{my\_model }\OperatorTok{=}\NormalTok{ LinearRegression()}
\end{Highlighting}
\end{Shaded}

\textbf{2. Train the model using \texttt{.fit}}

Before the model can make predictions, we will need to fit it to our
training data. When we fit the model, \texttt{sklearn} will compute the
optimal model parameters behind the scenes. It will then save these
model parameters to our model instance for future use.

All \texttt{sklearn} model classes include a \texttt{.fit} method, which
is used to fit the model. It takes in two inputs: the design matrix,
\texttt{X}, and the target variable, \texttt{Y}.

Let's start by fitting a model with just one feature: the flipper
length. We create a design matrix \texttt{X} by pulling out the
\texttt{"flipper\_length\_mm"} column from the \texttt{DataFrame}.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# .fit expects a 2D data design matrix, so we use double brackets to extract a DataFrame}
\NormalTok{X }\OperatorTok{=}\NormalTok{ penguins[[}\StringTok{"flipper\_length\_mm"}\NormalTok{]]}
\NormalTok{Y }\OperatorTok{=}\NormalTok{ penguins[}\StringTok{"bill\_depth\_mm"}\NormalTok{]}

\NormalTok{my\_model.fit(X, Y)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
LinearRegression()
\end{verbatim}

Notice that we use \textbf{double brackets} to extract this column. Why
double brackets instead of just single brackets? The \texttt{.fit}
method, by default, expects to receive \textbf{2-dimensional} data --
some kind of data that includes both rows and columns. Writing
\texttt{penguins{[}"flipper\_length\_mm"{]}} would return a 1D
\texttt{Series}, causing \texttt{sklearn} to raise an error. We avoid
this by writing \texttt{penguins{[}{[}"flipper\_length\_mm"{]}{]}} to
produce a 2D \texttt{DataFrame}.

And in just three lines of code, our model has determined the optimal
model parameters! Our single-feature model takes the form:

\[\text{bill depth} = \theta_0 + \theta_1 \text{flipper length}\]

Note that \texttt{LinearRegression} will automatically include an
intercept term.

The fitted model parameters are stored as attributes of the model
instance. \texttt{my\_model.intercept\_} will return the value of
\(\hat{\theta}_0\) as a scalar. \texttt{my\_model.coef\_} will return
all values \(\hat{\theta}_1,
\hat{\theta}_2, \ldots\) in an array. Because our model only contains
one feature, we see just the value of \(\hat{\theta}_1\) in the cell
below.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# The intercept term, theta\_0}
\NormalTok{my\_model.intercept\_}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
np.float64(7.297305899612313)
\end{verbatim}

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# All parameters theta\_1, ..., theta\_p}
\NormalTok{my\_model.coef\_}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
array([0.05812622])
\end{verbatim}
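As a quick sanity check (this snippet is not part of the lecture code,
and the variable names \texttt{flipper\_length} and
\texttt{manual\_prediction} are just illustrative choices), we can
recombine the fitted intercept and coefficient by hand and confirm that
they reproduce the model's prediction for a single penguin:

\begin{verbatim}
# Reconstruct the fitted single-feature model by hand:
# predicted bill depth = intercept + coefficient * flipper length
flipper_length = penguins["flipper_length_mm"].iloc[0]
manual_prediction = my_model.intercept_ + my_model.coef_[0] * flipper_length

# This value should match the first entry returned by .predict in the next step
print(manual_prediction)
\end{verbatim}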
\textbf{3. Use the fitted model to make predictions}

Now that the model has been trained, we can use it to make predictions!
To do so, we use the \texttt{.predict} method. \texttt{.predict} takes
in one argument: the design matrix that should be used to generate
predictions. To understand how the model performs on the training set,
we would pass in the training data. Alternatively, to make predictions
on unseen data, we would pass in a new dataset that wasn't used to train
the model.

Below, we call \texttt{.predict} to generate model predictions on the
original training data. As before, we use double brackets to ensure that
we extract 2-dimensional data.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{Y\_hat\_one\_feature }\OperatorTok{=}\NormalTok{ my\_model.predict(penguins[[}\StringTok{"flipper\_length\_mm"}\NormalTok{]])}

\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"The RMSE of the model is }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{sqrt(np.mean((Y}\OperatorTok{{-}}\NormalTok{Y\_hat\_one\_feature)}\OperatorTok{**}\DecValTok{2}\NormalTok{))}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
The RMSE of the model is 1.154936309923901
\end{verbatim}

What if we wanted a model with two features?

\[\text{bill depth} = \theta_0 + \theta_1 \text{flipper length} + \theta_2 \text{body mass}\]

We repeat this three-step process by initializing a new model object,
then calling \texttt{.fit} and \texttt{.predict} as before.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Step 1: initialize LinearRegression model}
\NormalTok{two\_feature\_model }\OperatorTok{=}\NormalTok{ LinearRegression()}

\CommentTok{\# Step 2: fit the model}
\NormalTok{X\_two\_features }\OperatorTok{=}\NormalTok{ penguins[[}\StringTok{"flipper\_length\_mm"}\NormalTok{, }\StringTok{"body\_mass\_g"}\NormalTok{]]}
\NormalTok{Y }\OperatorTok{=}\NormalTok{ penguins[}\StringTok{"bill\_depth\_mm"}\NormalTok{]}

\NormalTok{two\_feature\_model.fit(X\_two\_features, Y)}

\CommentTok{\# Step 3: make predictions}
\NormalTok{Y\_hat\_two\_features }\OperatorTok{=}\NormalTok{ two\_feature\_model.predict(X\_two\_features)}

\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"The RMSE of the model is }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{sqrt(np.mean((Y}\OperatorTok{{-}}\NormalTok{Y\_hat\_two\_features)}\OperatorTok{**}\DecValTok{2}\NormalTok{))}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
The RMSE of the model is 0.9881331104079043
\end{verbatim}

We can also see that we obtain the same predictions using
\texttt{sklearn} as we did when applying the ordinary least squares
formula before!
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.DataFrame(\{}\StringTok{"Y\_hat from OLS"}\NormalTok{:np.squeeze(Y\_hat), }\StringTok{"Y\_hat from sklearn"}\NormalTok{:Y\_hat\_two\_features\}).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Y\_hat from OLS & Y\_hat from sklearn \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 18.322561 & 18.322561 \\ +1 & 18.445578 & 18.445578 \\ +2 & 17.721412 & 17.721412 \\ +3 & 17.997254 & 17.997254 \\ +4 & 18.263268 & 18.263268 \\ +\end{longtable} + +\section{Gradient Descent}\label{gradient-descent} + +At this point, we've grown quite familiar with the process of choosing a +model and a corresponding loss function and optimizing parameters by +choosing the values of \(\theta\) that minimize the loss function. So +far, we've optimized \(\theta\) by + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Using calculus to take the derivative of the loss function with + respect to \(\theta\), setting it equal to 0, and solving for + \(\theta\). +\item + Using the geometric argument of orthogonality to derive the OLS + solution + \(\hat{\theta} = (\mathbb{X}^T \mathbb{X})^{-1}\mathbb{X}^T \mathbb{Y}\). +\end{enumerate} + +One thing to note, however, is that the techniques we used above can +only be applied if we make some big assumptions. For the calculus +approach, we assumed that the loss function was differentiable at all +points and that we could algebraically solve for the zero points of the +derivative; for the geometric approach, OLS \emph{only} applies when +using a linear model with MSE loss. What happens when we have more +complex models with different, more complex loss functions? The +techniques we've learned so far will not work, so we need a new +optimization technique: \textbf{gradient descent}. + +\begin{quote} +\textbf{BIG IDEA}: use an iterative algorithm to numerically compute the +minimum of the loss. +\end{quote} + +\subsection{Minimizing an Arbitrary 1D +Function}\label{minimizing-an-arbitrary-1d-function} + +Let's consider an arbitrary function. Our goal is to find the value of +\(x\) that minimizes this function. + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ arbitrary(x):} + \ControlFlowTok{return}\NormalTok{ (x}\OperatorTok{**}\DecValTok{4} \OperatorTok{{-}} \DecValTok{15}\OperatorTok{*}\NormalTok{x}\OperatorTok{**}\DecValTok{3} \OperatorTok{+} \DecValTok{80}\OperatorTok{*}\NormalTok{x}\OperatorTok{**}\DecValTok{2} \OperatorTok{{-}} \DecValTok{180}\OperatorTok{*}\NormalTok{x }\OperatorTok{+} \DecValTok{144}\NormalTok{)}\OperatorTok{/}\DecValTok{10} +\end{Highlighting} +\end{Shaded} + +\subsubsection{The Naive Approach: Guess and +Check}\label{the-naive-approach-guess-and-check} + +Above, we saw that the minimum is somewhere around 5.3. Let's see if we +can figure out how to find the exact minimum algorithmically from +scratch. One very slow (and terrible) way would be manual +guess-and-check. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{arbitrary(}\DecValTok{6}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +0.0 +\end{verbatim} + +A somewhat better (but still slow) approach is to use brute force to try +out a bunch of x values and return the one that yields the lowest loss. + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ simple\_minimize(f, xs):} + \CommentTok{\# Takes in a function f and a set of values xs. 
} + \CommentTok{\# Calculates the value of the function f at all values x in xs} + \CommentTok{\# Takes the minimum value of f(x) and returns the corresponding value x } +\NormalTok{ y }\OperatorTok{=}\NormalTok{ [f(x) }\ControlFlowTok{for}\NormalTok{ x }\KeywordTok{in}\NormalTok{ xs] } + \ControlFlowTok{return}\NormalTok{ xs[np.argmin(y)]} + +\NormalTok{guesses }\OperatorTok{=}\NormalTok{ [}\FloatTok{5.3}\NormalTok{, }\FloatTok{5.31}\NormalTok{, }\FloatTok{5.32}\NormalTok{, }\FloatTok{5.33}\NormalTok{, }\FloatTok{5.34}\NormalTok{, }\FloatTok{5.35}\NormalTok{]} +\NormalTok{simple\_minimize(arbitrary, guesses)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +5.33 +\end{verbatim} + +This process is essentially the same as before where we made a graphical +plot, it's just that we're only looking at 20 selected points. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{xs }\OperatorTok{=}\NormalTok{ np.linspace(}\DecValTok{1}\NormalTok{, }\DecValTok{7}\NormalTok{, }\DecValTok{200}\NormalTok{)} +\NormalTok{sparse\_xs }\OperatorTok{=}\NormalTok{ np.linspace(}\DecValTok{1}\NormalTok{, }\DecValTok{7}\NormalTok{, }\DecValTok{5}\NormalTok{)} + +\NormalTok{ys }\OperatorTok{=}\NormalTok{ arbitrary(xs)} +\NormalTok{sparse\_ys }\OperatorTok{=}\NormalTok{ arbitrary(sparse\_xs)} + +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.line(x }\OperatorTok{=}\NormalTok{ xs, y }\OperatorTok{=}\NormalTok{ arbitrary(xs))} +\NormalTok{fig.add\_scatter(x }\OperatorTok{=}\NormalTok{ sparse\_xs, y }\OperatorTok{=}\NormalTok{ arbitrary(sparse\_xs), mode }\OperatorTok{=} \StringTok{"markers"}\NormalTok{)} +\NormalTok{fig.update\_layout(showlegend}\OperatorTok{=} \VariableTok{False}\NormalTok{)} +\NormalTok{fig.update\_layout(autosize}\OperatorTok{=}\VariableTok{False}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{)} +\NormalTok{fig.show()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +This basic approach suffers from three major flaws: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + If the minimum is outside our range of guesses, the answer will be + completely wrong. +\item + Even if our range of guesses is correct, if the guesses are too + coarse, our answer will be inaccurate. +\item + It is \emph{very} computationally inefficient, considering potentially + vast numbers of guesses that are useless. +\end{enumerate} + +\subsubsection{\texorpdfstring{\texttt{Scipy.optimize.minimize}}{Scipy.optimize.minimize}}\label{scipy.optimize.minimize} + +One way to minimize this mathematical function is to use the +\texttt{scipy.optimize.minimize} function. It takes a function and a +starting guess and tries to find the minimum. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ scipy.optimize }\ImportTok{import}\NormalTok{ minimize} + +\CommentTok{\# takes a function f and a starting point x0 and returns a readout } +\CommentTok{\# with the optimal input value of x which minimizes f} +\NormalTok{minimize(arbitrary, x0 }\OperatorTok{=} \FloatTok{3.5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + message: Optimization terminated successfully. 
 success: True
  status: 0
     fun: -0.13827491292966557
       x: [ 2.393e+00]
     nit: 3
     jac: [ 6.486e-06]
hess_inv: [[ 7.385e-01]]
    nfev: 20
    njev: 10
\end{verbatim}

\texttt{scipy.optimize.minimize} is great. It may also seem a bit
magical. How could you write a function that can find the minimum of any
mathematical function? There are a number of ways to do this, which
we'll explore in today's lecture, eventually arriving at the important
idea of \textbf{gradient descent}, which is closely related to the
methods that \texttt{scipy.optimize.minimize} uses.

It turns out that many model-fitting routines rely on the same idea
under the hood. While \texttt{sklearn}'s \texttt{LinearRegression} can
be fit with a direct least squares solver, most of the more complex
models we will encounter later in the course have no closed-form
solution and are instead fit with gradient descent or closely related
iterative methods. Gradient descent is also how much of machine learning
works, including even advanced neural network models.

In Data 100, the gradient descent process will usually be invisible to
us, hidden beneath an abstraction layer. However, to be good data
scientists, it's important that we know the underlying principles that
optimization functions harness to find optimal parameters.

\subsubsection{Digging into Gradient
Descent}\label{digging-into-gradient-descent}

Looking at the function across this domain, it is clear that the
function's minimum value occurs around \(\theta = 5.3\). Let's pretend
for a moment that we \emph{couldn't} see the full view of the cost
function. How would we guess the value of \(\theta\) that minimizes the
function?

It turns out that the first derivative of the function can give us a
clue. In the plots below, the line indicates the value of the derivative
at each value of \(\theta\). The derivative is negative where it is red
and positive where it is green.

Say we make a guess for the minimizing value of \(\theta\). Remember
that we read plots from left to right, and assume that our starting
\(\theta\) value is to the left of the optimal \(\hat{\theta}\). If the
guess ``undershoots'' the true minimizing value -- our guess for
\(\theta\) is lower than the value of the \(\hat{\theta}\) that
minimizes the function -- the derivative will be \textbf{negative}. This
means that if we increase \(\theta\) (move further to the right), then
we \textbf{can decrease} our loss function further. If this guess
``overshoots'' the true minimizing value, the derivative will be
positive, implying the converse.

We can use this pattern to help formulate our next guess for the optimal
\(\hat{\theta}\). Consider the case where we've undershot \(\theta\) by
guessing too low of a value. We'll want our next guess to be greater in
value than our previous guess -- that is, we want to shift our guess to
the right. You can think of this as following the slope ``downhill'' to
the function's minimum value.

If we've overshot \(\hat{\theta}\) by guessing too high of a value,
we'll want our next guess to be lower in value -- we want to shift our
guess for \(\hat{\theta}\) to the left.

In other words, the derivative of the function at each point tells us
the direction of our next guess.

\begin{itemize}
\tightlist
\item
  A negative slope means we want to step to the right, or move in the
  \emph{positive} direction.
\item
  A positive slope means we want to step to the left, or move in the
  \emph{negative} direction.
\end{itemize}

\subsubsection{Algorithm Attempt 1}\label{algorithm-attempt-1}

Armed with this knowledge, let's try to see if we can use the derivative
to optimize the function.

We start by making some guess for the minimizing value of \(x\).
Then, +we look at the derivative of the function at this value of \(x\), and +step downhill in the \emph{opposite} direction. We can express our new +rule as a recurrence relation: + +\[x^{(t+1)} = x^{(t)} - \frac{d}{dx} f(x^{(t)})\] + +Translating this statement into English: we obtain \textbf{our next +guess} for the minimizing value of \(x\) at timestep \(t+1\) +(\(x^{(t+1)}\)) by taking \textbf{our last guess} (\(x^{(t)}\)) and +subtracting the \textbf{derivative of the function} at that point +(\(\frac{d}{dx} f(x^{(t)})\)). + +A few steps are shown below, where the old step is shown as a +transparent point, and the next step taken is the green-filled dot. + +Looking pretty good! We do have a problem though -- once we arrive close +to the minimum value of the function, our guesses ``bounce'' back and +forth past the minimum without ever reaching it. + +In other words, each step we take when updating our guess moves us too +far. We can address this by decreasing the size of each step. + +\subsubsection{Algorithm Attempt 2}\label{algorithm-attempt-2} + +Let's update our algorithm to use a \textbf{learning rate} (also +sometimes called the step size), which controls how far we move with +each update. We represent the learning rate with \(\alpha\). + +\[x^{(t+1)} = x^{(t)} - \alpha \frac{d}{dx} f(x^{(t)})\] + +A small \(\alpha\) means that we will take small steps; a large +\(\alpha\) means we will take large steps. When do we stop updating? We +stop updating either after a fixed number of updates or after a +subsequent update doesn't change much. + +Updating our function to use \(\alpha=0.3\), our algorithm successfully +\textbf{converges} (settles on a solution and stops updating +significantly, or at all) on the minimum value. + +\subsection{Convexity}\label{convexity} + +In our analysis above, we focused our attention on the global minimum of +the loss function. You may be wondering: what about the local minimum +that's just to the left? + +If we had chosen a different starting guess for \(\theta\), or a +different value for the learning rate \(\alpha\), our algorithm may have +gotten ``stuck'' and converged on the local minimum, rather than on the +true optimum value of loss. + +If the loss function is \textbf{convex}, gradient descent is guaranteed +to converge and find the global minimum of the objective function. +Formally, a function \(f\) is convex if: +\[tf(a) + (1-t)f(b) \geq f(ta + (1-t)b)\] for all \(a, b\) in the domain +of \(f\) and \(t \in [0, 1]\). + +To put this into words: if you drew a line between any two points on the +curve, all values on the curve must be \emph{on or below} the line. +Importantly, any local minimum of a convex function is also its global +minimum so we avoid the situation where the algorithm converges on some +critical point that is not the minimum of the function. + +In summary, non-convex loss functions can cause problems with +optimization. This means that our choice of loss function is a key +factor in our modeling process. It turns out that MSE \emph{is} convex, +which is a major reason why it is such a popular choice of loss +function. Gradient descent is only guaranteed to converge (given enough +iterations and an appropriate step size) for convex functions. + +\subsection{Gradient Descent in 1 +Dimension}\label{gradient-descent-in-1-dimension} + +\begin{quote} +\textbf{Terminology clarification}: In past lectures, we have used +``loss'' to refer to the error incurred on a \emph{single} datapoint. 
In +applications, we usually care more about the average error across +\emph{all} datapoints. Going forward, we will take the ``model's loss'' +to mean the model's average error across the dataset. This is sometimes +also known as the empirical risk, cost function, or objective function. +\[L(\theta) = R(\theta) = \frac{1}{n} \sum_{i=1}^{n} L(y, \hat{y})\] +\end{quote} + +In our discussion above, we worked with some arbitrary function \(f\). +As data scientists, we will almost always work with gradient descent in +the context of optimizing \emph{models} -- specifically, we want to +apply gradient descent to find the minimum of a \emph{loss function}. In +a modeling context, our goal is to minimize a loss function by choosing +the minimizing model \emph{parameters}. + +Recall our modeling workflow from the past few lectures: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Define a model with some parameters \(\theta_i\) +\item + Choose a loss function +\item + Select the values of \(\theta_i\) that minimize the loss function on + the data +\end{enumerate} + +Gradient descent is a powerful technique for completing this last task. +By applying the gradient descent algorithm, we can select values for our +parameters \(\theta_i\) that will lead to the model having minimal loss +on the training data. + +When using gradient descent in a modeling context, we: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Make guesses for the minimizing \(\theta_i\) +\item + Compute the derivative of the loss function \(L\) +\end{enumerate} + +We can ``translate'' our gradient descent rule from before by replacing +\(x\) with \(\theta\) and \(f\) with \(L\): + +\[\theta^{(t+1)} = \theta^{(t)} - \alpha \frac{d}{d\theta} L(\theta^{(t)})\] + +\subsubsection{\texorpdfstring{Gradient Descent on the \texttt{tips} +Dataset}{Gradient Descent on the tips Dataset}}\label{gradient-descent-on-the-tips-dataset} + +To see this in action, let's consider a case where we have a linear +model with no offset. We want to predict the tip (y) given the price of +a meal (x). To do this, we + +\begin{itemize} +\tightlist +\item + Choose a model: \(\hat{y} = \theta_1 x\), +\item + Choose a loss function: + \(L(\theta) = MSE(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \theta_1x_i)^2\). +\end{itemize} + +Let's apply our \texttt{gradient\_descent} function from before to +optimize our model on the \texttt{tips} dataset. We will try to select +the best parameter \(\theta_i\) to predict the \texttt{tip} \(y\) from +the \texttt{total\_bill} \(x\). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{df }\OperatorTok{=}\NormalTok{ sns.load\_dataset(}\StringTok{"tips"}\NormalTok{)} +\NormalTok{df.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& total\_bill & tip & sex & smoker & day & time & size \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 16.99 & 1.01 & Female & No & Sun & Dinner & 2 \\ +1 & 10.34 & 1.66 & Male & No & Sun & Dinner & 3 \\ +2 & 21.01 & 3.50 & Male & No & Sun & Dinner & 3 \\ +3 & 23.68 & 3.31 & Male & No & Sun & Dinner & 2 \\ +4 & 24.59 & 3.61 & Female & No & Sun & Dinner & 4 \\ +\end{longtable} + +We can visualize the value of the MSE on our dataset for different +possible choices of \(\theta_1\). To optimize our model, we want to +select the value of \(\theta_1\) that leads to the lowest MSE. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ plotly.graph\_objects }\ImportTok{as}\NormalTok{ go} + +\KeywordTok{def}\NormalTok{ derivative\_arbitrary(x):} + \ControlFlowTok{return}\NormalTok{ (}\DecValTok{4}\OperatorTok{*}\NormalTok{x}\OperatorTok{**}\DecValTok{3} \OperatorTok{{-}} \DecValTok{45}\OperatorTok{*}\NormalTok{x}\OperatorTok{**}\DecValTok{2} \OperatorTok{+} \DecValTok{160}\OperatorTok{*}\NormalTok{x }\OperatorTok{{-}} \DecValTok{180}\NormalTok{)}\OperatorTok{/}\DecValTok{10} + +\NormalTok{fig }\OperatorTok{=}\NormalTok{ go.Figure()} +\NormalTok{roots }\OperatorTok{=}\NormalTok{ np.array([}\FloatTok{2.3927}\NormalTok{, }\FloatTok{3.5309}\NormalTok{, }\FloatTok{5.3263}\NormalTok{])} + +\NormalTok{fig.add\_trace(go.Scatter(x }\OperatorTok{=}\NormalTok{ xs, y }\OperatorTok{=}\NormalTok{ arbitrary(xs), } +\NormalTok{ mode }\OperatorTok{=} \StringTok{"lines"}\NormalTok{, name }\OperatorTok{=} \StringTok{"f"}\NormalTok{))} +\NormalTok{fig.add\_trace(go.Scatter(x }\OperatorTok{=}\NormalTok{ xs, y }\OperatorTok{=}\NormalTok{ derivative\_arbitrary(xs), } +\NormalTok{ mode }\OperatorTok{=} \StringTok{"lines"}\NormalTok{, name }\OperatorTok{=} \StringTok{"df"}\NormalTok{, line }\OperatorTok{=}\NormalTok{ \{}\StringTok{"dash"}\NormalTok{: }\StringTok{"dash"}\NormalTok{\}))} +\NormalTok{fig.add\_trace(go.Scatter(x }\OperatorTok{=}\NormalTok{ np.array(roots), y }\OperatorTok{=} \DecValTok{0}\OperatorTok{*}\NormalTok{roots, } +\NormalTok{ mode }\OperatorTok{=} \StringTok{"markers"}\NormalTok{, name }\OperatorTok{=} \StringTok{"df = zero"}\NormalTok{, marker\_size }\OperatorTok{=} \DecValTok{12}\NormalTok{))} +\NormalTok{fig.update\_layout(font\_size }\OperatorTok{=} \DecValTok{20}\NormalTok{, yaxis\_range}\OperatorTok{=}\NormalTok{[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{, }\DecValTok{3}\NormalTok{])} +\NormalTok{fig.update\_layout(autosize}\OperatorTok{=}\VariableTok{False}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{)} +\NormalTok{fig.show()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +To apply gradient descent, we need to compute the derivative of the loss +function with respect to our parameter \(\theta_1\). + +\begin{itemize} +\tightlist +\item + Given our loss function, + \[L(\theta) = MSE(\theta) = \frac{1}{n} \sum_{i=1}^n (y_i - \theta_1x_i)^2\] +\item + We take the derivative with respect to \(\theta_1\) + \[\frac{\partial}{\partial \theta_{1}} L(\theta_1^{(t)}) = \frac{-2}{n} \sum_{i=1}^n (y_i - \theta_1^{(t)} x_i) x_i\] +\item + Which results in the gradient descent update rule + \[\theta_1^{(t+1)} = \theta_1^{(t)} - \alpha \frac{d}{d\theta}L(\theta_1^{(t)})\] +\end{itemize} + +for some learning rate \(\alpha\). + +Implementing this in code, we can visualize the MSE loss on the +\texttt{tips} data. \textbf{MSE is convex}, so there is one global +minimum. + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ gradient\_descent(df, initial\_guess, alpha, n):} + \CommentTok{"""Performs n steps of gradient descent on df using learning rate alpha starting} +\CommentTok{ from initial\_guess. 
Returns a numpy array of all guesses over time."""} +\NormalTok{ guesses }\OperatorTok{=}\NormalTok{ [initial\_guess]} +\NormalTok{ current\_guess }\OperatorTok{=}\NormalTok{ initial\_guess} + \ControlFlowTok{while} \BuiltInTok{len}\NormalTok{(guesses) }\OperatorTok{\textless{}}\NormalTok{ n:} +\NormalTok{ current\_guess }\OperatorTok{=}\NormalTok{ current\_guess }\OperatorTok{{-}}\NormalTok{ alpha }\OperatorTok{*}\NormalTok{ df(current\_guess)} +\NormalTok{ guesses.append(current\_guess)} + + \ControlFlowTok{return}\NormalTok{ np.array(guesses)} + +\KeywordTok{def}\NormalTok{ mse\_single\_arg(theta\_1):} + \CommentTok{"""Returns the MSE on our data for the given theta1"""} +\NormalTok{ x }\OperatorTok{=}\NormalTok{ df[}\StringTok{"total\_bill"}\NormalTok{]} +\NormalTok{ y\_obs }\OperatorTok{=}\NormalTok{ df[}\StringTok{"tip"}\NormalTok{]} +\NormalTok{ y\_hat }\OperatorTok{=}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ x} + \ControlFlowTok{return}\NormalTok{ np.mean((y\_hat }\OperatorTok{{-}}\NormalTok{ y\_obs) }\OperatorTok{**} \DecValTok{2}\NormalTok{)} + +\KeywordTok{def}\NormalTok{ mse\_loss\_derivative\_single\_arg(theta\_1):} + \CommentTok{"""Returns the derivative of the MSE on our data for the given theta1"""} +\NormalTok{ x }\OperatorTok{=}\NormalTok{ df[}\StringTok{"total\_bill"}\NormalTok{]} +\NormalTok{ y\_obs }\OperatorTok{=}\NormalTok{ df[}\StringTok{"tip"}\NormalTok{]} +\NormalTok{ y\_hat }\OperatorTok{=}\NormalTok{ theta\_1 }\OperatorTok{*}\NormalTok{ x} + + \ControlFlowTok{return}\NormalTok{ np.mean(}\DecValTok{2} \OperatorTok{*}\NormalTok{ (y\_hat }\OperatorTok{{-}}\NormalTok{ y\_obs) }\OperatorTok{*}\NormalTok{ x)} + +\NormalTok{loss\_df }\OperatorTok{=}\NormalTok{ pd.DataFrame(\{}\StringTok{"theta\_1"}\NormalTok{:np.linspace(}\OperatorTok{{-}}\FloatTok{1.5}\NormalTok{, }\DecValTok{1}\NormalTok{), }\StringTok{"MSE"}\NormalTok{:[mse\_single\_arg(theta\_1) }\ControlFlowTok{for}\NormalTok{ theta\_1 }\KeywordTok{in}\NormalTok{ np.linspace(}\OperatorTok{{-}}\FloatTok{1.5}\NormalTok{, }\DecValTok{1}\NormalTok{)]\})} + +\NormalTok{trajectory }\OperatorTok{=}\NormalTok{ gradient\_descent(mse\_loss\_derivative\_single\_arg, }\OperatorTok{{-}}\FloatTok{0.5}\NormalTok{, }\FloatTok{0.0001}\NormalTok{, }\DecValTok{100}\NormalTok{)} + +\NormalTok{plt.plot(loss\_df[}\StringTok{"theta\_1"}\NormalTok{], loss\_df[}\StringTok{"MSE"}\NormalTok{])} +\NormalTok{plt.scatter(trajectory, [mse\_single\_arg(guess) }\ControlFlowTok{for}\NormalTok{ guess }\KeywordTok{in}\NormalTok{ trajectory], c}\OperatorTok{=}\StringTok{"white"}\NormalTok{, edgecolor}\OperatorTok{=}\StringTok{"firebrick"}\NormalTok{)} +\NormalTok{plt.scatter(trajectory[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{], mse\_single\_arg(trajectory[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{]), c}\OperatorTok{=}\StringTok{"firebrick"}\NormalTok{)} +\NormalTok{plt.xlabel(}\VerbatimStringTok{r"$\textbackslash{}theta\_1$"}\NormalTok{)} +\NormalTok{plt.ylabel(}\VerbatimStringTok{r"$L(\textbackslash{}theta\_1)$"}\NormalTok{)}\OperatorTok{;} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"Final guess for theta\_1: }\SpecialCharTok{\{}\NormalTok{trajectory[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Final guess for theta_1: 0.14369554654231262 +\end{verbatim} + +\includegraphics{gradient_descent/gradient_descent_files/figure-pdf/cell-21-output-2.pdf} + +\subsection{Gradient Descent on Multi-Dimensional 
Models}\label{gradient-descent-on-multi-dimensional-models}

The function we worked with above was one-dimensional -- we were only
minimizing the function with respect to a single parameter, \(\theta\).
However, models usually have a cost function with multiple parameters
that need to be optimized. For example, simple linear regression has 2
parameters: \[\hat{y} = \theta_0 + \theta_1x\] and multiple linear
regression has \(p+1\) parameters:
\[\hat{\mathbb{Y}} = \theta_0 + \theta_1 \Bbb{X}_{:,1} + \theta_2 \Bbb{X}_{:,2} + \cdots + \theta_p \Bbb{X}_{:,p}\]

We'll need to expand gradient descent so we can update our guesses for
all model parameters in one go.

With multiple parameters to optimize, we consider a \textbf{loss
surface}, or the model's loss for a particular \emph{combination} of
possible parameter values.

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{import}\NormalTok{ plotly.graph\_objects }\ImportTok{as}\NormalTok{ go}


\KeywordTok{def}\NormalTok{ mse\_loss(theta, X, y\_obs):}
\NormalTok{    y\_hat }\OperatorTok{=}\NormalTok{ X }\OperatorTok{@}\NormalTok{ theta}
    \ControlFlowTok{return}\NormalTok{ np.mean((y\_hat }\OperatorTok{{-}}\NormalTok{ y\_obs) }\OperatorTok{**} \DecValTok{2}\NormalTok{)}

\NormalTok{tips\_with\_bias }\OperatorTok{=}\NormalTok{ df.copy()}
\NormalTok{tips\_with\_bias[}\StringTok{"bias"}\NormalTok{] }\OperatorTok{=} \DecValTok{1}
\NormalTok{tips\_with\_bias }\OperatorTok{=}\NormalTok{ tips\_with\_bias[[}\StringTok{"bias"}\NormalTok{, }\StringTok{"total\_bill"}\NormalTok{]]}

\NormalTok{uvalues }\OperatorTok{=}\NormalTok{ np.linspace(}\DecValTok{0}\NormalTok{, }\DecValTok{2}\NormalTok{, }\DecValTok{10}\NormalTok{)}
\NormalTok{vvalues }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\FloatTok{0.1}\NormalTok{, }\FloatTok{0.35}\NormalTok{, }\DecValTok{10}\NormalTok{)}
\NormalTok{(u,v) }\OperatorTok{=}\NormalTok{ np.meshgrid(uvalues, vvalues)}
\NormalTok{thetas }\OperatorTok{=}\NormalTok{ np.vstack((u.flatten(),v.flatten()))}

\KeywordTok{def}\NormalTok{ mse\_loss\_single\_arg(theta):}
    \ControlFlowTok{return}\NormalTok{ mse\_loss(theta, tips\_with\_bias, df[}\StringTok{"tip"}\NormalTok{])}

\NormalTok{MSE }\OperatorTok{=}\NormalTok{ np.array([mse\_loss\_single\_arg(t) }\ControlFlowTok{for}\NormalTok{ t }\KeywordTok{in}\NormalTok{ thetas.T])}

\NormalTok{loss\_surface }\OperatorTok{=}\NormalTok{ go.Surface(x}\OperatorTok{=}\NormalTok{u, y}\OperatorTok{=}\NormalTok{v, z}\OperatorTok{=}\NormalTok{np.reshape(MSE, u.shape))}

\NormalTok{ind }\OperatorTok{=}\NormalTok{ np.argmin(MSE)}
\NormalTok{optimal\_point }\OperatorTok{=}\NormalTok{ go.Scatter3d(name }\OperatorTok{=} \StringTok{"Optimal Point"}\NormalTok{,}
\NormalTok{            x }\OperatorTok{=}\NormalTok{ [thetas.T[ind,}\DecValTok{0}\NormalTok{]], y }\OperatorTok{=}\NormalTok{ [thetas.T[ind,}\DecValTok{1}\NormalTok{]], }
\NormalTok{            z }\OperatorTok{=}\NormalTok{ [MSE[ind]],}
\NormalTok{            marker}\OperatorTok{=}\BuiltInTok{dict}\NormalTok{(size}\OperatorTok{=}\DecValTok{10}\NormalTok{, color}\OperatorTok{=}\StringTok{"red"}\NormalTok{))}

\NormalTok{fig }\OperatorTok{=}\NormalTok{ go.Figure(data}\OperatorTok{=}\NormalTok{[loss\_surface, optimal\_point])}
\NormalTok{fig.update\_layout(scene }\OperatorTok{=} \BuiltInTok{dict}\NormalTok{(}
\NormalTok{    xaxis\_title }\OperatorTok{=} \StringTok{"theta0"}\NormalTok{,}
\NormalTok{    yaxis\_title }\OperatorTok{=} \StringTok{"theta1"}\NormalTok{,}
\NormalTok{    zaxis\_title }\OperatorTok{=} \StringTok{"MSE"}\NormalTok{), autosize}\OperatorTok{=}\VariableTok{False}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{)}

\NormalTok{fig.show()}
\end{Highlighting}
\end{Shaded}
autosize}\OperatorTok{=}\VariableTok{False}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{)} + +\NormalTok{fig.show()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +We can also visualize a bird's-eye view of the loss surface from above +using a contour plot: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{contour }\OperatorTok{=}\NormalTok{ go.Contour(x}\OperatorTok{=}\NormalTok{u[}\DecValTok{0}\NormalTok{], y}\OperatorTok{=}\NormalTok{v[:, }\DecValTok{0}\NormalTok{], z}\OperatorTok{=}\NormalTok{np.reshape(MSE, u.shape))} +\NormalTok{fig }\OperatorTok{=}\NormalTok{ go.Figure(contour)} +\NormalTok{fig.update\_layout(} +\NormalTok{ xaxis\_title }\OperatorTok{=} \StringTok{"theta0"}\NormalTok{,} +\NormalTok{ yaxis\_title }\OperatorTok{=} \StringTok{"theta1"}\NormalTok{, autosize}\OperatorTok{=}\VariableTok{False}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{)} + +\NormalTok{fig.show()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +\subsubsection{The Gradient Vector}\label{the-gradient-vector} + +As before, the derivative of the loss function tells us the best way +towards the minimum value. + +On a 2D (or higher) surface, the best way to go down (gradient) is +described by a \emph{vector}. + +\begin{quote} +Math Aside: Partial Derivatives +\end{quote} + +\begin{quote} +\begin{itemize} +\tightlist +\item + For an equation with multiple variables, we take a \textbf{partial + derivative} by differentiating with respect to just one variable at a + time. The partial derivative is denoted with a \(\partial\). + Intuitively, we want to see how the function changes if we only vary + one variable while holding other variables constant. +\item + Using \(f(x, y) = 3x^2 + y\) as an example, + + \begin{itemize} + \tightlist + \item + taking the partial derivative with respect to x and treating y as a + constant gives us \(\frac{\partial f}{\partial x} = 6x\) + \item + taking the partial derivative with respect to y and treating x as a + constant gives us \(\frac{\partial f}{\partial y} = 1\) + \end{itemize} +\end{itemize} +\end{quote} + +For the \emph{vector} of parameter values +\(\vec{\theta} = \begin{bmatrix} + \theta_{0} \\ + \theta_{1} \\ + \end{bmatrix}\), we take the \emph{partial derivative} of loss +with respect to each parameter: \(\frac{\partial L}{\partial \theta_0}\) +and \(\frac{\partial L}{\partial \theta_1}\). + +\begin{quote} +For example, consider the 2D function: +\[f(\theta_0, \theta_1) = 8 \theta_0^2 + 3\theta_0\theta_1\] For a +function of 2 variables \(f(\theta_0, \theta_1)\), we define the +gradient \[ +\begin{align} +\frac{\partial f}{\partial \theta_{0}} &= 16\theta_0 + 3\theta_1 \\ +\frac{\partial f}{\partial \theta_{1}} &= 3\theta_0 \\ +\nabla_{\vec{\theta}} f(\vec{\theta}) &= \begin{bmatrix} 16\theta_0 + 3\theta_1 \\ 3\theta_0 \\ \end{bmatrix} +\end{align} +\] +\end{quote} + +The \textbf{gradient vector} of a generic function of \(p+1\) variables +is therefore +\[\nabla_{\vec{\theta}} L = \begin{bmatrix} \frac{\partial L}{\partial \theta_0} \\ \frac{\partial L}{\partial \theta_1} \\ \vdots \end{bmatrix}\] +where \(\nabla_\theta L\) always points in the downhill direction of the +surface. 
We can interpret each gradient as: ``If I nudge the \(i\)th +model weight, what happens to loss?'' + +We can use this to update our 1D gradient rule for models with multiple +parameters. + +\begin{itemize} +\item + Recall our 1D update rule: + \[\theta^{(t+1)} = \theta^{(t)} - \alpha \frac{d}{d\theta}L(\theta^{(t)})\] +\item + For models with multiple parameters, we work in terms of vectors: + \[\begin{bmatrix} + \theta_{0}^{(t+1)} \\ + \theta_{1}^{(t+1)} \\ + \vdots + \end{bmatrix} = \begin{bmatrix} + \theta_{0}^{(t)} \\ + \theta_{1}^{(t)} \\ + \vdots + \end{bmatrix} - \alpha \begin{bmatrix} + \frac{\partial L}{\partial \theta_{0}} \\ + \frac{\partial L}{\partial \theta_{1}} \\ + \vdots \\ + \end{bmatrix}\] +\item + Written in a more compact form, + \[\vec{\theta}^{(t+1)} = \vec{\theta}^{(t)} - \alpha \nabla_{\vec{\theta}} L(\theta^{(t)}) \] + + \begin{itemize} + \tightlist + \item + \(\theta\) is a vector with our model weights + \item + \(L\) is the loss function + \item + \(\alpha\) is the learning rate (ours is constant, but other + techniques use an \(\alpha\) that decreases over time) + \item + \(\vec{\theta}^{(t)}\) is the current value of \(\theta\) + \item + \(\vec{\theta}^{(t+1)}\) is the next value of \(\theta\) + \item + \(\nabla_{\vec{\theta}} L(\theta^{(t)})\) is the gradient of the + loss function evaluated at the current \(\vec{\theta}^{(t)}\) + \end{itemize} +\end{itemize} + +\subsection{Batch Gradient Descent and Stochastic Gradient +Descent}\label{batch-gradient-descent-and-stochastic-gradient-descent} + +Formally, the algorithm we derived above is called \textbf{batch +gradient descent.} For each iteration of the algorithm, the derivative +of loss is computed across the \emph{entire} batch of all \(n\) +datapoints. While this update rule works well in theory, it is not +practical in most circumstances. For large datasets (with perhaps +billions of datapoints), finding the gradient across all the data is +incredibly computationally taxing; gradient descent will converge slowly +because each individual update is slow. + +\textbf{Stochastic (mini-batch) gradient descent} tries to address this +issue. In stochastic descent, only a \emph{sample} of the full dataset +is used at each update. We estimate the true gradient of the loss +surface using just that sample of data. The \textbf{batch size} is the +number of data points used in each sample. The sampling strategy is +generally without replacement (data is shuffled and batch size examples +are selected one at a time.) + +Each complete ``pass'' through the data is known as a \textbf{training +epoch}. After shuffling the data, in a single \textbf{training epoch} of +stochastic gradient descent, we + +\begin{itemize} +\tightlist +\item + Compute the gradient on the first x\% of the data. Update the + parameter guesses. +\item + Compute the gradient on the next x\% of the data. Update the parameter + guesses. +\item + \(\dots\) +\item + Compute the gradient on the last x\% of the data. Update the parameter + guesses. +\end{itemize} + +Every data point appears once in a single training epoch. We then +perform several training epochs until we're satisfied. + +Batch gradient descent is a deterministic technique -- because the +entire dataset is used at each update iteration, the algorithm will +always advance towards the minimum of the loss surface. In contrast, +stochastic gradient descent involve an element of randomness. 
Since only +a subset of the full data is used to update the guess for +\(\vec{\theta}\) at each iteration, there's a chance the algorithm will +not progress towards the true minimum of loss with each update. Over the +longer term, these stochastic techniques should still converge towards +the optimal solution. + +The diagrams below represent a ``bird's eye view'' of a loss surface +from above. Notice that batch gradient descent takes a direct path +towards the optimal \(\hat{\theta}\). Stochastic gradient descent, in +contrast, ``hops around'' on its path to the minimum point on the loss +surface. This reflects the randomness of the sampling process at each +update step. + +To summarize the tradeoffs of batch size: + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3333}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3333}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3333}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +- +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Smaller Batch Size +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +Larger Batch Size +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Pros & More frequent gradient updates & Leverage hardware acceleration +to improve overall system performance and higher quality gradient +updates \\ +Cons & More variability in the gradient estimates & Less frequent +gradient updates \\ +\end{longtable} + +The typical solution is to set batch size to ensure sufficient hardware +utilization. + +\bookmarksetup{startatroot} + +\chapter{Feature Engineering}\label{feature-engineering} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Recognize the value of feature engineering as a tool to improve model + performance +\item + Implement polynomial feature generation and one hot encoding +\item + Understand the interactions between model complexity, model variance, + and training error +\end{itemize} + +\end{tcolorbox} + +At this point, we've grown quite familiar with the modeling process. +We've introduced the concept of loss, used it to fit several types of +models, and, most recently, extended our analysis to multiple +regression. Along the way, we've forged our way through the mathematics +of deriving the optimal model parameters in all its gory detail. It's +time to make our lives a little easier -- let's implement the modeling +process in code! + +In this lecture, we'll explore two techniques for model fitting: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Translating our derived formulas for regression to \texttt{python} +\item + Using \texttt{python}'s \texttt{sklearn} package +\end{enumerate} + +With our new programming frameworks in hand, we will also add +sophistication to our models by introducing more complex features to +enhance model performance. 
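+
+As a quick preview of the two model-fitting approaches listed above, the
+sketch below fits the same two-feature model to the \texttt{tips} data,
+once by translating the normal equation
+\(\hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\)
+into \texttt{numpy}, and once with \texttt{sklearn}. This is a minimal
+sketch rather than the lecture's code; the names \texttt{X\_design} and
+\texttt{theta\_hat} are illustrative.
+
+\begin{verbatim}
+import numpy as np
+import seaborn as sns
+from sklearn.linear_model import LinearRegression
+
+tips = sns.load_dataset("tips")
+X = tips[["total_bill", "size"]]
+y = tips["tip"]
+
+# Approach 1: translate the derived formula (the normal equation) to numpy,
+# adding an explicit all-ones bias column for the intercept.
+X_design = np.hstack([np.ones((len(X), 1)), X.to_numpy()])
+theta_hat = np.linalg.inv(X_design.T @ X_design) @ X_design.T @ y.to_numpy()
+
+# Approach 2: let sklearn estimate the same parameters (intercept included).
+model = LinearRegression()
+model.fit(X, y)
+
+print(theta_hat)                      # [intercept, total_bill coef, size coef]
+print(model.intercept_, model.coef_)  # should match the values above
+\end{verbatim}
+
+Both approaches recover the same least squares parameters; the remaining
+examples in this chapter use \texttt{sklearn} so that we can focus on
+designing better features.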
+ +\section{Gradient Descent Cont.}\label{gradient-descent-cont.} + +Before we dive into feature engineering, let's quickly review gradient +descent, which we covered in the last lecture. Recall that gradient +descent is a powerful technique for choosing the model parameters that +minimize the loss function. + +\subsection{Gradient Descent Review}\label{gradient-descent-review} + +As we learned earlier, we set the derivative of the loss function to +zero and solve to determine the optimal parameters \(\theta\) that +minimize loss. For a loss surface in 2D (or higher), the best way to +minimize loss is to ``walk'' down the loss surface until we reach our +optimal parameters \(\vec{\theta}\). The \textbf{gradient vector} tells +us which direction to ``walk'' in. + +For example, the \emph{vector} of parameter values +\(\vec{\theta} = \begin{bmatrix} + \theta_{0} \\ + \theta_{1} \\ + \end{bmatrix}\) gives us a two parameter model (d = 2). To +calculate our gradient vector, we can take the \emph{partial derivative} +of loss with respect to each parameter: +\(\frac{\partial L}{\partial \theta_0}\) and +\(\frac{\partial L}{\partial \theta_1}\). + +Its \textbf{gradient vector} would then be the 2D vector: +\[\nabla_{\vec{\theta}} L = \begin{bmatrix} \frac{\partial L}{\partial \theta_0} \\ \frac{\partial L}{\partial \theta_1} \end{bmatrix}\] + +Note that \(-\nabla_{\vec{\theta}} L\) always points in the +\textbf{downhill direction} of the surface. + +Recall that we also discussed the gradient descent update rule, where we +nudge \(\theta\) in a negative gradient direction until \(\theta\) +converges. + +As a refresher, the rule is as follows: +\[\vec{\theta}^{(t+1)} = \vec{\theta}^{(t)} - \alpha \nabla_{\vec{\theta}} L(\vec{\theta}^{(t)}) \] + +\begin{itemize} +\tightlist +\item + \(\theta\) is a vector with our model weights +\item + \(L\) is the loss function +\item + \(\alpha\) is the learning rate +\item + \(\vec{\theta}^{(t)}\) is the current value of \(\theta\) +\item + \(\vec{\theta}^{(t+1)}\) is the next value of \(\theta\) +\item + \(\nabla_{\vec{\theta}} L(\vec{\theta}^{(t)})\) is the gradient of the + loss function evaluated at the current \(\theta\): + \[\frac{1}{n}\sum_{i=1}^{n}\nabla_{\vec{\theta}} l(y_i, f_{\vec{\theta}^{(t)}}(X_i))\] +\end{itemize} + +Let's now walk through an example of calculating and updating the +gradient vector. Say our model and loss are: \[\begin{align} +f_{\vec{\theta}}(\vec{x}) &= \vec{x}^T\vec{\theta} = \theta_0x_0 + \theta_1x_1 +\\l(y, \hat{y}) &= (y - \hat{y})^2 +\end{align} +\] + +Plugging in \(f_{\vec{\theta}}(\vec{x})\) for \(\hat{y}\), our loss +function becomes +\(l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2\). + +To calculate our gradient vector, we can start by computing the partial +derivative of the loss function with respect to \(\theta_0\): +\[\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)\] + +Let's now do the same but with respect to \(\theta_1\): +\[\frac{\partial}{\partial \theta_{1}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_1)\] + +Putting this together, our gradient vector is: +\[\nabla_{\theta} l(\vec{\theta}, \vec{x}, y_i) = \begin{bmatrix} -2(y_i - \theta_0x_0 - \theta_1x_1)(x_0) \\ -2(y_i - \theta_0x_0 - \theta_1x_1)(x_1) \end{bmatrix}\] + +Remember that we need to keep updating \(\theta\) until the algorithm +\textbf{converges} to a solution and stops updating significantly (or at +all). 
In practice, we often just run a fixed number of updates; once the
+algorithm is close to convergence, subsequent updates change
+\(\theta\) very little.
+
+\subsection{Stochastic (Mini-batch) Gradient
+Descent}\label{stochastic-mini-batch-gradient-descent}
+
+Let's now dive deeper into batch and stochastic gradient descent. In
+the previous lecture, we discussed how finding the gradient across all
+the data is extremely computationally taxing and takes a lot of
+resources to calculate.
+
+We know that the solution to the normal equation is
+\(\hat{\theta} = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}\).
+Let's break this down and determine the computational complexity for
+this solution.
+
+Let \(n\) be the number of samples (rows) and \(d\) be the number of
+features (columns).
+
+\begin{itemize}
+\tightlist
+\item
+ Computing \((\mathbb{X}^{\top}\mathbb{X})\) takes \(O(nd^2)\) time,
+ and its inverse takes another \(O(d^3)\) time to calculate; overall,
+ \((\mathbb{X}^{\top}\mathbb{X})^{-1}\) takes \(O(nd^2) + O(d^3)\)
+ time.
+\item
+ \(\mathbb{X}^{\top}\mathbb{Y}\) takes \(O(nd)\) time.
+\item
+ Multiplying \((\mathbb{X}^{\top}\mathbb{X})^{-1}\) and
+ \(\mathbb{X}^{\top}\mathbb{Y}\) takes \(O(d^2)\) time.
+\end{itemize}
+
+In total, calculating the solution to the normal equation takes
+\(O(nd^2) + O(d^3) + O(nd) + O(d^2)\) time. We can see that
+\(O(nd^2) + O(d^3)\) dominates the complexity --- this can be
+problematic for high-dimensional models and very large datasets.
+
+On the other hand, a single gradient descent step takes only
+\(O(nd)\) time.
+
+Suppose we run \(T\) iterations. The final complexity would then be
+\(O(Tnd)\). Typically, \(n\) is much larger than \(T\) and \(d\). How
+can we reduce the cost of this algorithm using a technique from Data
+100? Do we really need to use \(n\) data points? We don't! Instead, we
+can use stochastic gradient descent.
+
+We know that the true gradient
+\(\nabla_{\vec{\theta}} L (\vec{\theta^{(t)}}) = \frac{1}{n}\sum_{i=1}^{n}\nabla_{\vec{\theta}} l(y_i, f_{\vec{\theta}^{(t)}}(X_i))\)
+has a time complexity of \(O(nd)\). Instead of using all \(n\) samples
+to calculate the true gradient of the loss surface, let's use a sample
+of our data to approximate it. Say we sample \(b\) records
+(\(s_1, \cdots, s_b\)) from our \(n\) datapoints. Our new (stochastic)
+gradient estimate will be
+\(\nabla_{\vec{\theta}} L (\vec{\theta^{(t)}}) = \frac{1}{b}\sum_{i=1}^{b}\nabla_{\vec{\theta}} l(y_{s_i}, f_{\vec{\theta}^{(t)}}(X_{s_i}))\)
+and will now have a time complexity of \(O(bd)\), which is much faster!
+
+Stochastic gradient descent helps us approximate the gradient while also
+reducing the time complexity and computational cost. The time complexity
+scales with the number of datapoints selected in the sample. To sample
+data, there are two approaches we can use:
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\tightlist
+\item
+ Shuffle the data and select samples one at a time.
+\item
+ Take a simple random sample for each gradient computation.
+\end{enumerate}
+
+But how do we decide our mini-batch size (\(b\)), or the number of
+datapoints in our sample? The original stochastic gradient descent
+algorithm uses \(b=1\) so that only one sample is used to approximate
+the gradient at a time. Although we don't use such a small mini-batch
+size often, \(b\) typically is small.
When choosing \(b\), there are +several factors to consider: a larger batch size results in a better +gradient estimate, parallelism, and other systems factors. On the other +hand, a smaller batch size will be faster and have more frequent +updates. It is up to data scientists to balance the tradeoff between +batch size and time complexity. + +Summarizing our two gradient descent techniques: + +\begin{itemize} +\tightlist +\item + \textbf{(Batch) Gradient Descent}: Gradient descent computes the + \textbf{true} descent and always descends towards the true minimum of + the loss. While accurate, it can often be computationally expensive. +\end{itemize} + +\begin{itemize} +\tightlist +\item + \textbf{(Minibatch) Stochastic gradient descent}: Stochastic gradient + descent \textbf{approximates} the true gradient descent. It may not + descend towards the true minimum with each update, but it's often less + computationally expensive than batch gradient descent. +\end{itemize} + +\section{Feature Engineering}\label{feature-engineering-1} + +At this point in the course, we've equipped ourselves with some powerful +techniques to build and optimize models. We've explored how to develop +models of multiple variables, as well as how to transform variables to +help \textbf{linearize} a dataset and fit these models to maximize their +performance. + +All of this was done with one major caveat: the regression models we've +worked with so far are all \textbf{linear in the input variables}. We've +assumed that our predictions should be some combination of linear +variables. While this works well in some cases, the real world isn't +always so straightforward. We'll learn an important method to address +this issue -- feature engineering -- and consider some new problems that +can arise when we do so. + +Feature engineering is the process of \emph{transforming} raw features +into \emph{more informative features} that can be used in modeling or +EDA tasks and improve model performance. + +Feature engineering allows you to: + +\begin{itemize} +\tightlist +\item + Capture domain knowledge +\item + Express non-linear relationships using linear models +\item + Use non-numeric (qualitative) features in models +\end{itemize} + +\section{Feature Functions}\label{feature-functions} + +A \textbf{feature function} describes the transformations we apply to +raw features in a dataset to create a design matrix of transformed +features. We typically denote the feature function as \(\Phi\) (the +Greek letter ``phi'' that we use to represent the true function). When +we apply the feature function to our original dataset \(\mathbb{X}\), +the result, \(\Phi(\mathbb{X})\), is a transformed design matrix ready +to be used in modeling. + +For example, we might design a feature function that computes the square +of an existing feature and adds it to the design matrix. In this case, +our existing matrix \([x]\) is transformed to \([x, x^2]\). Its +\emph{dimension} increases from 1 to 2. Often, the dimension of the +\emph{featurized} dataset increases as seen here. + +The new features introduced by the feature function can then be used in +modeling. Often, we use the symbol \(\phi_i\) to represent transformed +features after feature engineering. + +\[ +\begin{align} +\hat{y} &= \theta_0 + \theta_1 x + \theta_2 x^2 \\ +\hat{y} &= \theta_0 + \theta_1 \phi_1 + \theta_2 \phi_2 +\end{align} +\] + +In matrix notation, the symbol \(\Phi\) is sometimes used to denote the +design matrix after feature engineering has been performed. 
Note that in +the usage below, \(\Phi\) is now a feature-engineered matrix, rather +than a function. + +\[\hat{\mathbb{Y}} = \Phi \theta\] + +More formally, we describe a feature function as transforming the +original \(\mathbb{R}^{n \times p}\) dataset \(\mathbb{X}\) to a +featurized \(\mathbb{R}^{n \times p'}\) dataset \(\mathbb{\Phi}\), where +\(p'\) is typically greater than \(p\). + +\[\mathbb{X} \in \mathbb{R}^{n \times p} \longrightarrow \Phi \in \mathbb{R}^{n \times p'}\] + +\section{One Hot Encoding}\label{one-hot-encoding} + +Feature engineering opens up a whole new set of possibilities for +designing better-performing models. As you will see in lab and homework, +feature engineering is one of the most important parts of the entire +modeling process. + +A particularly powerful use of feature engineering is to allow us to +perform regression on \emph{non-numeric} features. \textbf{One hot +encoding} is a feature engineering technique that generates numeric +features from categorical data, allowing us to use our usual methods to +fit a regression model on the data. + +To illustrate how this works, we'll refer back to the \texttt{tips} +dataset from previous lectures. Consider the \texttt{"day"} column of +the dataset: + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ sklearn.linear\_model }\ImportTok{as}\NormalTok{ lm} +\NormalTok{tips }\OperatorTok{=}\NormalTok{ sns.load\_dataset(}\StringTok{"tips"}\NormalTok{)} +\NormalTok{tips.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllll@{}} +\toprule\noalign{} +& total\_bill & tip & sex & smoker & day & time & size \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 16.99 & 1.01 & Female & No & Sun & Dinner & 2 \\ +1 & 10.34 & 1.66 & Male & No & Sun & Dinner & 3 \\ +2 & 21.01 & 3.50 & Male & No & Sun & Dinner & 3 \\ +3 & 23.68 & 3.31 & Male & No & Sun & Dinner & 2 \\ +4 & 24.59 & 3.61 & Female & No & Sun & Dinner & 4 \\ +\end{longtable} + +At first glance, it doesn't seem possible to fit a regression model to +this data -- we can't directly perform any mathematical operations on +the entry ``Sun''. + +To resolve this, we instead create a new table with a feature for each +unique value in the original \texttt{"day"} column. We then iterate +through the \texttt{"day"} column. For each entry in \texttt{"day"} we +fill the corresponding feature in the new table with 1. All other +features are set to 0. + +In short, each category of a categorical variable gets its own feature + +Value = 1 if a row belongs to the category + +Value = 0 otherwise + +The \texttt{OneHotEncoder} class of \texttt{sklearn} +(\href{https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html\#sklearn.preprocessing.OneHotEncoder.get_feature_names_out}{documentation}) +offers a quick way to perform this one-hot encoding. You will explore +its use in detail in the lab. For now, recognize that we follow a very +similar workflow to when we were working with the +\texttt{LinearRegression} class: we initialize a \texttt{OneHotEncoder} +object, fit it to our data, and finally use \texttt{.transform} to apply +the fitted encoder. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sklearn.preprocessing }\ImportTok{import}\NormalTok{ OneHotEncoder} + +\CommentTok{\# Initialize a OneHotEncoder object} +\NormalTok{ohe }\OperatorTok{=}\NormalTok{ OneHotEncoder()} + +\CommentTok{\# Fit the encoder} +\NormalTok{ohe.fit(tips[[}\StringTok{"day"}\NormalTok{]])} + +\CommentTok{\# Use the encoder to transform the raw "day" feature} +\NormalTok{encoded\_day }\OperatorTok{=}\NormalTok{ ohe.transform(tips[[}\StringTok{"day"}\NormalTok{]]).toarray()} +\NormalTok{encoded\_day\_df }\OperatorTok{=}\NormalTok{ pd.DataFrame(encoded\_day, columns}\OperatorTok{=}\NormalTok{ohe.get\_feature\_names\_out())} + +\NormalTok{encoded\_day\_df.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& day\_Fri & day\_Sat & day\_Sun & day\_Thur \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 0.0 & 0.0 & 1.0 & 0.0 \\ +1 & 0.0 & 0.0 & 1.0 & 0.0 \\ +2 & 0.0 & 0.0 & 1.0 & 0.0 \\ +3 & 0.0 & 0.0 & 1.0 & 0.0 \\ +4 & 0.0 & 0.0 & 1.0 & 0.0 \\ +\end{longtable} + +The one-hot encoded features can then be used in the design matrix to +train a model: + +\[\hat{y} = \theta_1 (\text{total}\_\text{bill}) + \theta_2 (\text{size}) + \theta_3 (\text{day}\_\text{Fri}) + \theta_4 (\text{day}\_\text{Sat}) + \theta_5 (\text{day}\_\text{Sun}) + \theta_6 (\text{day}\_\text{Thur})\] + +Or in shorthand: + +\[\hat{y} = \theta_{1}\phi_{1} + \theta_{2}\phi_{2} + \theta_{3}\phi_{3} + \theta_{4}\phi_{4} + \theta_{5}\phi_{5} + \theta_{6}\phi_{6}\] + +Now, the \texttt{day} feature (or rather, the four new boolean features +that represent day) can be used to fit a model. + +Using \texttt{sklearn} to fit the new model, we can determine the model +coefficients, allowing us to understand how each feature impacts the +predicted tip. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sklearn.linear\_model }\ImportTok{import}\NormalTok{ LinearRegression} +\NormalTok{data\_w\_ohe }\OperatorTok{=}\NormalTok{ tips[[}\StringTok{"total\_bill"}\NormalTok{, }\StringTok{"size"}\NormalTok{, }\StringTok{"day"}\NormalTok{]].join(encoded\_day\_df).drop(columns }\OperatorTok{=} \StringTok{"day"}\NormalTok{)} +\NormalTok{ohe\_model }\OperatorTok{=}\NormalTok{ lm.LinearRegression(fit\_intercept}\OperatorTok{=}\VariableTok{False}\NormalTok{) }\CommentTok{\#Tell sklearn to not add an additional bias column. Why?} +\NormalTok{ohe\_model.fit(data\_w\_ohe, tips[}\StringTok{"tip"}\NormalTok{])} + +\NormalTok{pd.DataFrame(\{}\StringTok{"Feature"}\NormalTok{:data\_w\_ohe.columns, }\StringTok{"Model Coefficient"}\NormalTok{:ohe\_model.coef\_\})} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& Feature & Model Coefficient \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & total\_bill & 0.092994 \\ +1 & size & 0.187132 \\ +2 & day\_Fri & 0.745787 \\ +3 & day\_Sat & 0.621129 \\ +4 & day\_Sun & 0.732289 \\ +5 & day\_Thur & 0.668294 \\ +\end{longtable} + +For example, when looking at the coefficient for \texttt{day\_Fri}, we +can now understand the impact of it being Friday on the predicted tip +--- if it is a Friday, the predicted tip increases by approximately +\$0.75. + +When one-hot encoding, keep in mind that any set of one-hot encoded +columns will always sum to a column of all ones, representing the bias +column. More formally, the bias column is a linear combination of the +OHE columns. 
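+
+To see this concretely, the short check below (a minimal sketch, not
+from the lecture, that simply re-encodes the \texttt{"day"} column) sums
+the four one-hot columns within each row. Every row sums to exactly 1,
+so together the encoded columns reproduce the all-ones bias column.
+
+\begin{verbatim}
+import numpy as np
+import seaborn as sns
+from sklearn.preprocessing import OneHotEncoder
+
+tips = sns.load_dataset("tips")
+encoded_day = OneHotEncoder().fit_transform(tips[["day"]]).toarray()
+
+# Each row contains a single 1 and three 0s, so the columns of the
+# one-hot block always add up to a column of all ones.
+print(np.unique(encoded_day.sum(axis=1)))   # [1.]
+\end{verbatim}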
+ +We must be careful not to include this bias column in our design matrix. +Otherwise, there will be linear dependence in the model, meaning +\(\mathbb{X}^{\top}\mathbb{X}\) would no longer be invertible, and our +OLS estimate +\(\hat{\theta} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\) +fails. + +To resolve this issue, we simply omit one of the one-hot encoded columns +\emph{or} do not include an intercept term. The adjusted design matrices +are shown below. + +Either approach works --- we still retain the same information as the +omitted column being a linear combination of the remaining columns. + +\section{Polynomial Features}\label{polynomial-features} + +We have encountered a few cases now where models with linear features +have performed poorly on datasets that show clear non-linear curvature. + +As an example, consider the \texttt{vehicles} dataset, which contains +information about cars. Suppose we want to use the \texttt{hp} +(horsepower) of a car to predict its \texttt{"mpg"} (gas mileage in +miles per gallon). If we visualize the relationship between these two +variables, we see a non-linear curvature. Fitting a linear model to +these variables results in a high (poor) value of RMSE. + +\[\hat{y} = \theta_0 + \theta_1 (\text{hp})\] + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.options.mode.chained\_assignment }\OperatorTok{=} \VariableTok{None} +\NormalTok{vehicles }\OperatorTok{=}\NormalTok{ sns.load\_dataset(}\StringTok{"mpg"}\NormalTok{).dropna().rename(columns }\OperatorTok{=}\NormalTok{ \{}\StringTok{"horsepower"}\NormalTok{: }\StringTok{"hp"}\NormalTok{\}).sort\_values(}\StringTok{"hp"}\NormalTok{)} + +\NormalTok{X }\OperatorTok{=}\NormalTok{ vehicles[[}\StringTok{"hp"}\NormalTok{]]} +\NormalTok{Y }\OperatorTok{=}\NormalTok{ vehicles[}\StringTok{"mpg"}\NormalTok{]} + +\NormalTok{hp\_model }\OperatorTok{=}\NormalTok{ lm.LinearRegression()} +\NormalTok{hp\_model.fit(X, Y)} +\NormalTok{hp\_model\_predictions }\OperatorTok{=}\NormalTok{ hp\_model.predict(X)} + +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} + +\NormalTok{sns.scatterplot(data}\OperatorTok{=}\NormalTok{vehicles, x}\OperatorTok{=}\StringTok{"hp"}\NormalTok{, y}\OperatorTok{=}\StringTok{"mpg"}\NormalTok{)} +\NormalTok{plt.plot(vehicles[}\StringTok{"hp"}\NormalTok{], hp\_model\_predictions, c}\OperatorTok{=}\StringTok{"tab:red"}\NormalTok{)}\OperatorTok{;} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"MSE of model with (hp) feature: }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{mean((Y}\OperatorTok{{-}}\NormalTok{hp\_model\_predictions)}\OperatorTok{**}\DecValTok{2}\NormalTok{)}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +MSE of model with (hp) feature: 23.943662938603108 +\end{verbatim} + +\includegraphics{feature_engineering/feature_engineering_files/figure-pdf/cell-5-output-2.pdf} + +As we can see from the plot, the data follows a curved line rather than +a straight one. To capture this non-linearity, we can incorporate +\textbf{non-linear} features. Let's introduce a \textbf{polynomial} +term, \(\text{hp}^2\), into our regression model. The model now takes +the form: + +\[\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2)\] +\[\hat{y} = \theta_0 + \theta_1 \phi_1 + \theta_2 \phi_2\] + +How can we fit a model with non-linear features? 
We can use the exact +same techniques as before: ordinary least squares, gradient descent, or +\texttt{sklearn}. This is because our new model is still a +\textbf{linear model}. Although it contains non-linear \emph{features}, +it is linear with respect to the model \emph{parameters}. All of our +previous work on fitting models was done under the assumption that we +were working with linear models. Because our new model is still linear, +we can apply our existing methods to determine the optimal parameters. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Add a hp\^{}2 feature to the design matrix} +\NormalTok{X }\OperatorTok{=}\NormalTok{ vehicles[[}\StringTok{"hp"}\NormalTok{]]} +\NormalTok{X[}\StringTok{"hp\^{}2"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vehicles[}\StringTok{"hp"}\NormalTok{]}\OperatorTok{**}\DecValTok{2} + +\CommentTok{\# Use sklearn to fit the model} +\NormalTok{hp2\_model }\OperatorTok{=}\NormalTok{ lm.LinearRegression()} +\NormalTok{hp2\_model.fit(X, Y)} +\NormalTok{hp2\_model\_predictions }\OperatorTok{=}\NormalTok{ hp2\_model.predict(X)} + +\NormalTok{sns.scatterplot(data}\OperatorTok{=}\NormalTok{vehicles, x}\OperatorTok{=}\StringTok{"hp"}\NormalTok{, y}\OperatorTok{=}\StringTok{"mpg"}\NormalTok{)} +\NormalTok{plt.plot(vehicles[}\StringTok{"hp"}\NormalTok{], hp2\_model\_predictions, c}\OperatorTok{=}\StringTok{"tab:red"}\NormalTok{)}\OperatorTok{;} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"MSE of model with (hp\^{}2) feature: }\SpecialCharTok{\{}\NormalTok{np}\SpecialCharTok{.}\NormalTok{mean((Y}\OperatorTok{{-}}\NormalTok{hp2\_model\_predictions)}\OperatorTok{**}\DecValTok{2}\NormalTok{)}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +MSE of model with (hp^2) feature: 18.98476890761722 +\end{verbatim} + +\includegraphics{feature_engineering/feature_engineering_files/figure-pdf/cell-6-output-2.pdf} + +Looking a lot better! By incorporating a squared feature, we are able to +capture the curvature of the dataset. Our model is now a parabola +centered on our data. Notice that our new model's error has decreased +relative to the original model with linear features. + +\section{Complexity and Overfitting}\label{complexity-and-overfitting} + +We've seen now that feature engineering allows us to build all sorts of +features to improve the performance of the model. In particular, we saw +that designing a more complex feature (squaring \texttt{hp} in the +\texttt{vehicles} data previously) substantially improved the model's +ability to capture non-linear relationships. To take full advantage of +this, we might be inclined to design increasingly complex features. +Consider the following three models, each of different order (the +maximum exponent power of each model): + +\begin{itemize} +\tightlist +\item + Model with order 2: + \(\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2)\) +\item + Model with order 3: + \(\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2) + \theta_3 (\text{hp}^3)\) +\item + Model with order 4: + \(\hat{y} = \theta_0 + \theta_1 (\text{hp}) + \theta_2 (\text{hp}^2) + \theta_3 (\text{hp}^3) + \theta_4 (\text{hp}^4)\) +\end{itemize} + +As we can see in the plots above, MSE continues to decrease with each +additional polynomial term. 
To visualize it further, let's plot models
+as the complexity increases from 0 to 7:
+
+When we use our model to make predictions on the same data that was used
+to fit the model, we find that the MSE decreases with each additional
+polynomial term (as our model gets more complex). The \textbf{training
+error} is the model's error when generating predictions from the same
+data that was used for training purposes. We can conclude that the
+training error goes down as the complexity of the model increases.
+
+This seems like good news -- when working on the \textbf{training data},
+we can improve model performance by designing increasingly complex
+models.
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Math Fact: Polynomial Degrees}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+Given \(N\) non-overlapping data points, we can always find a polynomial
+of degree \(N-1\) that goes through all those points.
+
+For example, there always exists a degree-4 polynomial curve that can
+perfectly model a dataset of 5 datapoints:
+
+\end{tcolorbox}
+
+However, high model complexity comes with its own set of issues. When
+building the \texttt{vehicles} models above, we trained the models on
+the \emph{entire} dataset and then evaluated their performance on this
+same dataset. In reality, we are likely to instead train the model on a
+\emph{sample} from the population, then use it to make predictions on
+data it didn't encounter during training.
+
+Let's walk through a more realistic example. Say we are given a training
+dataset of just 6 datapoints and want to train a model to then make
+predictions on a \emph{different} set of points. We may be tempted to
+make a highly complex model (e.g., degree 5), especially given that it
+makes perfect predictions on the training data, as is clear on the left.
+However, as shown in the graph on the right, this model would perform
+\emph{horribly} on the rest of the population!
+
+This phenomenon is called \textbf{overfitting}. The model effectively
+just memorized the training data it encountered when it was fitted,
+leaving it unable to \textbf{generalize} well to data it didn't
+encounter during training. This is a problem: we want models that are
+generalizable to ``unseen'' data.
+
+Additionally, since complex models are sensitive to the specific dataset
+used to train them, they have high \textbf{variance}. A model with high
+variance tends to \emph{vary} more dramatically when trained on
+different datasets. Going back to our example above, we can see our
+degree-5 model varies erratically when we fit it to different samples of
+6 points from \texttt{vehicles}.
+
+We now face a dilemma: we know that we can \textbf{decrease training
+error} by increasing model complexity, but models that are \emph{too}
+complex start to overfit and can't be reapplied to new datasets due to
+\textbf{high variance}.
+
+We can see that there is a clear trade-off that comes from the
+complexity of our model. As model complexity increases, the model's
+error on the training data decreases. At the same time, the model's
+variance tends to increase.
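+
+The first half of this trade-off is easy to reproduce in code. The
+sketch below (a minimal illustration, not the lecture's plotting code)
+refits the \texttt{hp} model with polynomial features of increasing
+degree (1 through 7 here) and prints the training MSE, which decreases,
+or at worst stays flat, as the degree grows.
+
+\begin{verbatim}
+import numpy as np
+import seaborn as sns
+from sklearn.linear_model import LinearRegression
+from sklearn.preprocessing import PolynomialFeatures
+
+vehicles = sns.load_dataset("mpg").dropna().rename(columns={"horsepower": "hp"})
+X, Y = vehicles[["hp"]], vehicles["mpg"]
+
+for degree in range(1, 8):
+    # Build [hp, hp^2, ..., hp^degree] and fit by ordinary least squares.
+    Phi = PolynomialFeatures(degree=degree, include_bias=False).fit_transform(X)
+    model = LinearRegression().fit(Phi, Y)
+    training_mse = np.mean((Y - model.predict(Phi)) ** 2)
+    print(f"degree {degree}: training MSE = {training_mse:.2f}")
+\end{verbatim}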
+ +The takeaway here: we need to strike a balance in the complexity of our +models; we want models that are generalizable to ``unseen'' data. A +model that is too simple won't be able to capture the key relationships +between our variables of interest; a model that is too complex runs the +risk of overfitting. + +This begs the question: how do we control the complexity of a model? +Stay tuned for Lecture 17 on Cross-Validation and Regularization! + +\section{\texorpdfstring{{[}Bonus{]} Stochastic Gradient Descent in +\texttt{PyTorch}}{{[}Bonus{]} Stochastic Gradient Descent in PyTorch}}\label{bonus-stochastic-gradient-descent-in-pytorch} + +While this material is out of scope for Data 100, it is useful if you +plan to enter a career in data science! + +In practice, you will use software packages such as \texttt{PyTorch} +when computing gradients and implementing gradient descent. You'll often +follow three main steps: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Sample a batch of the data. +\item + Compute the loss and the gradient. +\item + Update your gradient until you reach an appropriate estimate of the + true gradient. +\end{enumerate} + +If you want to learn more, this +\href{https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html}{Intro +to PyTorch tutorial} is a great resource to get started! + +\bookmarksetup{startatroot} + +\chapter{Case Study in Human Contexts and +Ethics}\label{case-study-in-human-contexts-and-ethics} + +\textbf{Note:} Given the nuanced nature of some of the arguments made in +the lecture, it is highly recommended that you view the lecture +recording given by Professor Ari Edmundson to fully engage and +understand the material. The course notes will have the same broader +structure but are by no means comprehensive. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Learn about the ethical dilemmas that data scientists face. +\item + Examine the Cook County Assessor's Office and Property Appraisal case + study for fairness in housing appraisal. +\item + Know how to critique models using contextual knowledge about data. +\end{itemize} + +\end{tcolorbox} + +\begin{quote} +\textbf{Disclaimer}: The following note discusses issues of structural +racism. Some of the items in this note may be sensitive and may or may +not be the opinions, ideas, and beliefs of the students who collected +the materials. The Data 100 course staff tries its best to only present +information that is relevant for teaching the lessons at hand. +\end{quote} + +As data scientists, our goal is to wrangle data, recognize patterns and +use them to make predictions within a certain context. However, it is +often easy to abstract data away from its original context. In previous +lectures, we've explored datasets like \texttt{elections}, +\texttt{babynames}, and \texttt{world\_bank} to learn fundamental +techniques for working with data, but rarely do we stop to ask questions +like ``How/when was this data collected?'' or ``Are there any inherent +biases in the data that could affect results?''. 
It turns out that +inquiries like these profoundly affect how data scientists approach a +task and convey their findings. This lecture explores these ethical +dilemmas through the lens of a case study. + +Let's immerse ourselves in the real-world story of data scientists +working for an organization called the Cook County Assessor's Office +(CCAO) located in Chicago, Illinois. Their job is to \textbf{estimate +the values of houses} in order to \textbf{assign property taxes}. This +is because the tax burden in this area is determined by the estimated +\textbf{value} of a house rather than its price. Since value changes +over time and has no obvious indicators, the CCAO created a +\textbf{model} to estimate the values of houses. In this note, we will +dig deep into biases that arose in the model, the consequences to human +lives, and what we can learn from this example to avoid the same +mistakes in the future. + +\section{The Problem}\label{the-problem} + +What prompted the formation of the CCAO and led to the development of +this model? In 2017, an +\href{https://apps.chicagotribune.com/news/watchdog/cook-county-property-tax-divide/assessments.html}{investigative +report} by the \emph{Chicago Tribune} uncovered a major scandal in the +property assessment system managed by the CCAO under the watch of former +County Assessor Joseph Berrios. Working with experts from the University +of Chicago, the \emph{Chicago Tribune} journalists found that the CCAO's +model for estimating house value perpetuated a highly regressive tax +system that disproportionately burdened African-American and Latinx +homeowners in Cook County. How did the journalists demonstrate this +disparity? + +The image above shows two standard metrics to estimate the fairness of +assessments: the +\href{https://www.realestateagent.com/real-estate-glossary/real-estate/coefficient-of-dispersion.html}{coefficient +of dispersion} and +\href{https://leg.wa.gov/House/Committees/FIN/Documents/2009/RatioText.pdf}{price-related +differential}. How they're calculated is out of scope for this class, +but you can assume that these metrics have been rigorously tested by +experts in the field and are a good indication of fairness. As we see +above, calculating these metrics for the Cook County prices revealed +that the pricing created by the CCAO did not fall in acceptable ranges. +While this on its own is \textbf{not the entire} story, it was a good +indicator that \textbf{something fishy was going on}. + +This prompted journalists to investigate if the CCAO's model itself was +producing fair tax rates. When accounting for the homeowner's income, +they found that the model actually produced a \textbf{regressive} tax +rate (see figure above). A tax rate is \textbf{regressive} if the +percentage tax rate is higher for individuals with lower net income; it +is \textbf{progressive} if the percentage tax rate is higher for +individuals with higher net income. + +Digging further, journalists found that the model was not only +regressive and unfair to lower-income individuals, but it was also +unfair to non-white homeowners (see figure above). The likelihood of a +property being under- or over-assessed was highly dependent on the +owner's race, and that did not sit well with many homeowners. + +\subsection{Spotlight: Appeals}\label{spotlight-appeals} + +What was the cause of such a major issue? It might be easy to simply +blame ``biased'' algorithms, but the main issue was not a faulty model. 
+Instead, it was largely due to the \textbf{appeals system} which enabled +the wealthy and privileged to more easily and successfully challenge +their assessments. Once given the CCAO model's initial assessment of +their home's value, homeowners could choose to appeal to a board of +elected officials to try and change the listed value of their home and, +consequently, how much they are taxed. In theory, this sounds like a +very fair system: a human being oversees the final pricing of houses +rather than a computer algorithm. In reality, this ended up exacerbating +the problem. + +\begin{quote} +``Appeals are a good thing,'' Thomas Jaconetty, deputy assessor for +valuation and appeals, said in an interview. ``The goal here is +fairness. We made the numbers. We can change them.'' +\end{quote} + +We can borrow lessons from +\href{https://www.britannica.com/topic/critical-race-theory}{Critical +Race Theory} ------ on the surface, everyone has the legal right to try +and appeal the value of their home. However, not everyone has an +\emph{equal ability} to do so. Those who have the money to hire tax +lawyers to appeal for them have a drastically higher chance of trying +and succeeding in their appeal (see above figure). Many homeowners who +appealed were generally under-assessed compared to homeowners who did +not (see figure below). Clearly, the model is part of a deeper +institutional pattern rife with potential corruption. + +In fact, Chicago boasts a large and thriving tax attorney industry +dedicated precisely to appealing property assessments, reflected in the +growing number of appeals in Cook County in the 21st century. Given +wealthier, whiter neighborhoods typically have greater access to +lawyers, they often appealed more and won reductions far more often than +their less wealthy neighbors. In other words, those with higher incomes +pay less in property tax, tax lawyers can grow their business due to +their role in appeals, and politicians are socially connected to the +aforementioned tax lawyers and wealthy homeowners. All these +stakeholders have reasons to advertise the appeals system as an integral +part of a fair system; after all, it serves to benefit them. Here lies +the value in asking questions: a system that seems fair on the surface +may, in reality, be unfair upon taking a closer look. + +\subsection{Human Impacts}\label{human-impacts} + +What happened as a result of this corrupt system? As the \emph{Chicago +Tribune} reported, many African American and Latino homeowners purchased +homes only to find their houses were later appraised at levels far +higher than what they paid. As a result, homeowners were now responsible +for paying significantly more in taxes every year than initially +budgeted, putting them at risk of not being able to afford their homes +and losing them. + +The impact of the housing model extends beyond the realm of home +ownership and taxation ------ the issues of justice go much deeper. This +model perpetrated much older patterns of racially discriminatory +practices in Chicago and across the United States. Unfortunately, it is +no accident that this happened in Chicago, one of the most segregated +cities in the United States +(\href{https://fivethirtyeight.com/features/the-most-diverse-cities-are-often-the-most-segregated/}{source}). +These factors are central to informing us, as data scientists, about +what is at stake. 
+ +\subsection{Spotlight: Intersection of Real Estate and +Race}\label{spotlight-intersection-of-real-estate-and-race} + +Before we dive into how the CCAO used data science to ``solve'' this +problem, let's briefly go through the history of discriminatory housing +practices in the United States to give more context on the gravity and +urgency of this situation. + +Housing and real estate, among other factors, have been one of the most +significant and enduring drivers of structural racism and racial +inequality in the United States since the Civil War. It is one of the +main areas where inequalities are created and reproduced. In the early +20th century, +\href{https://www.history.com/topics/early-20th-century-us/jim-crow-laws}{Jim +Crow} laws were explicit in forbidding people of color from utilizing +the same facilities ------ such as buses, bathrooms, and pools ------ as +white individuals. This set of practices by government actors in +combination with overlapping practices driven by the private real estate +industry further served to make neighborhoods increasingly segregated. + +Although advancements in civil rights have been made, the spirit of the +laws is alive in many parts of the US. In the 1920s and 1930s, it was +illegal for governments to actively segregate neighborhoods according to +race, but other methods were available for achieving the same ends. One +of the most notorious practices was \textbf{redlining}: the federal +housing agencies' process of distinguishing neighborhoods in a city in +terms of relative risk. The goal was to increase access to homeownership +for low-income Americans. In practice, however, it allowed real estate +professionals to legally perpetuate segregation. The federal housing +agencies deemed predominantly African American neighborhoods as high +risk and colored them in red ------ hence the name redlining +------~making it nearly impossible for African Americans to own a home. + +The origins of the data that made these maps possible lay in a kind of +``racial data revolution'' in the private real estate industry beginning +in the 1920s. Segregation was established and reinforced in part through +the work of real estate agents who were also very concerned with +establishing reliable methods for predicting the value of a home. The +effects of these practices continue to resonate today. + +Source: Colin Koopman, How We Became Our Data (2019) p.~137 + +\section{The Response: Cook County Open Data +Initiative}\label{the-response-cook-county-open-data-initiative} + +The response to this problem started in politics. A new assessor, Fritz +Kaegi, was elected and created a new mandate with two goals: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Distributional equity in property taxation, meaning that properties of + the same value are treated alike during assessments. +\item + Creating a new Office of Data Science. +\end{enumerate} + +He wanted to not only create a more accurate algorithmic model but also +to design a new system to address the problems with the CCAO. + +Let's frame this problem through the lens of the data science lifecycle. + +\subsection{1. 
Question/Problem +Formulation}\label{questionproblem-formulation} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Driving Questions}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + What do we want to know? +\item + What problems are we trying to solve? +\item + What are the hypotheses we want to test? +\item + What are our metrics for success? +\end{itemize} + +\end{tcolorbox} + +The old system was unfair because it was systemically inaccurate; it +made one kind of error for one group, and another kind of error for +another. Its goal was to ``create a robust pipeline that accurately +assesses property values at scale and is fair'', and in turn, they +defined fairness as accuracy: ``the ability of our pipeline to +accurately assess all residential property values, accounting for +disparities in geography, information, etc.'' Thus, the plan ------ make +the system more fair ------ was already framed in terms of a task +appropriate to a data scientist: make the assessments more accurate (or +more precisely, minimize errors in a particular way). + +The idea here is that if the model is more accurate it will also +(perhaps necessarily) become more fair, which is a big assumption. There +are, in a sense, two different problems ------ make accurate +assessments, and make a fair system. Treating these two problems as one +makes it a more straightforward issue that can be solved technically +(with a good model) but does raise the question of if fairness and +accuracy are one and the same. + +For now, let's just talk about the technical part of this ------ +accuracy. For you, the data scientist, this part might feel more +comfortable. We can determine some metrics of success and frame a social +problem as a data science problem. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Definitions: Fairness and Transparency}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +The definitions, as given by the Cook County Assessor's Office, are +given below: + +\begin{itemize} +\tightlist +\item + Fairness: The ability of our pipeline to accurately assess property + values, accounting for disparities in geography, information, etc. +\item + Transparency: The ability of the data science department to share and + explain pipeline results and decisions to both internal and external + stakeholders +\end{itemize} + +\end{tcolorbox} + +The new Office of Data Science started by framing the problem and +redefining their goals. 
They determined that they needed to: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Accurately, uniformly, and impartially assess the value of a home and + accurately predict the sale price of a home within the next year by: + + \begin{itemize} + \tightlist + \item + Following international standards (e.g., coefficient of dispersion) + \item + Predicting the value of all homes with as little total error as + possible + \end{itemize} +\item + Create a robust pipeline that accurately assesses property values at + scale and is fair to all people by: + + \begin{itemize} + \tightlist + \item + Disrupting the circuit of corruption (Board of Review appeals + process) + \item + Eliminating regressivity + \item + Engendering trust in the system among all stakeholders + \end{itemize} +\end{enumerate} + +The goals defined above lead us to ask the question: what does it +actually mean to accurately assess property values, and what role does +``scale'' play? + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + What is an assessment of a home's value? +\item + What makes one assessment more accurate than another? +\item + What makes one batch of assessments more accurate than another batch? +\end{enumerate} + +Each of the above questions leads to a slew of more questions. +Considering just the first question, one answer could be that an +assessment is an estimate of the value of a home. This leads to more +inquiries: what is the value of a home? What determines it? How do we +know? For this class, we take it to be the house's market value, or how +much it would sell for. + +Unfortunately, if you are the county assessor, it becomes hard to +determine property values with this definition. After all, you can't +make everyone sell their house every year. And as many properties +haven't been sold in decades, every year that passes makes that previous +sale less reliable as an indicator. + +So how would one generate reliable estimates? You're probably thinking, +well, with data about homes and their sale prices you can probably +predict the value of a property reliably. Even if you're not a data +scientist, you might know there are websites like Zillow and RedFin that +estimate what properties would sell for and constantly update them. They +don't know the value, but they estimate them. How do you think they do +this? Let's start with the data ------ which is the next step in the +lifecycle. + +\subsection{2. Data Acquisition and +Cleaning}\label{data-acquisition-and-cleaning} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Driving Questions}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + What data do we have, and what data do we need? +\item + How will we sample more data? +\item + Is our data representative of the population we want to study? +\end{itemize} + +\end{tcolorbox} + +To generate estimates, the data scientists used two datasets. The first +contained all recorded sales data from 2013 to 2019. The second +contained property characteristics, including a property identification +number and physical characteristics (e.g., age, bedroom, baths, square +feet, neighborhood, site desirability, etc.). 
+ +As they examined the datasets, they asked the questions: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + How was this data collected? +\item + When was this data collected? +\item + Who collected this data? +\item + For what purposes was the data collected? +\item + How and why were particular categories created? +\end{enumerate} + +With so much data available, data scientists worked to see how all the +different data points correlated with each other and with the sales +prices. By discovering patterns in datasets containing known sale prices +and characteristics of similar and nearby properties, training a model +on this data, and applying it to all the properties without sales data, +it was now possible to create a linear model that could predict the sale +price (``fair market value'') of unsold properties. + +Some other key questions data scientists asked about the data were: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Are any attributes of a house differentially reported? How might these + attributes be differentially reported? +\item + How are ``improvements'' like renovations tracked and updated? +\item + Which data is missing, and for which neighborhoods or populations is + it missing? +\item + What other data sources or attributes might be valuable? +\end{enumerate} + +Attributes can have different likelihoods of appearing in the data. For +example, housing data in the floodplain geographic region of Chicago +were less represented than other regions. + +Features can also be reported at different rates. Improvements in homes, +which tend to increase property value, were unlikely to be reported by +the homeowners. + +Additionally, they found that there was simply more missing data in +lower-income neighborhoods. + +\subsection{3. Exploratory Data +Analysis}\label{exploratory-data-analysis} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Driving Questions}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + How is our data organized, and what does it contain? +\item + Do we already have relevant data? +\item + What are the biases, anomalies, or other issues with the data? +\item + How do we transform the data to enable effective analysis? +\end{itemize} + +\end{tcolorbox} + +Before the modeling step, they investigated a multitude of crucial +questions: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Which attributes are most predictive of sales price? +\item + Is the data uniformly distributed? +\item + Do all neighborhoods have recent data? Do all neighborhoods have the + same granularity?\\ +\item + Do some neighborhoods have missing or outdated data? +\end{enumerate} + +They found that certain features, such as bedroom number, were much more +useful in determining house value for certain neighborhoods than for +others. This informed them that different models should be used +depending on the neighborhood. + +They also noticed that low-income neighborhoods had disproportionately +spottier data. This informed them that they needed to develop new data +collection practices - including finding new sources of data. + +\subsection{4. 
Prediction and Inference}\label{prediction-and-inference} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Driving Questions}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + What does the data say about the world? +\item + Does it answer our questions or accurately solve the problem? +\item + How robust are our conclusions, and can we trust the predictions? +\end{itemize} + +\end{tcolorbox} + +Rather than using a singular model to predict sale prices (``fair market +value'') of unsold properties, the CCAO predicts sale prices using +machine learning models that discover patterns in data sets containing +known sale prices and characteristics of \textbf{similar and nearby +properties}. It uses different model weights for each neighborhood. + +Compared to traditional mass appraisal, the CCAO's new approach is more +granular and more sensitive to neighborhood variations. + +But how do we know if an assessment is accurate? We can see how our +model performs when predicting the sales prices of properties it wasn't +trained on! We can then evaluate how ``close'' our estimate was to the +actual sales price, using Root Mean Square Error (RMSE). However, is +RMSE a good proxy for fairness in this context? + +Broad metrics of error like RMSE can be limiting when evaluating the +``fairness'' of a property appraisal system. RMSE does not tell us +anything about the distribution of errors, whether the errors are +positive or negative, and the relative size of the errors. It does not +tell us anything about the regressivity of the model, instead just +giving a rough measure of our model's overall error. + +Even with a low RMSE, we can't guarantee a fair model. The error we see +(no matter how small) may be a result of our model overvaluing less +expensive homes and undervaluing more expensive homes. + +Regarding accuracy, it's important to ask what makes a batch of +assessments better or more accurate than another batch of assessments. +The value of a home that a model predicts is relational. It's a product +of the interaction of social and technical elements so property +assessment involves social trust. + +Why should any particular individual believe that the model is accurate +for their property? Why should any individual trust the model? + +To foster public trust, the CCAO focuses on ``transparency'', putting +data, models, and the pipeline onto GitLab. By doing so, they can better +equate the production of ``accurate assessments'' with ``fairness''. + +There's a lot more to be said here on the relationship between accuracy, +fairness, and metrics we tend to use when evaluating our models. Given +the nuanced nature of the argument, it is recommended you view the +corresponding lecture as the course notes are not as comprehensive for +this portion of the lecture. + +\subsection{5. 
Results and Conclusions}\label{results-and-conclusions} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Driving Questions}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + How successful is the system for each goal? + + \begin{itemize} + \tightlist + \item + Accuracy/uniformity of the model + \item + Fairness and transparency that eliminates regressivity and engenders + trust + \end{itemize} +\item + How do you know? +\end{itemize} + +\end{tcolorbox} + +Unfortunately, it may be naive to hope that a more accurate and +transparent algorithm will translate into more fair outcomes in +practice. Even if our model is perfectly optimized according to the +standards of fairness we've set, there is no guarantee that people will +actually pay their expected share of taxes as determined by the model. +While it is a good step in the right direction, maintaining a level of +social trust is key to ensuring people pay their fair share. + +Despite all their best efforts, the CCAO is still struggling to create +fair assessments and engender trust. + +Stories like +\href{https://www.axios.com/local/chicago/2022/12/01/why-chicagos-property-tax-bills-so-high}{the +one} show that total taxes for residential properties went up overall +(because commercial taxes went down). But looking at the distribution, +we can see that the biggest increases occurred in wealthy neighborhoods, +and the biggest decreases occurred in poorer, predominantly Black +neighborhoods. So maybe there was some success after all? + +However, it'll ultimately be hard to overcome the propensity of the +board of review to reduce the tax burden of the rich, preventing the +CCAO from creating a truly fair system. This is in part because there +are many cases where the model makes big, frustrating mistakes. In some +cases like +\href{https://www.axios.com/local/chicago/2023/05/22/cook-county-property-tax-appeal-process}{this +one}, it is due to spotty data. + +\section{Summary: Questions to +Consider}\label{summary-questions-to-consider} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Question/Problem Formulation + + \begin{itemize} + \tightlist + \item + Who is responsible for framing the problem? + \item + Who are the stakeholders? How are they involved in the problem + framing? + \item + What do you bring to the table? How does your positionality affect + your understanding of the problem? + \item + What are the narratives that you're tapping into? + \end{itemize} +\item + Data Acquisition and Cleaning + + \begin{itemize} + \tightlist + \item + Where does the data come from? + \item + Who collected it? For what purpose? + \item + What kinds of collecting and recording systems and techniques were + used? + \item + How has this data been used in the past? + \item + What restrictions are there on access to the data, and what enables + you to have access? + \end{itemize} +\item + Exploratory Data Analysis \& Visualization + + \begin{itemize} + \tightlist + \item + What kind of personal or group identities have become salient in + this data? + \item + Which variables became salient, and what kinds of relationships do + we see between them? 
+ \item + Do any of the relationships made visible lend themselves to + arguments that might be potentially harmful to a particular + community? + \end{itemize} +\item + Prediction and Inference + + \begin{itemize} + \tightlist + \item + What does the prediction or inference do in the world? + \item + Are the results useful for the intended purposes? + \item + Are there benchmarks to compare the results? + \item + How are your predictions and inferences dependent upon the larger + system in which your model works? + \end{itemize} +\end{enumerate} + +\section{Key Takeaways}\label{key-takeaways} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Accuracy is a necessary, but not sufficient, condition of a fair + system. +\item + Fairness and transparency are context-dependent and + \textbf{sociotechnical} concepts. +\item + Learn to work with contexts, and consider how your data analysis will + reshape them. +\item + Keep in mind the power, and limits, of data analysis. +\end{enumerate} + +\bookmarksetup{startatroot} + +\chapter{Cross Validation and +Regularization}\label{cross-validation-and-regularization} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Recognize the need for validation and test sets to preview model + performance on unseen data +\item + Apply cross-validation to select model hyperparameters +\item + Understand the conceptual basis for L1 and L2 regularization +\end{itemize} + +\end{tcolorbox} + +At the end of the Feature Engineering lecture (Lecture 14), we arrived +at the issue of fine-tuning model complexity. We identified that a model +that's too complex can lead to overfitting while a model that's too +simple can lead to underfitting. This brings us to a natural question: +how do we control model complexity to avoid under- and overfitting? + +To answer this question, we will need to address two things: first, we +need to understand \emph{when} our model begins to overfit by assessing +its performance on unseen data. We can achieve this through +\textbf{cross-validation}. Secondly, we need to introduce a technique to +adjust the complexity of our models ourselves -- to do so, we will apply +\textbf{regularization}. + +\section{Cross-validation}\label{cross-validation} + +\subsection{Training, Test, and Validation +Sets}\label{training-test-and-validation-sets} + +From the last lecture, we learned that \emph{increasing} model +complexity \emph{decreased} our model's training error but +\emph{increased} its variance. This makes intuitive sense: adding more +features causes our model to fit more closely to data it encountered +during training, but it generalizes worse to new data that hasn't been +seen before. For this reason, a low training error is not always +representative of our model's underlying performance -- we need to also +assess how well it performs on unseen data to ensure that it is not +overfitting. + +Truly, the only way to know when our model overfits is by evaluating it +on unseen data. Unfortunately, that means we need to wait for more data. +This may be very expensive and time-consuming. + +How should we proceed? 
In this section, we will build up a viable +solution to this problem. + +\subsubsection{Test Sets}\label{test-sets} + +The simplest approach to avoid overfitting is to keep some of our data +``secret'' from ourselves. We can set aside a random portion of our full +dataset to use \emph{only} for testing purposes. The datapoints in this +\textbf{test set} will \emph{not} be used to fit the model. Instead, we +will: + +\begin{itemize} +\tightlist +\item + Use the remaining portion of our dataset -- now called the + \textbf{training set} -- to run ordinary least squares, gradient + descent, or some other technique to train our model, +\item + Take the fitted model and use it to make predictions on datapoints in + the test set. The model's performance on the test set (expressed as + the MSE, RMSE, etc.) is now indicative of how well it can make + predictions on \emph{unseen} data +\end{itemize} + +Importantly, the optimal model parameters were found by \emph{only} +considering the data in the training set. After the model has been +fitted to the training data, we do not change any parameters before +making predictions on the test set. Importantly, we only ever make +predictions on the test set \textbf{once} after all model design has +been completely finalized. We treat the test set performance as the +final test of how well a model does. To reiterate, the test set is only +ever touched once: to compute the performance of the model after all +fine-tuning has been completed. + +The process of sub-dividing our dataset into training and test sets is +known as a \textbf{train-test split}. Typically, between 10\% and 20\% +of the data is allocated to the test set. + +In \texttt{sklearn}, the \texttt{train\_test\_split} function +(\href{https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html}{documentation}) +of the \texttt{model\_selection} module allows us to automatically +generate train-test splits. + +We will work with the \texttt{vehicles} dataset from previous lectures. +As before, we will attempt to predict the \texttt{mpg} of a vehicle from +transformations of its \texttt{hp}. In the cell below, we allocate 20\% +of the full dataset to testing, and the remaining 80\% to training. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ warnings} +\NormalTok{warnings.filterwarnings(}\StringTok{\textquotesingle{}ignore\textquotesingle{}}\NormalTok{)} + +\CommentTok{\# Load the dataset and construct the design matrix} +\NormalTok{vehicles }\OperatorTok{=}\NormalTok{ sns.load\_dataset(}\StringTok{"mpg"}\NormalTok{).rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{"horsepower"}\NormalTok{:}\StringTok{"hp"}\NormalTok{\}).dropna()} +\NormalTok{X }\OperatorTok{=}\NormalTok{ vehicles[[}\StringTok{"hp"}\NormalTok{]]} +\NormalTok{X[}\StringTok{"hp\^{}2"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vehicles[}\StringTok{"hp"}\NormalTok{]}\OperatorTok{**}\DecValTok{2} +\NormalTok{X[}\StringTok{"hp\^{}3"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vehicles[}\StringTok{"hp"}\NormalTok{]}\OperatorTok{**}\DecValTok{3} +\NormalTok{X[}\StringTok{"hp\^{}4"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vehicles[}\StringTok{"hp"}\NormalTok{]}\OperatorTok{**}\DecValTok{4} + +\NormalTok{Y }\OperatorTok{=}\NormalTok{ vehicles[}\StringTok{"mpg"}\NormalTok{]} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sklearn.model\_selection }\ImportTok{import}\NormalTok{ train\_test\_split} + +\CommentTok{\# \textasciigrave{}test\_size\textasciigrave{} specifies the proportion of the full dataset that should be allocated to testing} +\CommentTok{\# \textasciigrave{}random\_state\textasciigrave{} makes our results reproducible for educational purposes} +\NormalTok{X\_train, X\_test, Y\_train, Y\_test }\OperatorTok{=}\NormalTok{ train\_test\_split(} +\NormalTok{ X, } +\NormalTok{ Y, } +\NormalTok{ test\_size}\OperatorTok{=}\FloatTok{0.2}\NormalTok{, } +\NormalTok{ random\_state}\OperatorTok{=}\DecValTok{220} +\NormalTok{ )} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"Size of full dataset: }\SpecialCharTok{\{}\NormalTok{X}\SpecialCharTok{.}\NormalTok{shape[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{ points"}\NormalTok{)} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"Size of training set: }\SpecialCharTok{\{}\NormalTok{X\_train}\SpecialCharTok{.}\NormalTok{shape[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{ points"}\NormalTok{)} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"Size of test set: }\SpecialCharTok{\{}\NormalTok{X\_test}\SpecialCharTok{.}\NormalTok{shape[}\DecValTok{0}\NormalTok{]}\SpecialCharTok{\}}\SpecialStringTok{ points"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Size of full dataset: 392 points +Size of training set: 313 points +Size of test set: 79 points +\end{verbatim} + +After performing our train-test split, we fit a model to the training +set and assess its performance on the test set. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ sklearn.linear\_model }\ImportTok{as}\NormalTok{ lm} +\ImportTok{from}\NormalTok{ sklearn.metrics }\ImportTok{import}\NormalTok{ mean\_squared\_error} + +\NormalTok{model }\OperatorTok{=}\NormalTok{ lm.LinearRegression()} + +\CommentTok{\# Fit to the training set} +\NormalTok{model.fit(X\_train, Y\_train)} + +\CommentTok{\# Calculate errors} +\NormalTok{train\_error }\OperatorTok{=}\NormalTok{ mean\_squared\_error(Y\_train, model.predict(X\_train))} +\NormalTok{test\_error }\OperatorTok{=}\NormalTok{ mean\_squared\_error(Y\_test, model.predict(X\_test))} + +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"Training error: }\SpecialCharTok{\{}\NormalTok{train\_error}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f"Test error: }\SpecialCharTok{\{}\NormalTok{test\_error}\SpecialCharTok{\}}\SpecialStringTok{"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Training error: 17.858516841012097 +Test error: 23.19240562932651 +\end{verbatim} + +\subsubsection{Validation Sets}\label{validation-sets} + +Now, what if we were dissatisfied with our test set performance? With +our current framework, we'd be stuck. As outlined previously, assessing +model performance on the test set is the \emph{final} stage of the model +design process; we can't go back and adjust our model based on the new +discovery that it is overfitting. If we did, then we would be +\emph{factoring in information from the test set} to design our model. +The test error would no longer be a true representation of the model's +performance on \emph{unseen} data! + +Our solution is to introduce a \textbf{validation set}. A validation set +is a random portion of the \emph{training set} that is set aside for +assessing model performance while the model is \emph{still being +developed}. The process for using a validation set is: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Perform a train-test split. +\item + Set the test set aside; we will not touch it until the very end of the + model design process. +\item + Set aside a portion of the training set to be used for validation. +\item + Fit the model parameters to the datapoints contained in the remaining + portion of the training set. +\item + Assess the model's performance on the validation set. Adjust the model + as needed, re-fit it to the remaining portion of the training set, + then re-evaluate it on the validation set. Repeat as necessary until + you are satisfied. +\item + After \emph{all} model development is complete, assess the model's + performance on the test set. This is the final test of how well the + model performs on unseen data. No further modifications should be made + to the model. +\end{enumerate} + +The process of creating a validation set is called a \textbf{validation +split}. + +Note that the validation error behaves quite differently from the +training error explored previously. As the model becomes more complex, +it makes better predictions on the training data; the variance of the +model typically increases as model complexity increases. Validation +error, on the other hand, decreases \emph{then increases} as we increase +model complexity. 
This reflects the transition from under- to +overfitting: at low model complexity, the model underfits because it is +not complex enough to capture the main trends in the data; at high model +complexity, the model overfits because it ``memorizes'' the training +data too closely. + +We can update our understanding of the relationships between error, +complexity, and model variance: + +Our goal is to train a model with complexity near the orange dotted line +-- this is where our model minimizes the validation error. Note that +this relationship is a simplification of the real-world, but it's a good +enough approximation for the purposes of Data 100. + +\subsection{K-Fold Cross-Validation}\label{k-fold-cross-validation} + +Introducing a validation set gave us an ``extra'' chance to assess model +performance on another set of unseen data. We are able to finetune the +model design based on its performance on this one set of validation +data. + +But what if, by random chance, our validation set just happened to +contain many outliers? It is possible that the validation datapoints we +set aside do not actually represent other unseen data that the model +might encounter. Ideally, we would like to validate our model's +performance on several different unseen datasets. This would give us +greater confidence in our understanding of how the model behaves on new +data. + +Let's think back to our validation framework. Earlier, we set aside +\(x\)\% of our training data (say, 20\%) to use for validation. + +In the example above, we set aside the first 20\% of training datapoints +for the validation set. This was an arbitrary choice. We could have set +aside \emph{any} 20\% portion of the training data for validation. In +fact, there are 5 non-overlapping ``chunks'' of training points that we +could have designated as the validation set. + +The common term for one of these chunks is a \textbf{fold}. In the +example above, we had 5 folds, each containing 20\% of the training +data. This gives us a new perspective: we really have \emph{5} +validation sets ``hidden'' in our training set. + +In \textbf{cross-validation}, we perform validation splits for each fold +in the training set. For a dataset with \(K\) folds, we: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Pick one fold to be the validation fold +\item + Fit the model to training data from every fold \emph{other} than the + validation fold +\item + Compute the model's error on the validation fold and record it +\item + Repeat for all \(K\) folds +\end{enumerate} + +The \textbf{cross-validation error} is then the \emph{average} error +across all \(K\) validation folds. In the example below, the +cross-validation error is the mean of validation errors \#1 to \#5. + +\subsection{Model Selection Workflow}\label{model-selection-workflow} + +At this stage, we have refined our model selection workflow. We begin by +performing a train-test split to set aside a test set for the final +evaluation of model performance. Then, we alternate between adjusting +our design matrix and computing the cross-validation error to finetune +the model's design. In the example below, we illustrate the use of +4-fold cross-validation to help inform model design. + +\subsection{Hyperparameters}\label{hyperparameters} + +An important use of cross-validation is for \textbf{hyperparameter} +selection. A hyperparameter is some value in a model that is chosen +\emph{before} the model is fit to any data. 
This means that it is +distinct from the \emph{model parameters}, \(\theta_i\), because its +value is selected \emph{before} the training process begins. We cannot +use our usual techniques -- calculus, ordinary least squares, or +gradient descent -- to choose its value. Instead, we must decide it +ourselves. + +Some examples of hyperparameters in Data 100 are: + +\begin{itemize} +\tightlist +\item + The degree of our polynomial model (recall that we selected the degree + before creating our design matrix and calling \texttt{.fit}) +\item + The learning rate, \(\alpha\), in gradient descent +\item + The regularization penalty, \(\lambda\) (to be introduced later this + lecture) +\end{itemize} + +To select a hyperparameter value via cross-validation, we first list out +several ``guesses'' for what the best hyperparameter may be. For each +guess, we then run cross-validation to compute the cross-validation +error incurred by the model when using that choice of hyperparameter +value. We then select the value of the hyperparameter that resulted in +the lowest cross-validation error. + +For example, we may wish to use cross-validation to decide what value we +should use for \(\alpha\), which controls the step size of each gradient +descent update. To do so, we list out some possible guesses for the best +\(\alpha\), like 0.1, 1, and 10. For each possible value, we perform +cross-validation to see what error the model has when we use that value +of \(\alpha\) to train it. + +\section{Regularization}\label{regularization} + +We've now addressed the first of our two goals for today: creating a +framework to assess model performance on unseen data. Now, we'll discuss +our second objective: developing a technique to adjust model complexity. +This will allow us to directly tackle the issues of under- and +overfitting. + +Earlier, we adjusted the complexity of our polynomial model by tuning a +hyperparameter -- the degree of the polynomial. We tested out several +different polynomial degrees, computed the validation error for each, +and selected the value that minimized the validation error. Tweaking the +``complexity'' was simple; it was only a matter of adjusting the +polynomial degree. + +In most machine learning problems, complexity is defined differently +from what we have seen so far. Today, we'll explore two different +definitions of complexity: the \emph{squared} and \emph{absolute} +magnitude of \(\theta_i\) coefficients. + +\subsection{Constraining Model +Parameters}\label{constraining-model-parameters} + +Think back to our work using gradient descent to descend down a loss +surface. You may find it helpful to refer back to the Gradient Descent +note to refresh your memory. Our aim was to find the combination of +model parameters that the smallest, minimum loss. We visualized this +using a contour map by plotting possible parameter values on the +horizontal and vertical axes, which allows us to take a bird's eye view +above the loss surface. Notice that the contour map has \(p=2\) +parameters for ease of visualization. We want to find the model +parameters corresponding to the lowest point on the loss surface. + +Let's review our current modeling framework. + +\[\hat{\mathbb{Y}} = \theta_0 + \theta_1 \phi_1 + \theta_2 \phi_2 + \ldots + \theta_p \phi_p\] + +Recall that we represent our features with \(\phi_i\) to reflect the +fact that we have performed feature engineering. + +Previously, we restricted model complexity by limiting the total number +of features present in the model. 
We only included a limited number of +polynomial features at a time; all other polynomials were excluded from +the model. + +What if, instead of fully removing particular features, we kept all +features and used each one only a ``little bit''? If we put a limit on +how \emph{much} each feature can contribute to the predictions, we can +still control the model's complexity without the need to manually +determine how many features should be removed. + +What do we mean by a ``little bit''? Consider the case where some +parameter \(\theta_i\) is close to or equal to 0. Then, feature +\(\phi_i\) barely impacts the prediction -- the feature is weighted by +such a small value that its presence doesn't significantly change the +value of \(\hat{\mathbb{Y}}\). If we restrict how large each parameter +\(\theta_i\) can be, we restrict how much feature \(\phi_i\) contributes +to the model. This has the effect of \emph{reducing} model complexity. + +In \textbf{regularization}, we restrict model complexity by putting a +limit on the \emph{magnitudes} of the model parameters \(\theta_i\). + +What do these limits look like? Suppose we specify that the sum of all +absolute parameter values can be no greater than some number \(Q\). In +other words: + +\[\sum_{i=1}^p |\theta_i| \leq Q\] + +where \(p\) is the total number of parameters in the model. You can +think of this as us giving our model a ``budget'' for how it distributes +the magnitudes of each parameter. If the model assigns a large value to +some \(\theta_i\), it may have to assign a small value to some other +\(\theta_j\). This has the effect of increasing feature \(\phi_i\)'s +influence on the predictions while decreasing the influence of feature +\(\phi_j\). The model will need to be strategic about how the parameter +weights are distributed -- ideally, more ``important'' features will +receive greater weighting. + +Notice that the intercept term, \(\theta_0\), is excluded from this +constraint. \textbf{We typically do not regularize the intercept term}. + +Now, let's think back to gradient descent and visualize the loss surface +as a contour map. As a refresher, a loss surface means that each point +represents the model's loss for a particular combination of +\(\theta_1\), \(\theta_2\). Let's say our goal is to find the +combination of parameters that gives us the lowest loss. + +With no constraint, the optimal \(\hat{\theta}\) is in the center. We +denote this as \(\hat{\theta}_\text{No Reg}\). + +Applying this constraint limits what combinations of model parameters +are valid. We can now only consider parameter combinations with a total +absolute sum less than or equal to our number \(Q\). For our 2D example, +the constraint \(\sum_{i=1}^p |\theta_i| \leq Q\) can be rewritten as +\(|\theta_0| + |\theta_1| \leq Q\). This means that we can only assign +our \emph{regularized} parameter vector \(\hat{\theta}_{\text{Reg}}\) to +positions in the green diamond below. + +We can no longer select the parameter vector that \emph{truly} minimizes +the loss surface, \(\hat{\theta}_{\text{No Reg}}\), because this +combination of parameters does not lie within our allowed region. +Instead, we select whatever allowable combination brings us +\emph{closest} to the true minimum loss, which is depicted by the red +point below. + +Notice that, under regularization, our optimized \(\theta_1\) and +\(\theta_2\) values are much smaller than they were without +regularization (indeed, \(\theta_1\) has decreased to 0). 
The model has +\emph{decreased in complexity} because we have limited how much our +features contribute to the model. In fact, by setting its parameter to +0, we have effectively removed the influence of feature \(\phi_1\) from +the model altogether. + +If we change the value of \(Q\), we change the region of allowed +parameter combinations. The model will still choose the combination of +parameters that produces the lowest loss -- the closest point in the +constrained region to the true minimizer, +\(\hat{\theta}_{\text{No Reg}}\). + +When \(Q\) is small, we severely restrict the size of our parameters. +\(\theta_i\)s are small in value, and features \(\phi_i\) only +contribute a little to the model. The allowed region of model parameters +contracts, and the model becomes much simpler: + +When \(Q\) is large, we do not restrict our parameter sizes by much. +\(\theta_i\)s are large in value, and features \(\phi_i\) contribute +more to the model. The allowed region of model parameters expands, and +the model becomes more complex: + +Consider the extreme case of when \(Q\) is extremely large. In this +situation, our restriction has essentially no effect, and the allowed +region includes the OLS solution! + +Now what if \(Q\) was extremely small? Most parameters are then set to +(essentially) 0. + +\begin{itemize} +\tightlist +\item + If the model has no intercept term: + \(\hat{\mathbb{Y}} = (0)\phi_1 + (0)\phi_2 + \ldots = 0\). +\item + If the model has an intercept term: + \(\hat{\mathbb{Y}} = (0)\phi_1 + (0)\phi_2 + \ldots = \theta_0\). + Remember that the intercept term is excluded from the constraint - + this is so we avoid the situation where we always predict 0. +\end{itemize} + +Let's summarize what we have seen. + +\subsection{L1 (LASSO) Regularization}\label{l1-lasso-regularization} + +How do we actually apply our constraint +\(\sum_{i=1}^p |\theta_i| \leq Q\)? We will do so by modifying the +\emph{objective function} that we seek to minimize when fitting a model. + +Recall our ordinary least squares objective function: our goal was to +find parameters that minimize the model's mean squared error: + +\[\frac{1}{n} \sum_{i=1}^n (y_i - \hat{y}_i)^2 = \frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\] + +To apply our constraint, we need to rephrase our minimization goal as: + +\[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\:\text{such that} \sum_{i=1}^p |\theta_i| \leq Q\] + +Unfortunately, we can't directly use this formulation as our objective +function -- it's not easy to mathematically optimize over a constraint. +Instead, we will apply the magic of the +\href{https://en.wikipedia.org/wiki/Duality_(optimization)}{Lagrangian +Duality}. The details of this are out of scope (take EECS 127 if you're +interested in learning more), but the end result is very useful. It +turns out that minimizing the following \emph{augmented} objective +function is \emph{equivalent} to our minimization goal above. 
+ +\[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2 + \lambda \sum_{i=1}^p \vert \theta_i \vert\] +\[ = \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda \sum_{i=1}^p |\theta_i|\] +\[ = \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda || \theta ||_1\] + +The last two expressions include the MSE expressed using vector +notation, and the last expression writes \(\sum_{i=1}^p |\theta_i|\) as +it's \textbf{L1 norm} equivalent form, \(|| \theta ||_1\). + +Notice that we've replaced the constraint with a second term in our +objective function. We're now minimizing a function with an additional +regularization term that \emph{penalizes large coefficients}. In order +to minimize this new objective function, we'll end up balancing two +components: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Keeping the model's error on the training data low, represented by the + term + \(\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 x_{i, 1} + \theta_2 x_{i, 2} + \ldots + \theta_p x_{i, p}))^2\) +\item + Keeping the magnitudes of model parameters low, represented by the + term \(\lambda \sum_{i=1}^p |\theta_i|\) +\end{enumerate} + +The \(\lambda\) factor controls the degree of regularization. Roughly +speaking, \(\lambda\) is related to our \(Q\) constraint from before by +the rule \(\lambda \approx \frac{1}{Q}\). To understand why, let's +consider two extreme examples. Recall that our goal is to minimize the +cost function: +\(\frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda || \theta ||_1\). + +\begin{itemize} +\item + Assume \(\lambda \rightarrow \infty\). Then, + \(\lambda || \theta ||_1\) dominates the cost function. In order to + neutralize the \(\infty\) and minimize this term, we set + \(\theta_j = 0\) for all \(j \ge 1\). This is a very constrained model + that is mathematically equivalent to the constant model +\item + Assume \(\lambda \rightarrow 0\). Then, \(\lambda || \theta ||_1=0\). + Minimizing the cost function is equivalent to minimizing + \(\frac{1}{n} || Y - X\theta ||_2^2\), our usual MSE loss function. + The act of minimizing MSE loss is just our familiar OLS, and the + optimal solution is the global minimum + \(\hat{\theta} = \hat\theta_{No Reg.}\). +\end{itemize} + +We call \(\lambda\) the \textbf{regularization penalty hyperparameter}; +it needs to be determined \emph{prior} to training the model, so we must +find the best value via cross-validation. + +The process of finding the optimal \(\hat{\theta}\) to minimize our new +objective function is called \textbf{L1 regularization}. It is also +sometimes known by the acronym ``LASSO'', which stands for ``Least +Absolute Shrinkage and Selection Operator.'' + +Unlike ordinary least squares, which can be solved via the closed-form +solution +\(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\), +\textbf{there is no closed-form solution for the optimal parameter +vector under L1 regularization}. Instead, we use the \texttt{Lasso} +model class of \texttt{sklearn}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ sklearn.linear\_model }\ImportTok{as}\NormalTok{ lm} + +\CommentTok{\# The alpha parameter represents our lambda term} +\NormalTok{lasso\_model }\OperatorTok{=}\NormalTok{ lm.Lasso(alpha}\OperatorTok{=}\DecValTok{2}\NormalTok{)} +\NormalTok{lasso\_model.fit(X\_train, Y\_train)} + +\NormalTok{lasso\_model.coef\_} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +array([-2.54932056e-01, -9.48597165e-04, 8.91976284e-06, -1.22872290e-08]) +\end{verbatim} + +Notice that all model coefficients are very small in magnitude. In fact, +some of them are so small that they are essentially 0. An important +characteristic of L1 regularization is that many model parameters are +set to 0. In other words, LASSO effectively \textbf{selects only a +subset} of the features. The reason for this comes back to our loss +surface and allowed ``diamond'' regions from earlier -- we can often get +closer to the lowest loss contour at a corner of the diamond than along +an edge. + +When a model parameter is set to 0 or close to 0, its corresponding +feature is essentially removed from the model. We say that L1 +regularization performs \textbf{feature selection} because, by setting +the parameters of unimportant features to 0, LASSO ``selects'' which +features are more useful for modeling. L1 regularization indicates that +the features with non-zero parameters are more informative for modeling +than those with parameters set to zero. + +\subsection{Scaling Features for +Regularization}\label{scaling-features-for-regularization} + +The regularization procedure we just performed had one subtle issue. To +see what it is, let's take a look at the design matrix for our +\texttt{lasso\_model}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{X\_train.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& hp & hp\^{}2 & hp\^{}3 & hp\^{}4 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +259 & 85.0 & 7225.0 & 614125.0 & 52200625.0 \\ +129 & 67.0 & 4489.0 & 300763.0 & 20151121.0 \\ +207 & 102.0 & 10404.0 & 1061208.0 & 108243216.0 \\ +302 & 70.0 & 4900.0 & 343000.0 & 24010000.0 \\ +71 & 97.0 & 9409.0 & 912673.0 & 88529281.0 \\ +\end{longtable} + +Our features -- \texttt{hp}, \texttt{hp\^{}2}, \texttt{hp\^{}3}, and +\texttt{hp\^{}4} -- are on drastically different numeric scales! The +values contained in \texttt{hp\^{}4} are orders of magnitude larger than +those contained in \texttt{hp}. This can be a problem because the value +of \texttt{hp\^{}4} will naturally contribute more to each predicted +\(\hat{y}\) because it is so much greater than the values of the other +features. For \texttt{hp} to have much of an impact at all on the +prediction, it must be scaled by a large model parameter. + +By inspecting the fitted parameters of our model, we see that this is +the case -- the parameter for \texttt{hp} is much larger in magnitude +than the parameter for \texttt{hp\^{}4}. 
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{pd.DataFrame(\{}\StringTok{"Feature"}\NormalTok{:X\_train.columns, }\StringTok{"Parameter"}\NormalTok{:lasso\_model.coef\_\})}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{longtable}[]{@{}lll@{}}
+\toprule\noalign{}
+& Feature & Parameter \\
+\midrule\noalign{}
+\endhead
+\bottomrule\noalign{}
+\endlastfoot
+0 & hp & -2.549321e-01 \\
+1 & hp\^{}2 & -9.485972e-04 \\
+2 & hp\^{}3 & 8.919763e-06 \\
+3 & hp\^{}4 & -1.228723e-08 \\
+\end{longtable}
+
+Recall that by applying regularization, we give our model a ``budget''
+for how it can allocate the values of model parameters. For \texttt{hp}
+to have much of an impact on each prediction, LASSO is forced to
+``spend'' more of this budget on the parameter for \texttt{hp}.
+
+We can avoid this issue by \textbf{scaling} the data before
+regularizing. This is a process where we convert all features to the
+same numeric scale. A common way to scale data is to perform
+\textbf{standardization} such that all features have mean 0 and standard
+deviation 1; essentially, we replace every value with its Z-score.
+
+\[z_i = \frac{x_i - \mu}{\sigma}\]
+
+\subsection{L2 (Ridge) Regularization}\label{l2-ridge-regularization}
+
+In all of our work above, we considered the constraint
+\(\sum_{i=1}^p |\theta_i| \leq Q\) to limit the complexity of the model.
+What if we had applied a different constraint?
+
+In \textbf{L2 regularization}, also known as \textbf{ridge regression},
+we constrain the model such that the sum of the \emph{squared}
+parameters must be less than some number \(Q\). This constraint takes
+the form:
+
+\[\sum_{i=1}^p \theta_i^2 \leq Q\]
+
+As before, \textbf{we typically do not regularize the intercept term}.
+
+In our 2D example, the constraint becomes
+\(\theta_1^2 + \theta_2^2 \leq Q\). Can you see how this is similar to
+the equation for a circle, \(x^2 + y^2 = r^2\)? The allowed region of
+parameters for a given value of \(Q\) is now shaped like a ball.
+
+If we modify our objective function like before, we find that our new
+goal is to minimize the function:
+\[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2\:\text{such that} \sum_{i=1}^p \theta_i^2 \leq Q\]
+
+Notice that all we have done is change the constraint on the model
+parameters. The first term in the expression, the MSE, has not changed.
+
+Using Lagrangian Duality (again, out of scope for Data 100), we can
+re-express our objective function as:
+\[\frac{1}{n} \sum_{i=1}^n (y_i - (\theta_0 + \theta_1 \phi_{i, 1} + \theta_2 \phi_{i, 2} + \ldots + \theta_p \phi_{i, p}))^2 + \lambda \sum_{i=1}^p \theta_i^2\]
+\[= \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda \sum_{i=1}^p \theta_i^2\]
+\[= \frac{1}{n}||\mathbb{Y} - \mathbb{X}\theta||_2^2 + \lambda || \theta ||_2^2\]
+
+The last two expressions include the MSE expressed using vector
+notation, and the last expression writes \(\sum_{i=1}^p \theta_i^2\) as
+its \textbf{L2 norm} equivalent form, \(|| \theta ||_2^2\).
+
+When applying L2 regularization, our goal is to minimize this updated
+objective function.
+
+Unlike L1 regularization, L2 regularization \emph{does} have a
+closed-form solution for the best parameter vector when regularization
+is applied:
+
+\[\hat\theta_{\text{ridge}} = (\mathbb{X}^{\top}\mathbb{X} + n\lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\]
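+
+As a quick sketch of what this formula computes, we can evaluate it
+directly with \texttt{numpy} on the \texttt{X\_train} and
+\texttt{Y\_train} from earlier. Here \(\lambda = 0.1\) is an arbitrary
+choice, and for simplicity we use no intercept column and no feature
+scaling.
+
+\begin{Shaded}
+\begin{Highlighting}[]
+import numpy as np
+
+X_mat = X_train.to_numpy()
+Y_vec = Y_train.to_numpy()
+n, p = X_mat.shape
+lam = 0.1  # arbitrary regularization penalty, chosen only for illustration
+
+# theta_ridge solves (X^T X + n*lambda*I) theta = X^T Y
+theta_ridge = np.linalg.solve(X_mat.T @ X_mat + n * lam * np.eye(p), X_mat.T @ Y_vec)
+\end{Highlighting}
+\end{Shaded}
+
+Using \texttt{np.linalg.solve} avoids explicitly inverting the matrix,
+which is generally more numerically stable.
+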
+This solution exists \textbf{even if \(\mathbb{X}\) is not full column
+rank}. This is a major reason why L2 regularization is often used -- it
+can produce a solution even when there is collinearity in the features.
+We will discuss the concept of collinearity in a future lecture, but we
+will not derive this result in Data 100, as it involves a fair bit of
+matrix calculus.
+
+In \texttt{sklearn}, we perform L2 regularization using the
+\texttt{Ridge} class, which minimizes the L2 objective function above.
+In practice, we would also scale the features before regularizing, as
+discussed in the previous section; for simplicity, the cell below fits
+directly to the unscaled \texttt{X\_train}.
+
+\begin{Shaded}
+\begin{Highlighting}[]
+\NormalTok{ridge\_model }\OperatorTok{=}\NormalTok{ lm.Ridge(alpha}\OperatorTok{=}\DecValTok{1}\NormalTok{) }\CommentTok{\# alpha represents the hyperparameter lambda}
+\NormalTok{ridge\_model.fit(X\_train, Y\_train)}
+
+\NormalTok{ridge\_model.coef\_}
+\end{Highlighting}
+\end{Shaded}
+
+\begin{verbatim}
+array([ 5.89130559e-02, -6.42445915e-03, 4.44468157e-05, -8.83981945e-08])
+\end{verbatim}
+
+\section{Regression Summary}\label{regression-summary}
+
+Our regression models are summarized below. Note that the objective
+function is what we minimize when fitting each model.
+
+\begin{longtable}[]{@{}
+  >{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.0605}}
+  >{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.1423}}
+  >{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.0534}}
+  >{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.0569}}
+  >{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.3238}}
+  >{\raggedright\arraybackslash}p{(\columnwidth - 10\tabcolsep) * \real{0.3630}}@{}}
+\toprule\noalign{}
+\begin{minipage}[b]{\linewidth}\raggedright
+Type
+\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
+Model
+\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
+Loss
+\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
+Regularization
+\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
+Objective Function
+\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
+Solution
+\end{minipage} \\
+\midrule\noalign{}
+\endhead
+\bottomrule\noalign{}
+\endlastfoot
+OLS & \(\hat{\mathbb{Y}} = \mathbb{X}\theta\) & MSE & None &
+\(\frac{1}{n} \|\mathbb{Y}-\mathbb{X} \theta\|^2_2\) &
+\(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\)
+if \(\mathbb{X}\) is full column rank \\
+Ridge & \(\hat{\mathbb{Y}} = \mathbb{X} \theta\) & MSE & L2 &
+\(\frac{1}{n} \|\mathbb{Y}-\mathbb{X}\theta\|^2_2 + \lambda \sum_{i=1}^p \theta_i^2\)
+&
+\(\hat{\theta}_{ridge} = (\mathbb{X}^{\top}\mathbb{X} + n \lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\) \\
+LASSO & \(\hat{\mathbb{Y}} = \mathbb{X} \theta\) & MSE & L1 &
+\(\frac{1}{n} \|\mathbb{Y}-\mathbb{X}\theta\|^2_2 + \lambda \sum_{i=1}^p \vert \theta_i \vert\)
+& No closed form solution \\
+\end{longtable}
+
+\bookmarksetup{startatroot}
+
+\chapter{Random Variables}\label{random-variables}
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+\begin{itemize}
+\tightlist
+\item
+  Define a random variable in terms of its distribution
+\item
+  Compute the expectation and variance of a random 
variable +\item + Gain familiarity with the Bernoulli and binomial random variables +\end{itemize} + +\end{tcolorbox} + +In the past few lectures, we've examined the role of complexity in +influencing model performance. We've considered model complexity in the +context of a tradeoff between two competing factors: model variance and +training error. + +So far, our analysis has been mostly qualitative. We've acknowledged +that our choice of model complexity needs to strike a balance between +model variance and training error, but we haven't yet discussed +\emph{why} exactly this tradeoff exists. + +To better understand the origin of this tradeoff, we will need to dive +into \textbf{random variables}. The next two course notes on probability +will be a brief digression from our work on modeling so we can build up +the concepts needed to understand this so-called \textbf{bias-variance +tradeoff}. In specific, we will cover: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Random Variables: introduce random variables, considering the concepts + of expectation, variance, and covariance +\item + Estimators, Bias, and Variance: re-express the ideas of model variance + and training error in terms of random variables and use this new + perspective to investigate our choice of model complexity +\end{enumerate} + +We'll go over just enough probability to help you understand its +implications for modeling, but if you want to go a step further, take +Data 140, CS 70, and/or EECS 126. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Data 8 Recap}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Recall the following concepts from Data 8: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Sample mean: The mean of the random sample +\item + Central Limit Theorem: If you draw a large random sample with + replacement, then, regardless of the population distribution, the + probability distribution of the sample mean + + \begin{enumerate} + \def\labelenumii{\alph{enumii}.} + \item + is roughly normal + \item + is centered at the population mean + \item + has an + \(SD = \frac{\text{population SD}}{\sqrt{\text{sample size}}}\) + \end{enumerate} +\end{enumerate} + +\end{tcolorbox} + +In Data 100, we want to understand the broader relationship between the +following: + +\begin{itemize} +\tightlist +\item + \textbf{Population parameter}: a number that describes something about + the population +\item + \textbf{Sample statistic}: an estimate of the number computed on a + sample +\end{itemize} + +\section{Random Variables and +Distributions}\label{random-variables-and-distributions} + +Suppose we generate a set of random data, like a random sample from some +population. A \textbf{random variable} is a \emph{function} from the +outcome of a random event to a number. + +It is \emph{random} since our sample was drawn at random; it is +\emph{variable} because its exact value depends on how this random +sample came out. As such, the domain or input of our random variable is +all possible outcomes for some random event in a \emph{sample space}, +and its range or output is the real number line. 
We typically denote +random variables with uppercase letters, such as \(X\) or \(Y\). In +contrast, note that regular variables tend to be denoted using lowercase +letters. Sometimes we also use uppercase letters to refer to matrices +(such as your design matrix \(\mathbb{X}\)), but we will do our best to +be clear with the notation. + +To motivate what this (rather abstract) definition means, let's consider +the following examples: + +\subsection{Example: Tossing a Coin}\label{example-tossing-a-coin} + +Let's formally define a fair coin toss. A fair coin can land on heads +(\(H\)) or tails (\(T\)), each with a probability of 0.5. With these +possible outcomes, we can define a random variable \(X\) as: +\[X = \begin{cases} + 1, \text{if the coin lands heads} \\ + 0, \text{if the coin lands tails} + \end{cases}\] + +\(X\) is a function with a domain, or input, of \(\{H, T\}\) and a +range, or output, of \(\{1, 0\}\). In practice, while we don't use the +following function notation, you could write the above as +\[X = \begin{cases} X(H) = 1 \\ X(T) = 0 \end{cases}\] + +\subsection{Example: Sampling Data 100 +Students}\label{example-sampling-data-100-students} + +Suppose we draw a random sample \(s\) of size 3 from all students +enrolled in Data 100. + +We can define \(Y\) as the number of data science students in our +sample. Its domain is all possible samples of size 3, and its range is +\(\{0, 1, 2, 3\}\). + +Note that we can use random variables in mathematical expressions to +create new random variables. + +For example, let's say we sample 3 students at random from lecture and +look at their midterm scores. Let \(X_1\), \(X_2\), and \(X_3\) +represent each student's midterm grade. + +We can use these random variables to create a new random variable, +\(Y\), which represents the average of the 3 scores: +\(Y = (X_1 + X_2 + X_3)/3\). + +As we're creating this random variable, a few questions arise: + +\begin{itemize} +\tightlist +\item + What can we say about the distribution of \(Y\)? +\item + How does it depend on the distribution of \(X_1\), \(X_2\), and + \(X_3\)? +\end{itemize} + +But, what exactly is a distribution? Let's dive into this! + +\subsection{Distributions}\label{distributions} + +To define any random variable \(X\), we need to be able to specify 2 +things: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \textbf{Possible values}: the set of values the random variable can + take on. +\item + \textbf{Probabilities}: the set of probabilities describing how the + total probability of 100\% is split over the possible values. +\end{enumerate} + +If \(X\) is discrete (has a finite number of possible values), the +probability that a random variable \(X\) takes on the value \(x\) is +given by \(P(X=x)\), and probabilities must sum to 1: +\(\sum_{\text{all } x} P(X=x) = 1\), + +We can often display this using a \textbf{probability distribution +table}. In the coin toss example, the probability distribution table of +\(X\) is given by. + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +\(x\) & \(P(X=x)\) \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & \(\frac{1}{2}\) \\ +1 & \(\frac{1}{2}\) \\ +\end{longtable} + +The \textbf{distribution} of a random variable \(X\) describes how the +total probability of 100\% is split across all the possible values of +\(X\), and it fully defines a random variable. 
If you know the
+distribution of a random variable you can:
+
+\begin{itemize}
+\tightlist
+\item
+  compute properties of the random variable and of variables derived
+  from it
+\item
+  simulate the random variable by randomly picking values of \(X\)
+  according to its distribution using \texttt{np.random.choice},
+  \texttt{df.sample}, or
+  \texttt{scipy.stats.\textless{}dist\textgreater{}.rvs(...)}
+\end{itemize}
+
+The distribution of a discrete random variable can also be represented
+using a histogram. If a variable is \textbf{continuous}, meaning it can
+take on infinitely many values, we can illustrate its distribution using
+a density curve.
+
+We often don't know the (true) distribution and instead compute an
+empirical distribution. If you flip a coin 3 times and get \{H, H, T\},
+you may ask: what is the probability that the coin will land heads? We
+can come up with an \textbf{empirical estimate} of \(\frac{2}{3}\),
+though the true probability might be \(\frac{1}{2}\).
+
+Probabilities are areas. For discrete random variables, the \emph{area
+of the red bars} represents the probability that a discrete random
+variable \(X\) falls within those values. For continuous random
+variables, the \emph{area under the curve} represents the probability
+that a continuous random variable \(Y\) falls within those values.
+
+If we sum up the total area of the bars/under the density curve, we
+should get 100\%, or 1.
+
+We can show the distribution of \(Y\) in the following tables. The table
+on the left lists all possible samples of \(s\) and the number of times
+they can appear (\(Y(s)\)). We can use this to calculate the values for
+the table on the right, a \textbf{probability distribution table}.
+
+Rather than fully write out a probability distribution or show a
+histogram, there are some common distributions that come up frequently
+when doing data science. These distributions are specified by some
+\textbf{parameters}, which are constants that specify the shape of the
+distribution. In terms of notation, the `\textasciitilde{}' means ``has
+the probability distribution of''.
+
+These common distributions are listed below:
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\tightlist
+\item
+  Bernoulli(\(p\)): If \(X\) \textasciitilde{} Bernoulli(\(p\)), then
+  \(X\) takes on the value 1 with probability \(p\), and 0 with
+  probability \(1 - p\). Bernoulli random variables are also termed
+  ``indicator'' random variables.
+\item
+  Binomial(\(n\), \(p\)): If \(X\) \textasciitilde{} Binomial(\(n\),
+  \(p\)), then \(X\) counts the number of 1s in \(n\) independent
+  Bernoulli(\(p\)) trials.
+\item
+  Categorical(\(p_1, ..., p_k\)): \(X\) takes on one of \(k\) possible
+  values, where value \(i\) occurs with probability \(p_i\) and the
+  probabilities \(p_1, ..., p_k\) sum to 1.
+\item
+  Uniform on the unit interval (0, 1): The density is flat at 1 on (0,
+  1) and 0 elsewhere. We won't get into what density means as much here,
+  but intuitively, this is saying that there's an equally likely chance
+  of getting any value on the interval (0, 1).
+\item
+  Normal(\(\mu\), \(\sigma^2\)): The probability density is specified by
+  \(\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}\).
+  This bell-shaped distribution comes up fairly often in data, in part
+  due to the Central Limit Theorem you saw back in Data 8.
+\end{enumerate}
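+
+To make ``simulate the random variable by picking values according to
+its distribution'' concrete, here is a small sketch using a couple of
+the distributions above. The sample size and parameter values are
+arbitrary choices for illustration.
+
+\begin{Shaded}
+\begin{Highlighting}[]
+import numpy as np
+from scipy import stats
+
+num_draws = 100_000  # arbitrary number of simulated draws
+
+# Bernoulli(p = 0.5): simulate fair coin tosses with np.random.choice
+tosses = np.random.choice([0, 1], size=num_draws, p=[0.5, 0.5])
+print("Empirical P(X = 1):", tosses.mean())  # close to 0.5
+
+# Binomial(n = 10, p = 0.5): number of 1s in 10 Bernoulli(0.5) trials
+num_heads = stats.binom.rvs(n=10, p=0.5, size=num_draws)
+print("Empirical average count:", num_heads.mean())  # close to 5
+\end{Highlighting}
+\end{Shaded}
+
+As the number of simulated draws grows, these empirical averages settle
+down near a fixed number determined by the distribution; that number is
+the \emph{expectation}, which we define next.
+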
The methods shown +above ------ a table of all samples \(s, X(s)\), distribution table +\(P(X=x)\), and histograms ------ are all definitions that \emph{fully +describe} a random variable. Often, it is easier to describe a random +variable using some \emph{numerical summary} rather than fully defining +its distribution. These numerical summaries are numbers that +characterize some properties of the random variable. Because they give a +``summary'' of how the variable tends to behave, they are \emph{not} +random. Instead, think of them as a static number that describes a +certain property of the random variable. In Data 100, we will focus our +attention on the expectation and variance of a random variable. + +\subsection{Expectation}\label{expectation} + +The \textbf{expectation} of a random variable \(X\) is the +\textbf{weighted average} of the values of \(X\), where the weights are +the probabilities of each value occurring. There are two equivalent ways +to compute the expectation: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Apply the weights one \emph{sample} at a time: + \[\mathbb{E}[X] = \sum_{\text{all possible } s} X(s) P(s)\]. +\item + Apply the weights one possible \emph{value} at a time: + \[\mathbb{E}[X] = \sum_{\text{all possible } x} x P(X=x)\] +\end{enumerate} + +The latter is more commonly used as we are usually just given the +distribution, not all possible samples. + +We want to emphasize that the expectation is a \emph{number}, not a +random variable. Expectation is a generalization of the average, and it +has the same units as the random variable. It is also the center of +gravity of the probability distribution histogram, meaning if we +simulate the variable many times, it is the long-run average of the +simulated values. + +\subsubsection{Example 1: Coin Toss}\label{example-1-coin-toss} + +Going back to our coin toss example, we define a random variable \(X\) +as: \[X = \begin{cases} + 1, \text{if the coin lands heads} \\ + 0, \text{if the coin lands tails} + \end{cases}\] + +We can calculate its expectation \(\mathbb{E}[X]\) using the second +method of applying the weights one possible value at a time: +\[\begin{align} + \mathbb{E}[X] &= \sum_{x} x P(X=x) \\ + &= 1 * 0.5 + 0 * 0.5 \\ + &= 0.5 +\end{align}\] + +Note that \(\mathbb{E}[X] = 0.5\) is not a possible value of \(X\); it's +an average. \textbf{The expectation of X does not need to be a possible +value of X}. + +\subsubsection{Example 2}\label{example-2-1} + +Consider the random variable \(X\): + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +\(x\) & \(P(X=x)\) \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +3 & 0.1 \\ +4 & 0.2 \\ +6 & 0.4 \\ +8 & 0.3 \\ +\end{longtable} + +To calculate it's expectation, \[\begin{align} + \mathbb{E}[X] &= \sum_{x} x P(X=x) \\ + &= 3 * 0.1 + 4 * 0.2 + 6 * 0.4 + 8 * 0.3 \\ + &= 0.3 + 0.8 + 2.4 + 2.4 \\ + &= 5.9 +\end{align}\] + +Again, note that \(\mathbb{E}[X] = 5.9\) is not a possible value of +\(X\); it's an average. \textbf{The expectation of X does not need to be +a possible value of X}. + +\subsection{Variance}\label{variance} + +The \textbf{variance} of a random variable is a measure of its chance +error. It is defined as the expected squared deviation from the +expectation of \(X\). Put more simply, variance asks: how far does \(X\) +typically vary from its average value, just by chance? What is the +spread of \(X\)'s distribution? 
+ +\[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]\] + +The units of variance are the square of the units of \(X\). To get it +back to the right scale, use the standard deviation of \(X\): +\[\text{SD}(X) = \sqrt{\text{Var}(X)}\] + +Like with expectation, \textbf{variance and standard deviation are +numbers, not random variables}! Variance helps us describe the +variability of a random variable. It is the expected squared error +between the random variable and its expected value. As you will see +shortly, we can use variance to help us quantify the chance error that +arises when using a sample \(X\) to estimate the population mean. + +By +\href{https://www.inferentialthinking.com/chapters/14/2/Variability.html\#Chebychev's-Bounds}{Chebyshev's +inequality}, which you saw in Data 8, no matter what the shape of the +distribution of \(X\) is, the vast majority of the probability lies in +the interval ``expectation plus or minus a few SDs.'' + +If we expand the square and use properties of expectation, we can +re-express variance as the \textbf{computational formula for variance}. + +\[\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\] + +This form is often more convenient to use when computing the variance of +a variable by hand, and it is also useful in Mean Squared Error +calculations, as \(\mathbb{E}[X^2] = \text{Var}(X)\) if \(X\) is +centered and \(E(X)=0\). + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Proof}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\[\begin{align} + \text{Var}(X) &= \mathbb{E}[(X-\mathbb{E}[X])^2] \\ + &= \mathbb{E}(X^2 - 2X\mathbb{E}(X) + (\mathbb{E}(X))^2) \\ + &= \mathbb{E}(X^2) - 2 \mathbb{E}(X)\mathbb{E}(X) +( \mathbb{E}(X))^2\\ + &= \mathbb{E}[X^2] - (\mathbb{E}[X])^2 +\end{align}\] + +\end{tcolorbox} + +How do we compute \(\mathbb{E}[X^2]\)? Any function of a random variable +is \emph{also} a random variable. That means that by squaring \(X\), +we've created a new random variable. To compute \(\mathbb{E}[X^2]\), we +can simply apply our definition of expectation to the random variable +\(X^2\). + +\[\mathbb{E}[X^2] = \sum_{x} x^2 P(X = x)\] + +\subsection{Example: Die}\label{example-die} + +Let \(X\) be the outcome of a single fair die roll. 
\(X\) is a random +variable defined as \[X = \begin{cases} + \frac{1}{6}, \text{if } x \in \{1,2,3,4,5,6\} \\ + 0, \text{otherwise} + \end{cases}\] + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-caution-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-caution-color}{\faFire}\hspace{0.5em}{What's the expectation, \(\mathbb{E}[X]?\)}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-caution-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\[ \begin{align} + \mathbb{E}[X] &= 1\big(\frac{1}{6}\big) + 2\big(\frac{1}{6}\big) + 3\big(\frac{1}{6}\big) + 4\big(\frac{1}{6}\big) + 5\big(\frac{1}{6}\big) + 6\big(\frac{1}{6}\big) \\ + &= \big(\frac{1}{6}\big)( 1 + 2 + 3 + 4 + 5 + 6) \\ + &= \frac{7}{2} + \end{align}\] + +\end{tcolorbox} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-caution-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-caution-color}{\faFire}\hspace{0.5em}{What's the variance, \(\text{Var}(X)?\)}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-caution-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Using Approach 1 (definition): \[\begin{align} + \text{Var}(X) &= \big(\frac{1}{6}\big)((1 - \frac{7}{2})^2 + (2 - \frac{7}{2})^2 + (3 - \frac{7}{2})^2 + (4 - \frac{7}{2})^2 + (5 - \frac{7}{2})^2 + (6 - \frac{7}{2})^2) \\ + &= \frac{35}{12} + \end{align}\] + +Using Approach 2 (property): +\[\mathbb{E}[X^2] = \sum_{x} x^2 P(X = x) = \frac{91}{6}\] +\[\text{Var}(X) = \frac{91}{6} - (\frac{7}{2})^2 = \frac{35}{12}\] + +\end{tcolorbox} + +We can summarize our discussion so far in the following diagram: + +\section{Sums of Random Variables}\label{sums-of-random-variables} + +Often, we will work with multiple random variables at the same time. A +function of a random variable is also a random variable. If you create +multiple random variables based on your sample, then functions of those +random variables are also random variables. + +For example, if \(X_1, X_2, ..., X_n\) are random variables, then so are +all of these: + +\begin{itemize} +\tightlist +\item + \(X_n^2\) +\item + \(\#\{i : X_i > 10\}\) +\item + \(\text{max}(X_1, X_2, ..., X_n)\) +\item + \(\frac{1}{n} \sum_{i=1}^n (X_i - c)^2\) +\item + \(\frac{1}{n} \sum_{i=1}^n X_i\) +\end{itemize} + +Many functions of random variables that we are interested in (e.g., +counts, means) involve sums of random variables, so let's dive deeper +into the properties of sums of random variables. + +\subsection{Properties of Expectation}\label{properties-of-expectation} + +Instead of simulating full distributions, we often just compute +expectation and variance directly. Recall the definition of expectation: +\[\mathbb{E}[X] = \sum_{x} x P(X=x)\] + +From it, we can derive some useful properties: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \textbf{Linearity of expectation}. 
The expectation of the linear
+  transformation \(aX+b\), where \(a\) and \(b\) are constants, is:
+\end{enumerate}
+
+\[\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\]
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Proof}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+\[\begin{align}
+  \mathbb{E}[aX+b] &= \sum_{x} (ax + b) P(X=x) \\
+  &= \sum_{x} (ax P(X=x) + bP(X=x)) \\
+  &= a\sum_{x} x P(X=x) + b\sum_{x}P(X=x)\\
+  &= a\mathbb{E}[X] + b * 1
+  \end{align}\]
+
+\end{tcolorbox}
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\setcounter{enumi}{1}
+\tightlist
+\item
+  Expectation is also linear in \emph{sums} of random variables.
+\end{enumerate}
+
+\[\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]\]
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Proof}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+\[\begin{align}
+  \mathbb{E}[X+Y] &= \sum_{s} (X+Y)(s) P(s) \\
+  &= \sum_{s} (X(s)P(s) + Y(s)P(s)) \\
+  &= \sum_{s} X(s)P(s) + \sum_{s} Y(s)P(s)\\
+  &= \mathbb{E}[X] + \mathbb{E}[Y]
+\end{align}\]
+
+\end{tcolorbox}
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\setcounter{enumi}{2}
+\tightlist
+\item
+  If \(g\) is a non-linear function, then in general,
+  \[\mathbb{E}[g(X)] \neq g(\mathbb{E}[X])\] For example, if \(X\) is -1
+  or 1 with equal probability, then \(\mathbb{E}[X] = 0\), but
+  \(\mathbb{E}[X^2] = 1 \neq 0\).
+\end{enumerate}
+
+\subsection{Properties of Variance}\label{properties-of-variance}
+
+Let's now get into the properties of variance. Recall the definition of
+variance: \[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]\]
+
+Combining it with the properties of expectation, we can derive some
+useful properties:
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\tightlist
+\item
+  Unlike expectation, variance is \emph{non-linear}. The variance of the
+  linear transformation \(aX+b\) is:
+  \[\text{Var}(aX+b) = a^2 \text{Var}(X)\]
+\end{enumerate}
+
+\begin{itemize}
+\tightlist
+\item
+  Subsequently, \[\text{SD}(aX+b) = |a| \text{SD}(X)\]
+\item
+  The full proof of this fact can be found using the definition of
+  variance. As general intuition, consider that \(aX+b\) scales the
+  variable \(X\) by a factor of \(a\), then shifts the distribution of
+  \(X\) by \(b\) units.
+\end{itemize}
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Proof}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+We know that \[\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\]
+
+In order to compute \(\text{Var}(aX+b)\), consider that a shift by \(b\)
+units does not affect spread, so \(\text{Var}(aX+b) = \text{Var}(aX)\).
+
+Then, \[\begin{align}
+  \text{Var}(aX+b) &= \text{Var}(aX) \\
+  &= E((aX)^2) - (E(aX))^2 \\
+  &= E(a^2 X^2) - (aE(X))^2\\
+  &= a^2 (E(X^2) - (E(X))^2) \\
+  &= a^2 \text{Var}(X)
+\end{align}\]
+
+\end{tcolorbox}
+
+\begin{itemize}
+\tightlist
+\item
+  Shifting the distribution by \(b\) \emph{does not} impact the
+  \emph{spread} of the distribution. Thus,
+  \(\text{Var}(aX+b) = \text{Var}(aX)\).
+\item
+  Scaling the distribution by \(a\) \emph{does} impact the spread of the
+  distribution.
+\end{itemize}
+
+\begin{enumerate}
+\def\labelenumi{\arabic{enumi}.}
+\setcounter{enumi}{1}
+\tightlist
+\item
+  Variance of sums of random variables is affected by the (in)dependence
+  of the random variables.
+  \[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X,Y)\]
+  \[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) \qquad \text{if } X, Y \text{ independent}\]
+\end{enumerate}
+
+\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Proof}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]
+
+The variance of a sum is affected by the dependence between the two
+random variables that are being added. Let's expand the definition of
+\(\text{Var}(X + Y)\) to see what's going on.
+
+To simplify the math, let \(\mu_x = \mathbb{E}[X]\) and
+\(\mu_y = \mathbb{E}[Y]\).
+
+\[ \begin{align}
+\text{Var}(X + Y) &= \mathbb{E}[(X+Y- \mathbb{E}(X+Y))^2] \\
+&= \mathbb{E}[((X - \mu_x) + (Y - \mu_y))^2] \\
+&= \mathbb{E}[(X - \mu_x)^2 + 2(X - \mu_x)(Y - \mu_y) + (Y - \mu_y)^2] \\
+&= \mathbb{E}[(X - \mu_x)^2] + \mathbb{E}[(Y - \mu_y)^2] + 2\mathbb{E}[(X - \mu_x)(Y - \mu_y)] \\
+&= \text{Var}(X) + \text{Var}(Y) + 2\mathbb{E}[(X - \mu_x)(Y - \mu_y)] \\
+&= \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X, Y)
+\end{align}\]
+
+\end{tcolorbox}
+
+\subsection{Covariance and
+Correlation}\label{covariance-and-correlation}
+
+We define the \textbf{covariance} of two random variables as the
+expected product of their deviations from their expectations. Put more
+simply, covariance generalizes variance to a \emph{pair} of random
+variables: the covariance of \(X\) with itself is just its variance.
+
+\[\text{Cov}(X, X) = \mathbb{E}[(X - \mathbb{E}[X])^2] = \text{Var}(X)\]
+
+\[\text{Cov}(X, Y) = \mathbb{E}[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])]\]
+
+We can treat the covariance as a measure of association. Remember the
+definition of correlation given when we first established SLR?
+
+\[r(X, Y) = \mathbb{E}\left[\left(\frac{X-\mathbb{E}[X]}{\text{SD}(X)}\right)\left(\frac{Y-\mathbb{E}[Y]}{\text{SD}(Y)}\right)\right] = \frac{\text{Cov}(X, Y)}{\text{SD}(X)\text{SD}(Y)}\]
+
+It turns out we've been quietly using covariance for some time now! If
+\(X\) and \(Y\) are independent, then \(\text{Cov}(X, Y) =0\) and
+\(r(X, Y) = 0\). Note, however, that the converse is not always true:
+\(X\) and \(Y\) could have \(\text{Cov}(X, Y) = r(X, Y) = 0\) but not be
+independent.
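+
+To build some intuition for these properties, here is a minimal NumPy
+sketch (not from the lecture; the constants \(a\) and \(b\) and the
+dependent pair of variables below are arbitrary choices for
+illustration) that checks the two variance identities by simulation.
+With a million draws, the simulated and theoretical values should agree
+closely.
+
+\begin{verbatim}
+import numpy as np
+
+rng = np.random.default_rng(0)
+n = 1_000_000
+
+# X is one roll of a fair die; Y adds a second, independent roll,
+# so X and Y are dependent and Cov(X, Y) is nonzero.
+X = rng.integers(1, 7, size=n)
+Y = X + rng.integers(1, 7, size=n)
+
+a, b = 3, 5
+
+# Var(aX + b) should be close to a^2 * Var(X).
+print(np.var(a * X + b), a**2 * np.var(X))
+
+# Var(X + Y) should be close to Var(X) + Var(Y) + 2 * Cov(X, Y).
+cov_xy = np.cov(X, Y, bias=True)[0, 1]
+print(np.var(X + Y), np.var(X) + np.var(Y) + 2 * cov_xy)
+\end{verbatim}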
+
+\subsection{Equal vs.~Identically Distributed
+vs.~i.i.d}\label{equal-vs.-identically-distributed-vs.-i.i.d}
+
+Suppose that we have two random variables \(X\) and \(Y\):
+
+\begin{itemize}
+\tightlist
+\item
+  \(X\) and \(Y\) are \textbf{equal} if \(X(s) = Y(s)\) for every sample
+  \(s\). Regardless of the exact sample drawn, \(X\) is always equal to
+  \(Y\).
+\item
+  \(X\) and \(Y\) are \textbf{identically distributed} if the
+  distribution of \(X\) is equal to the distribution of \(Y\). We say
+  ``\(X\) and \(Y\) are equal in distribution.'' That is, \(X\) and
+  \(Y\) take on the same set of possible values, and each of these
+  possible values is taken with the same probability. On any specific
+  sample \(s\), identically distributed variables do \emph{not}
+  necessarily share the same value. If \(X = Y\), then \(X\) and \(Y\)
+  are identically distributed; however, the converse is not true (ex:
+  \(Y = 7 - X\), where \(X\) is the roll of a die).
+\item
+  \(X\) and \(Y\) are \textbf{independent and identically distributed
+  (i.i.d)} if
+
+  \begin{enumerate}
+  \def\labelenumi{\arabic{enumi}.}
+  \tightlist
+  \item
+    The variables are identically distributed.
+  \item
+    Knowing the outcome of one variable does not influence our belief of
+    the outcome of the other.
+  \end{enumerate}
+\end{itemize}
+
+Note that in Data 100, you'll never be expected to prove that random
+variables are i.i.d.
+
+Now let's walk through an example. Let \(X_1\) and \(X_2\) be the
+numbers on two rolls of a fair die. \(X_1\) and \(X_2\) are i.i.d., so
+\(X_1\) and \(X_2\) have the same distribution. However, the sums
+\(Y = X_1 + X_1 = 2X_1\) and \(Z=X_1+X_2\) have different distributions
+but the same expectation.
+
+Note, though, that \(Y = 2X_1\) has a larger variance than
+\(Z = X_1 + X_2\).
+
+\subsection{Example: Bernoulli Random
+Variable}\label{example-bernoulli-random-variable}
+
+To get some practice with the formulas discussed so far, let's derive
+the expectation and variance for a Bernoulli(\(p\)) random variable. If
+\(X\) \textasciitilde{} Bernoulli(\(p\)),
+
+\(\mathbb{E}[X] = 1 \cdot p + 0 \cdot (1 - p) = p\)
+
+To compute the variance, we will use the computational formula. We first
+find that: \(\mathbb{E}[X^2] = 1^2 \cdot p + 0^2 \cdot (1 - p) = p\)
+
+From there, let's calculate our variance:
+\(\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = p - p^2 = p(1-p)\)
+
+\subsection{Example: Binomial Random
+Variable}\label{example-binomial-random-variable}
+
+Let \(Y\) \textasciitilde{} Binomial(\(n\), \(p\)). We can think of
+\(Y\) as being the sum of \(n\) i.i.d. Bernoulli(\(p\)) random
+variables. Mathematically, this translates to
+
+\[Y = \sum_{i=1}^n X_i\]
+
+where \(X_i\) is the indicator of a success on trial \(i\).
+
+Using linearity of expectation,
+
+\[\mathbb{E}[Y] = \sum_{i=1}^n \mathbb{E}[X_i] = np\]
+
+For the variance, since each \(X_i\) is independent of the others,
+\(\text{Cov}(X_i, X_j) = 0\) for \(i \neq j\), so
+
+\[\text{Var}(Y) = \sum_{i=1}^n \text{Var}(X_i) = np(1-p)\]
+
+\subsection{Summary}\label{summary-2}
+
+\begin{itemize}
+\tightlist
+\item
+  Let \(X\) be a random variable with distribution \(P(X=x)\).
+
+  \begin{itemize}
+  \tightlist
+  \item
+    \(\mathbb{E}[X] = \sum_{x} x P(X=x)\)
+  \item
+    \(\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\)
+  \end{itemize}
+\item
+  Let \(a\) and \(b\) be scalar values.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    \(\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\)
+  \item
+    \(\text{Var}(aX+b) = a^2 \text{Var}(X)\)
+  \end{itemize}
+\item
+  Let \(Y\) be another random variable.
+
+  \begin{itemize}
+  \tightlist
+  \item
+    \(\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]\)
+  \item
+    \(\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X,Y)\)
+  \end{itemize}
+\end{itemize}
+
+Note that \(\text{Cov}(X,Y)\) would equal 0 if \(X\) and \(Y\) are
+independent.
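+
+To close out this chapter, here is a short NumPy sketch (not part of
+the original notes; the parameters \(n = 20\), \(p = 0.5\), and the
+number of repetitions are arbitrary choices) that builds
+Binomial(\(n\), \(p\)) draws as sums of i.i.d.~Bernoulli(\(p\)) trials
+and compares the simulated mean and variance to \(np\) and \(np(1-p)\).
+
+\begin{verbatim}
+import numpy as np
+
+rng = np.random.default_rng(1)
+n, p = 20, 0.5       # Binomial parameters (chosen for illustration)
+reps = 100_000       # number of simulated values of Y
+
+# Each row holds n i.i.d. Bernoulli(p) trials;
+# summing across a row gives one Binomial(n, p) draw.
+trials = rng.random((reps, n)) < p
+Y = trials.sum(axis=1)
+
+# Compare the simulated mean and variance to np and np(1 - p).
+print(Y.mean(), n * p)           # both close to 10
+print(Y.var(), n * p * (1 - p))  # both close to 5
+\end{verbatim}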
+ +\bookmarksetup{startatroot} + +\chapter{Estimators, Bias, and +Variance}\label{estimators-bias-and-variance} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Explore commonly seen random variables like Bernoulli and Binomial + distributions +\item + Apply the Central Limit Theorem to approximate parameters of a + population +\item + Use sampled data to model an estimation of and infer the true + underlying distribution +\item + Estimate the true population distribution from a sample using the + bootstrapping technique +\end{itemize} + +\end{tcolorbox} + +Last time, we introduced the idea of random variables: numerical +functions of a sample. Most of our work in the last lecture was done to +build a background in probability and statistics. Now that we've +established some key ideas, we're in a good place to apply what we've +learned to our original goal -- understanding how the randomness of a +sample impacts the model design process. + +In this lecture, we will delve more deeply into the idea of fitting a +model to a sample. We'll explore how to re-express our modeling process +in terms of random variables and use this new understanding to steer +model complexity. + +\section{Common Random Variables}\label{common-random-variables} + +There are several cases of random variables that appear often and have +useful properties. Below are the ones we will explore further in this +course. The numbers in parentheses are the parameters of a random +variable, which are constants. Parameters define a random variable's +shape (i.e., distribution) and its values. For this lecture, we'll focus +more heavily on the bolded random variables and their special +properties, but you should familiarize yourself with all the ones listed +below: + +\begin{itemize} +\tightlist +\item + \textbf{Bernoulli(\(p\))} + + \begin{itemize} + \tightlist + \item + Takes on value 1 with probability \(p\), and 0 with probability + \((1 - p)\). + \item + AKA the ``indicator'' random variable. + \item + Let \(X\) be a Bernoulli(\(p\)) random variable. + + \begin{itemize} + \tightlist + \item + \(\mathbb{E}[X] = 1 * p + 0 * (1-p) = p\) + + \begin{itemize} + \tightlist + \item + \(\mathbb{E}[X^2] = 1^2 * p + 0 * (1-p) = p\) + \end{itemize} + \item + \(\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = p - p^2 = p(1-p)\) + \end{itemize} + \end{itemize} +\item + \textbf{Binomial(\(n\), \(p\))} + + \begin{itemize} + \tightlist + \item + Number of 1s in \(n\) independent Bernoulli(\(p\)) trials. + \item + Let \(Y\) be a Binomial(\(n\), \(p\)) random variable. + + \begin{itemize} + \tightlist + \item + The distribution of \(Y\) is given by the binomial formula, and we + can write \(Y = \sum_{i=1}^n X_i\) where: + + \begin{itemize} + \tightlist + \item + \(X_i\) s the indicator of success on trial i. \(X_i = 1\) if + trial i is a success, else 0. + \item + All \(X_i\) are i.i.d. and Bernoulli(\(p\)). 
+ \end{itemize} + \item + \(\mathbb{E}[Y] = \sum_{i=1}^n \mathbb{E}[X_i] = np\) + \item + \(\text{Var}(X) = \sum_{i=1}^n \text{Var}(X_i) = np(1-p)\) + + \begin{itemize} + \tightlist + \item + \(X_i\)'s are independent, so \(\text{Cov}(X_i, X_j) = 0\) for + all i, j. + \end{itemize} + \end{itemize} + \end{itemize} +\item + Uniform on a finite set of values + + \begin{itemize} + \tightlist + \item + The probability of each value is + \(\frac{1}{\text{(number of possible values)}}\). + \item + For example, a standard/fair die. + \end{itemize} +\item + Uniform on the unit interval (0, 1) + + \begin{itemize} + \tightlist + \item + Density is flat at 1 on (0, 1) and 0 elsewhere. + \end{itemize} +\item + Normal(\(\mu, \sigma^2\)), a.k.a Gaussian + + \begin{itemize} + \tightlist + \item + \(f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left( -\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^{\!2}\,\right)\) + \end{itemize} +\end{itemize} + +\subsection{Example}\label{example} + +Suppose you win cash based on the number of heads you get in a series of +20 coin flips. Let \(X_i = 1\) if the \(i\)-th coin is heads, \(0\) +otherwise. Which payout strategy would you choose? + +A. \(Y_A = 10 * X_1 + 10 * X_2\) + +B. \(Y_B = \sum_{i=1}^{20} X_i\) + +C. \(Y_C = 20 * X_1\) + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-caution-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-caution-color}{\faFire}\hspace{0.5em}{Solution}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-caution-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Let \(X_1, X_2, ... X_{20}\) be 20 i.i.d Bernoulli(0.5) random +variables. Since the \(X_i\)'s are independent, +\(\text{Cov}(X_i, X_j) = 0\) for all pairs \(i, j\). Additionally, Since +\(X_i\) is Bernoulli(0.5), we know that \(\mathbb{E}[X] = p = 0.5\) and +\(\text{Var}(X) = p(1-p) = 0.25\). We can calculate the following for +each scenario: + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}} + >{\raggedright\arraybackslash}p{(\columnwidth - 6\tabcolsep) * \real{0.2500}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +A. \(Y_A = 10 * X_1 + 10 * X_2\) +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +B. \(Y_B = \sum_{i=1}^{20} X_i\) +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +C. \(Y_C = 20 * X_1\) +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +Expectation & \(\mathbb{E}[Y_A] = 10 (0.5) + 10(0.5) = 10\) & +\(\mathbb{E}[Y_B] = 0.5 + ... + 0.5 = 10\) & +\(\mathbb{E}[Y_C] = 20(0.5) = 10\) \\ +Variance & \(\text{Var}(Y_A) = 10^2 (0.25) + 10^2 (0.25) = 50\) & +\(\text{Var}(Y_B) = 0.25 + ... + 0.25 = 5\) & +\(\text{Var}(Y_C) = 20^2 (0.25) = 100\) \\ +Standard Deviation & \(\text{SD}(Y_A) \approx 7.07\) & +\(\text{SD}(Y_B) \approx 2.24\) & \(\text{SD}(Y_C) = 10\) \\ +\end{longtable} + +As we can see, all the scenarios have the same expected value but +different variances. The higher the variance, the greater the risk and +uncertainty, so the ``right'' strategy depends on your personal +preference. 
Would you choose the ``safest'' option B, the most ``risky'' +option C, or somewhere in the middle (option A)? + +\end{tcolorbox} + +\section{Sample Statistics}\label{sample-statistics} + +Today, we've talked extensively about populations; if we know the +distribution of a random variable, we can reliably compute expectation, +variance, functions of the random variable, etc. Note that: + +\begin{itemize} +\tightlist +\item + The distribution of a \emph{population} describes how a random + variable behaves across \emph{all} individuals of interest. +\item + The distribution of a \emph{sample} describes how a random variable + behaves in a \emph{specific sample} from the population. +\end{itemize} + +In Data Science, however, we often do not have access to the whole +population, so we don't know its distribution. As such, we need to +collect a sample and use its distribution to estimate or infer +properties of the population. In cases like these, we can take several +samples of size \(n\) from the population (an easy way to do this is +using \texttt{df.sample(n,\ replace=True)}), and compute the mean of +each \emph{sample}. When sampling, we make the (big) assumption that we +sample uniformly at random \emph{with replacement} from the population; +each observation in our sample is a random variable drawn i.i.d from our +population distribution. Remember that our sample mean is a random +variable since it depends on our randomly drawn sample! On the other +hand, our population mean is simply a number (a fixed value). + +\subsection{Sample Mean}\label{sample-mean} + +Consider an i.i.d. sample \(X_1, X_2, ..., X_n\) drawn from a population +with mean 𝜇 and SD 𝜎. We define the sample mean as +\[\bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i\] + +The expectation of the sample mean is given by: \[\begin{align} + \mathbb{E}[\bar{X}_n] &= \frac{1}{n} \sum_{i=1}^n \mathbb{E}[X_i] \\ + &= \frac{1}{n} (n \mu) \\ + &= \mu +\end{align}\] + +The variance is given by: \[\begin{align} + \text{Var}(\bar{X}_n) &= \frac{1}{n^2} \text{Var}( \sum_{i=1}^n X_i) \\ + &= \frac{1}{n^2} \left( \sum_{i=1}^n \text{Var}(X_i) \right) \\ + &= \frac{1}{n^2} (n \sigma^2) = \frac{\sigma^2}{n} +\end{align}\] + +\(\bar{X}_n\) is approximately normally distributed by the Central Limit +Theorem (CLT). + +\subsection{Central Limit Theorem}\label{central-limit-theorem} + +In +\href{https://inferentialthinking.com/chapters/14/4/Central_Limit_Theorem.html?}{Data +8} and in the previous lecture, you encountered the \textbf{Central +Limit Theorem (CLT)}. This is a powerful theorem for estimating the +distribution of a population with mean \(\mu\) and standard deviation +\(\sigma\) from a collection of smaller samples. The CLT tells us that +if an i.i.d sample of size \(n\) is large, then the probability +distribution of the \textbf{sample mean} is \textbf{roughly normal} with +mean \(\mu\) and SD of \(\frac{\sigma}{\sqrt{n}}\). More generally, any +theorem that provides the rough distribution of a statistic and +\textbf{doesn't need the distribution of the population} is valuable to +data scientists! This is because we rarely know a lot about the +population. + +Importantly, the CLT assumes that each observation in our samples is +drawn i.i.d from the distribution of the population. In addition, the +CLT is accurate only when \(n\) is ``large'', but what counts as a +``large'' sample size depends on the specific distribution. 
If a +population is highly symmetric and unimodal, we could need as few as +\(n=20\); if a population is very skewed, we need a larger \(n\). If in +doubt, you can bootstrap the sample mean and see if the bootstrapped +distribution is bell-shaped. Classes like Data 140 investigate this idea +in great detail. + +For a more in-depth demo, check out +\href{https://onlinestatbook.com/stat_sim/sampling_dist/}{onlinestatbook}. + +\subsection{Using the Sample Mean to Estimate the Population +Mean}\label{using-the-sample-mean-to-estimate-the-population-mean} + +Now let's say we want to use the sample mean to \textbf{estimate} the +population mean, for example, the average height of Cal undergraduates. +We can typically collect a \textbf{single sample}, which has just one +average. However, what if we happened, by random chance, to draw a +sample with a different mean or spread than that of the population? We +might get a skewed view of how the population behaves (consider the +extreme case where we happen to sample the exact same value \(n\) +times!). + +For example, notice the difference in variation between these two +distributions that are different in sample size. The distribution with a +bigger sample size (\(n=800\)) is tighter around the mean than the +distribution with a smaller sample size (\(n=200\)). Try plugging in +these values into the standard deviation equation for the sample mean to +make sense of this! + +Applying the CLT allows us to make sense of all of this and resolve this +issue. By drawing many samples, we can consider how the sample +distribution varies across multiple subsets of the data. This allows us +to approximate the properties of the population without the need to +survey every single member. + +Given this potential variance, it is also important that we consider the +\textbf{average value and spread} of all possible sample means, and what +this means for how big \(n\) should be. For every sample size, the +expected value of the sample mean is the population mean: +\[\mathbb{E}[\bar{X}_n] = \mu\] We call the sample mean an +\textbf{unbiased estimator} of the population mean and will explore this +idea more in the next lecture. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Data 8 Recap: Square Root Law}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +The square root law +(\href{https://inferentialthinking.com/chapters/14/5/Variability_of_the_Sample_Mean.html\#the-square-root-law}{Data +8}) states that if you increase the sample size by a factor, the SD of +the sample mean decreases by the square root of the factor. +\[\text{SD}(\bar{X_n}) = \frac{\sigma}{\sqrt{n}}\] The sample mean is +more likely to be close to the population mean if we have a larger +sample size. + +\end{tcolorbox} + +\section{Prediction and Inference}\label{prediction-and-inference-1} + +At this point in the course, we've spent a great deal of time working +with models. When we first introduced the idea of modeling a few weeks +ago, we did so in the context of \textbf{prediction}: using models to +make \emph{accurate predictions} about unseen data. Another reason we +might build models is to better understand complex phenomena in the +world around us. 
\textbf{Inference} is the task of using a model to
+infer the true underlying relationships between the feature and response
+variables. For example, if we are working with a set of housing data,
+\emph{prediction} might ask: given the attributes of a house, how much
+is it worth? \emph{Inference} might ask: how much does having a local
+park impact the value of a house?
+
+A major goal of inference is to draw conclusions about the full
+population of data given only a random sample. To do this, we aim to
+estimate the value of a \textbf{parameter}, which is a numerical
+function of the \emph{population} (for example, the population mean
+\(\mu\)). We use a collected sample to construct a \textbf{statistic},
+which is a numerical function of the random \emph{sample} (for example,
+the sample mean \(\bar{X}_n\)). It's helpful to think ``p'' for
+``parameter'' and ``population,'' and ``s'' for ``sample'' and
+``statistic.''
+
+Since the sample represents a \emph{random} subset of the population,
+any statistic we generate will likely deviate from the true population
+parameter, and it \emph{could have been different}. We say that the
+sample statistic is an \textbf{estimator} of the true population
+parameter. Notationally, the population parameter is typically called
+\(\theta\), while its estimator is denoted by \(\hat{\theta}\).
+
+To address our inference question, we aim to construct estimators that
+closely estimate the value of the population parameter. We evaluate how
+``good'' an estimator is by answering three questions:
+
+\begin{itemize}
+\tightlist
+\item
+  How close is our answer to the parameter? \textbf{(Risk / MSE)}
+  \[ \text{MSE}(\hat{\theta}) = E[(\hat{\theta} - \theta)^2]\]
+\item
+  Do we get the right answer for the parameter, on average?
+  \textbf{(Bias)}
+  \[\text{Bias}(\hat{\theta}) = E[\hat{\theta} - \theta] = E[\hat{\theta}] - \theta\]
+\item
+  How variable is the answer? \textbf{(Variance)}
+  \[\text{Var}(\hat{\theta}) = E[(\hat{\theta} - E[\hat{\theta}])^2] \]
+\end{itemize}
+
+This relationship can be illustrated with an archery analogy. Imagine
+that the center of the target is \(\theta\) and each arrow corresponds
+to a separate parameter estimate \(\hat{\theta}\).
+
+Ideally, we want our estimator to have low bias and low variance, but
+how can we mathematically quantify that? See
+Section~\ref{sec-bias-variance-tradeoff} for more detail.
+
+\subsection{Prediction as Estimation}\label{prediction-as-estimation}
+
+Now that we've established the idea of an estimator, let's see how we
+can apply this learning to the modeling process. To do so, we'll take a
+moment to formalize our data collection and models in the language of
+random variables.
+
+Say we are working with an input variable, \(x\), and a response
+variable, \(Y\). We assume that \(Y\) and \(x\) are linked by some
+relationship \(g\); in other words, \(Y = g(x)\) where \(g\) represents
+some ``universal truth'' or ``law of nature'' that defines the
+underlying relationship between \(x\) and \(Y\). In the image below,
+\(g\) is denoted by the red line.
+
+As data scientists, however, we have no way of directly ``seeing'' the
+underlying relationship \(g\). The best we can do is collect observed
+data out in the real world to try to understand this relationship.
+Unfortunately, the data collection process will always have some
+inherent error (think of the randomness you might encounter when taking
+measurements in a scientific experiment).
We say that each observation +comes with some random error or \textbf{noise} term, \(\epsilon\) (read: +``epsilon''). This error is assumed to be a random variable with +expectation \(\mathbb{E}(\epsilon)=0\), variance +\(\text{Var}(\epsilon) = \sigma^2\), and be i.i.d. across each +observation. The existence of this random noise means that our +observations, \(Y(x)\), are \emph{random variables}. + +We can only observe our random sample of data, represented by the blue +points above. From this sample, we want to estimate the true +relationship \(g\). We do this by constructing the model \(\hat{Y}(x)\) +to estimate \(g\). + +\[\text{True relationship: } g(x)\] + +\[\text{Observed relationship: }Y = g(x) + \epsilon\] + +\[\text{Prediction: }\hat{Y}(x)\] + +When building models, it is also important to note that our choice of +features will also significantly impact our estimation. In the plot +below, you can see how the different models (green and purple) can lead +to different estimates. + +\subsubsection{Estimating a Linear +Relationship}\label{estimating-a-linear-relationship} + +If we assume that the true relationship \(g\) is linear, we can express +the response as \(Y = f_{\theta}(x)\), where our true relationship is +modeled by \[Y = g(x) + \epsilon\] +\[ f_{\theta}(x) = Y = \theta_0 + \sum_{j=1}^p \theta_j x_j + \epsilon\] + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-warning-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-warning-color}{\faExclamationTriangle}\hspace{0.5em}{Which expressions are random?}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-warning-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +In our two equations above, the true relationship +\(g(x) = \theta_0 + \sum_{j=1}^p \theta_j x_j\) is not random, but +\(\epsilon\) is random. Hence, \(Y = f_{\theta}(x)\) is also random. + +\end{tcolorbox} + +This true relationship has true, unobservable parameters \(\theta\), and +it has random noise \(\epsilon\), so we can never observe the true +relationship. Instead, the next best thing we can do is obtain a sample +\(\Bbb{X}\), \(\Bbb{Y}\) of \(n\) observed relationships, \((x, Y)\) and +use it to train a model and obtain an estimate of \(\hat{\theta}\) +\[\hat{Y}(x) = f_{\hat{\theta}}(x) = \hat{\theta_0} + \sum_{j=1}^p \hat{\theta_j} x_j\] + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-warning-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-warning-color}{\faExclamationTriangle}\hspace{0.5em}{Which expressions are random?}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-warning-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +In our estimating equation above, our sample \(\Bbb{X}\), \(\Bbb{Y}\) +are random (often due to human error). Hence, the estimates we calculate +from our samples \(\hat{\theta}\) are also random, so our predictor +\(\hat{Y}(x)\) is also random. + +\end{tcolorbox} + +Now taking a look at our original equations, we can see that they both +have differing sources of randomness. For our observed relationship, +\(Y = g(x) + \epsilon\), \(\epsilon\) represents errors which occur +during or after the observation or measurement process. 
For the +estimation model, the data we have is a random sample collected from the +population, which was constructed from decisions made before the +measurement process. + +\section{Bias-Variance Tradeoff}\label{sec-bias-variance-tradeoff} + +Recall the model and the data we generated from that model in the last +section: + +\[\text{True relationship: } g(x)\] + +\[\text{Observed relationship: }Y = g(x) + \epsilon\] + +\[\text{Prediction: }\hat{Y}(x)\] + +With this reformulated modeling goal, we can now revisit the +Bias-Variance Tradeoff from two lectures ago (shown below): + +In today's lecture, we'll explore a more mathematical version of the +graph you see above by introducing the terms model risk, observation +variance, model bias, and model variance. Eventually, we'll work our way +up to an updated version of the Bias-Variance Tradeoff graph that you +see below + +\subsection{Model Risk}\label{model-risk} + +\textbf{Model risk} is defined as the mean square prediction error of +the random variable \(\hat{Y}\). It is an expectation across \emph{all} +samples we could have possibly gotten when fitting the model, which we +can denote as random variables \(X_1, X_2, \ldots, X_n, Y\). Model risk +considers the model's performance on any sample that is theoretically +possible, rather than the specific data that we have collected. + +\[\text{model risk }=E\left[(Y-\hat{Y(x)})^2\right]\] + +What is the origin of the error encoded by model risk? Note that there +are two types of errors: + +\begin{itemize} +\tightlist +\item + Chance errors: happen due to randomness alone + + \begin{itemize} + \tightlist + \item + Source 1 \textbf{(Observation Variance)}: randomness in new + observations \(Y\) due to random noise \(\epsilon\) + \item + Source 2 \textbf{(Model Variance)}: randomness in the sample we used + to train the models, as samples \(X_1, X_2, \ldots, X_n, Y\) are + random + \end{itemize} +\item + \textbf{(Model Bias)}: non-random error due to our model being + different from the true underlying function \(g\) +\end{itemize} + +Recall the data-generating process we established earlier. There is a +true underlying relationship \(g\), observed data (with random noise) +\(Y\), and model \(\hat{Y}\). + +To better understand model risk, we'll zoom in on a single data point in +the plot above. + +Remember that \(\hat{Y}(x)\) is a random variable -- it is the +prediction made for \(x\) after being fit on the specific sample used +for training. If we had used a different sample for training, a +different prediction might have been made for this value of \(x\). To +capture this, the diagram above considers both the prediction +\(\hat{Y}(x)\) made for a particular random training sample, and the +\emph{expected} prediction across all possible training samples, +\(E[\hat{Y}(x)]\). + +We can use this simplified diagram to break down the prediction error +into smaller components. First, start by considering the error on a +single prediction, \(Y(x)-\hat{Y}(x)\). + +We can identify three components of this error. + +That is, the error can be written as: + +\[Y(x)-\hat{Y}(x) = \epsilon + \left(g(x)-E\left[\hat{Y}(x)\right]\right) + \left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)\] +\[\newline \] + +The model risk is the expected square of the expression above, +\(E\left[(Y(x)-\hat{Y}(x))^2\right]\). 
If we square both sides and then
+take the expectation, we will get the following decomposition of model
+risk:
+
+\[E\left[(Y(x)-\hat{Y}(x))^2\right] = E[\epsilon^2] + \left(g(x)-E\left[\hat{Y}(x)\right]\right)^2 + E\left[\left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)^2\right]\]
+
+It looks like we are missing some cross-product terms when squaring the
+right-hand side, but it turns out that all of those cross-product terms
+are zero. The detailed derivation is out of scope for this class, but a
+proof is included at the end of this note for your reference.
+
+This expression may look complicated at first glance, but we've actually
+already defined each term earlier in this lecture! Let's look at them
+term by term.
+
+\subsubsection{Observation Variance}\label{observation-variance}
+
+The first term in the above decomposition is \(E[\epsilon^2]\). Remember
+\(\epsilon\) is the random noise when observing \(Y\), with expectation
+\(\mathbb{E}(\epsilon)=0\) and variance
+\(\text{Var}(\epsilon) = \sigma^2\). We can show that \(E[\epsilon^2]\)
+is the variance of \(\epsilon\): \[
+\begin{align*}
+\text{Var}(\epsilon) &= E[\epsilon^2] - \left(E[\epsilon]\right)^2\\
+&= E[\epsilon^2] - 0^2\\
+&= \sigma^2.
+\end{align*}
+\]
+
+This term describes how variable the random error \(\epsilon\) (and
+\(Y\)) is for each observation. This is called the \textbf{observation
+variance}. It exists due to the randomness in our observations \(Y\). It
+is a form of \emph{chance error} we talked about in the Sampling
+lecture.
+
+\[\text{observation variance} = \text{Var}(\epsilon) = \sigma^2.\]
+
+The observation variance results from measurement errors when observing
+data or missing information that acts like noise. To reduce this
+observation variance, we could try to get more precise measurements, but
+it is often beyond the control of data scientists. Because of this, the
+observation variance \(\sigma^2\) is sometimes called ``irreducible
+error.''
+
+\subsubsection{Model Variance}\label{model-variance}
+
+We will then look at the last term:
+\(E\left[\left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)^2\right]\).
+If you recall the definition of variance from the last lecture, this is
+precisely \(\text{Var}(\hat{Y}(x))\). We call this the \textbf{model
+variance}.
+
+It describes how much the prediction \(\hat{Y}(x)\) tends to vary when
+we fit the model on different samples. Remember that the sample we
+collect can come out very differently, so the prediction \(\hat{Y}(x)\)
+will also differ. The model variance describes this variability due to
+the randomness in our sampling process. Like observation variance, it is
+also a form of \emph{chance error}---even though the sources of
+randomness are different.
+
+\[\text{model variance} = \text{Var}(\hat{Y}(x)) = E\left[\left(\hat{Y}(x) - E\left[\hat{Y}(x)\right]\right)^2\right]\]
+
+The main reason for large model variance is \textbf{overfitting}: the
+model pays so much attention to the details of our particular sample
+that small differences in the random sample lead to large differences
+in the fitted model. To remediate this, we try to reduce model
+complexity (e.g.~take out some features and limit the magnitude of
+estimated model coefficients) so that the model does not fit the noise.
+
+\subsubsection{Model Bias}\label{model-bias}
+
+Finally, the second term is
+\(\left(g(x)-E\left[\hat{Y}(x)\right]\right)^2\). What is this? The term
+\(E\left[\hat{Y}(x)\right] - g(x)\) is called the \textbf{model bias}.
+ +Remember that \(g(x)\) is the fixed underlying truth and \(\hat{Y}(x)\) +is our fitted model, which is random. Model bias therefore measures how +far off \(g(x)\) and \(\hat{Y}(x)\) are on average over all possible +samples. + +\[\text{model bias} = E\left[\hat{Y}(x) - g(x)\right] = E\left[\hat{Y}(x)\right] - g(x)\] + +The model bias is not random; it's an average measure for a specific +individual \(x\). If bias is positive, our model tends to overestimate +\(g(x)\); if it's negative, our model tends to underestimate \(g(x)\). +And if it's 0, we can say that our model is \textbf{unbiased}. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Unbiased Estimators}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +An \textbf{unbiased model} has a \(\text{model bias } = 0\). In other +words, our model predicts \(g(x)\) on average. + +Similarly, we can define bias for estimators like the mean. The sample +mean is an \textbf{unbiased estimator} of the population mean, as by +CLT, \(\mathbb{E}[\bar{X}_n] = \mu\). Therefore, the +\(\text{estimator bias } = \mathbb{E}[\bar{X}_n] - \mu = 0\). + +\end{tcolorbox} + +There are two main reasons for large model biases: + +\begin{itemize} +\tightlist +\item + Underfitting: our model is too simple for the data +\item + Lack of domain knowledge: we don't understand what features are useful + for the response variable +\end{itemize} + +To fix this, we increase model complexity (but we don't want to +overfit!) or consult domain experts to see which models make sense. You +can start to see a tradeoff here: if we increase model complexity, we +decrease the model bias, but we also risk increasing the model variance. + +\subsection{The Decomposition}\label{the-decomposition} + +To summarize: + +\begin{itemize} +\tightlist +\item + The \textbf{model risk}, + \(\mathbb{E}\left[(Y(x)-\hat{Y}(x))^2\right]\), is the mean squared + prediction error of the model. It is an expectation and is therefore a + \textbf{fixed number} (for a given x). +\item + The \textbf{observation variance}, \(\sigma^2\), is the variance of + the random noise in the observations. It describes how variable the + random error \(\epsilon\) is for each observation and \textbf{cannot + be addressed by modeling}. +\item + The \textbf{model bias}, \(\mathbb{E}\left[\hat{Y}(x)\right]-g(x)\), + is how ``off'' the \(\hat{Y}(x)\) is as an estimator of the true + underlying relationship \(g(x)\). +\item + The \textbf{model variance}, \(\text{Var}(\hat{Y}(x))\), describes how + much the prediction \(\hat{Y}(x)\) tends to vary when we fit the model + on different samples. +\end{itemize} + +The above definitions enable us to simplify the decomposition of model +risk before as: + +\[ E[(Y(x) - \hat{Y}(x))^2] = \sigma^2 + (E[\hat{Y}(x)] - g(x))^2 + \text{Var}(\hat{Y}(x)) \] +\[\text{model risk } = \text{observation variance} + (\text{model bias})^2 \text{+ model variance}\] + +This is known as the \textbf{bias-variance tradeoff}. What does it mean? +Remember that the model risk is a measure of the model's performance. +Our goal in building models is to keep model risk low; this means that +we will want to ensure that each component of model risk is kept at a +small value. 
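+
+To see the decomposition in action, here is a self-contained simulation
+sketch. It is not from the lecture: the quadratic \(g\), the noise
+level, the evaluation point, and the deliberately underfit linear model
+are all illustrative choices. It estimates each term at a single point
+\(x\) and checks that observation variance, squared model bias, and
+model variance add up to the simulated model risk.
+
+\begin{verbatim}
+import numpy as np
+
+rng = np.random.default_rng(3)
+
+def g(x):
+    """True underlying relationship (an illustrative choice)."""
+    return 1 + 2 * x ** 2
+
+sigma = 0.5          # SD of the observation noise epsilon
+n, reps = 30, 20_000 # training-sample size and number of simulated samples
+x0 = 0.8             # fixed point x at which we evaluate the decomposition
+
+preds, sq_errors = [], []
+for _ in range(reps):
+    # Draw a fresh training sample and fit a (deliberately underfit) line.
+    x = rng.uniform(0, 1, n)
+    y = g(x) + rng.normal(0, sigma, n)
+    theta = np.polyfit(x, y, deg=1)
+    y_hat = np.polyval(theta, x0)
+    preds.append(y_hat)
+
+    # Compare the prediction to a brand-new observation Y(x0).
+    y_new = g(x0) + rng.normal(0, sigma)
+    sq_errors.append((y_new - y_hat) ** 2)
+
+preds = np.array(preds)
+model_risk = np.mean(sq_errors)
+obs_var = sigma ** 2
+model_bias_sq = (preds.mean() - g(x0)) ** 2
+model_var = preds.var()
+
+# The two printed numbers should be close.
+print(model_risk, obs_var + model_bias_sq + model_var)
+\end{verbatim}
+
+Because a line cannot capture the curvature of \(g\), the squared bias
+term here is well above zero; refitting with \texttt{deg=2} should
+shrink the bias while slightly increasing the model variance.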
+ +Observation variance is an inherent, random part of the data collection +process. We aren't able to reduce the observation variance, so we'll +focus our attention on the model bias and model variance. + +In the Feature Engineering lecture, we considered the issue of +overfitting. We saw that the model's error or bias tends to decrease as +model complexity increases --- if we design a highly complex model, it +will tend to make predictions that are closer to the true relationship +\(g\). At the same time, model variance tends to \emph{increase} as +model complexity increases; a complex model may overfit to the training +data, meaning that small differences in the random samples used for +training lead to large differences in the fitted model. We have a +problem. To decrease model bias, we could increase the model's +complexity, which would lead to overfitting and an increase in model +variance. Alternatively, we could decrease model variance by decreasing +the model's complexity at the cost of increased model bias due to +underfitting. + +We need to strike a balance. Our goal in model creation is to use a +complexity level that is high enough to keep bias low, but not so high +that model variance is large. + +\section{{[}Bonus{]} Proof of Bias-Variance +Decomposition}\label{bonus-proof-of-bias-variance-decomposition} + +This section walks through the detailed derivation of the Bias-Variance +Decomposition in the Bias-Variance Tradeoff section above, and this +content is out of scope. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-color-frame, left=2mm, breakable, rightrule=.15mm, bottomrule=.15mm, opacityback=0, toprule=.15mm, leftrule=.75mm, arc=.35mm, colback=white] + +\vspace{-3mm}\textbf{Click to show}\vspace{3mm} + +We want to prove that the model risk can be decomposed as + +\[ +\begin{align*} +E\left[(Y(x)-\hat{Y}(x))^2\right] &= E[\epsilon^2] + \left(g(x)-E\left[\hat{Y}(x)\right]\right)^2 + E\left[\left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right)^2\right]. +\end{align*} +\] + +To prove this, we will first need the following lemma: + +If \(V\) and \(W\) are independent random variables then +\(E[VW] = E[V]E[W]\). + +We will prove this in the discrete finite case. Trust that it's true in +greater generality. + +The job is to calculate the weighted average of the values of \(VW\), +where the weights are the probabilities of those values. Here goes. + +\begin{align*} +E[VW] ~ &= ~ \sum_v\sum_w vwP(V=v \text{ and } W=w) \\ +&= ~ \sum_v\sum_w vwP(V=v)P(W=w) ~~~~ \text{by independence} \\ +&= ~ \sum_v vP(V=v)\sum_w wP(W=w) \\ +&= ~ E[V]E[W] +\end{align*} + +Now we go into the actual proof: + +\subsection{Goal}\label{goal} + +Decompose the model risk into recognizable components. + +\subsection{Step 1}\label{step-1} + +\[ +\begin{align*} +\text{model risk} ~ &= ~ E\left[\left(Y - \hat{Y}(x)\right)^2 \right] \\ +&= ~ E\left[\left(g(x) + \epsilon - \hat{Y}(x)\right)^2 \right] \\ +&= ~ E\left[\left(\epsilon + \left(g(x)- \hat{Y}(x)\right)\right)^2 \right] \\ +&= ~ E\left[\epsilon^2\right] + 2E\left[\epsilon \left(g(x)- \hat{Y}(x)\right)\right] + E\left[\left(g(x) - \hat{Y}(x)\right)^2\right]\\ +\end{align*} +\] + +On the right hand side: + +\begin{itemize} +\tightlist +\item + The first term is the observation variance \(\sigma^2\). 
+\item + The cross product term is 0 because \(\epsilon\) is independent of + \(g(x) - \hat{Y}(x)\) and \(E(\epsilon) = 0\) +\item + The last term is the mean squared difference between our predicted + value and the value of the true function at \(x\) +\end{itemize} + +\subsection{Step 2}\label{step-2} + +At this stage we have + +\[ +\text{model risk} ~ = ~ E\left[\epsilon^2\right] + E\left[\left(g(x) - \hat{Y}(x)\right)^2\right] +\] + +We don't yet have a good understanding of \(g(x) - \hat{Y}(x)\). But we +do understand the deviation +\(D_{\hat{Y}(x)} = \hat{Y}(x) - E\left[\hat{Y}(x)\right]\). We know that + +\begin{itemize} +\tightlist +\item + \(E\left[D_{\hat{Y}(x)}\right] ~ = ~ 0\) +\item + \(E\left[D_{\hat{Y}(x)}^2\right] ~ = ~ \text{model variance}\) +\end{itemize} + +So let's add and subtract \(E\left[\hat{Y}(x)\right]\) and see if that +helps. + +\[ +g(x) - \hat{Y}(x) ~ = ~ \left(g(x) - E\left[\hat{Y}(x)\right] \right) + \left(E\left[\hat{Y}(x)\right] - \hat{Y}(x)\right) +\] + +The first term on the right hand side is the model bias at \(x\). The +second term is \(-D_{\hat{Y}(x)}\). So + +\[ +g(x) - \hat{Y}(x) ~ = ~ \text{model bias} - D_{\hat{Y}(x)} +\] + +\subsection{Step 3}\label{step-3} + +Remember that the model bias at \(x\) is a constant, not a random +variable. Think of it as your favorite number, say 10. Then \[ +\begin{align*} +E\left[ \left(g(x) - \hat{Y}(x)\right)^2 \right] ~ &= ~ \text{model bias}^2 - 2(\text{model bias})E\left[D_{\hat{Y}(x)}\right] + E\left[D_{\hat{Y}(x)}^2\right] \\ +&= ~ \text{model bias}^2 - 0 + \text{model variance} \\ +&= ~ \text{model bias}^2 + \text{model variance} +\end{align*} +\] + +Again, the cross-product term is \(0\) because +\(E\left[D_{\hat{Y}(x)}\right] ~ = ~ 0\). + +\subsection{Step 4: Bias-Variance +Decomposition}\label{step-4-bias-variance-decomposition} + +In Step 2, we had: + +\[ +\text{model risk} ~ = ~ \text{observation variance} + E\left[\left(g(x) - \hat{Y}(x)\right)^2\right] +\] + +Step 3 showed: + +\[ +E\left[ \left(g(x) - \hat{Y}(x)\right)^2 \right] ~ = ~ \text{model bias}^2 + \text{model variance} +\] + +Thus, we have proven the bias-variance decomposition: + +\[ +\text{model risk} = \text{observation variance} + \text{model bias}^2 + \text{model variance}. +\] + +That is, + +\[ +E\left[(Y(x)-\hat{Y}(x))^2\right] = \sigma^2 + \left(E\left[\hat{Y}(x)\right] - g(x)\right)^2 + E\left[\left(\hat{Y}(x)-E\left[\hat{Y}(x)\right]\right)^2\right] +\] + +\end{tcolorbox} + +\bookmarksetup{startatroot} + +\chapter{Causal Inference and +Confounding}\label{causal-inference-and-confounding} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Construct confidence intervals for hypothesis testing using + bootstrapping +\item + Understand the assumptions we make and their impact on our regression + inference +\item + Explore ways to overcome issues of multicollinearity +\item + Compare regression correlation and causation +\end{itemize} + +\end{tcolorbox} + +Last time, we introduced the idea of random variables and how they +affect the data and model we construct. 
We also demonstrated the +decomposition of model risk from a fitted model and dived into the +bias-variance tradeoff. + +In this lecture, we will explore regression inference via hypothesis +testing, understand how to use bootstrapping under the right +assumptions, and consider the environment of understanding causality in +theory and in practice. + +\section{Parameter Inference: Interpreting Regression +Coefficients}\label{parameter-inference-interpreting-regression-coefficients} + +There are two main reasons why we build models: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \textbf{Prediction}: using our model to make accurate predictions + about unseen data +\item + \textbf{Inference}: using our model to draw conclusions about the + underlying relationship(s) between our features and response. We want + to understand the complex phenomena occurring in the world we live in. + While training is the process of fitting a model, inference is the + \emph{process of making predictions}. +\end{enumerate} + +Recall the framework we established in the last lecture. The +relationship between datapoints is given by \(Y = g(x) + \epsilon\), +where \(g(x)\) is the \emph{true underlying relationship}, and +\(\epsilon\) represents randomness. If we assume \(g(x)\) is linear, we +can express this relationship in terms of the unknown, true model +parameters \(\theta\). + +\[f_{\theta}(x) = g(x) + \epsilon = \theta_0 + \theta_1 x_1 + \ldots + \theta_p x_p + \epsilon\] + +Our model attempts to estimate each true population parameter +\(\theta_i\) using the sample estimates \(\hat{\theta}_i\) calculated +from the design matrix \(\Bbb{X}\) and response vector \(\Bbb{Y}\). + +\[f_{\hat{\theta}}(x) = \hat{\theta}_0 + \hat{\theta}_1 x_1 + \ldots + \hat{\theta}_p x_p\] + +Let's pause for a moment. At this point, we're very used to working with +the idea of a model parameter. But what exactly does each coefficient +\(\theta_i\) actually \emph{mean}? We can think of each \(\theta_i\) as +a \emph{slope} of the linear model. If all other variables are held +constant, a unit change in \(x_i\) will result in a \(\theta_i\) change +in \(f_{\theta}(x)\). Broadly speaking, a large value of \(\theta_i\) +means that the feature \(x_i\) has a large effect on the response; +conversely, a small value of \(\theta_i\) means that \(x_i\) has little +effect on the response. In the extreme case, if the true parameter +\(\theta_i\) is 0, then the feature \(x_i\) has \textbf{no effect} on +\(Y(x)\). + +If the true parameter \(\theta_i\) for a particular feature is 0, this +tells us something pretty significant about the world: there is no +underlying relationship between \(x_i\) and \(Y(x)\)! But how can we +test if a parameter is actually 0? As a baseline, we go through our +usual process of drawing a sample, using this data to fit a model, and +computing an estimate \(\hat{\theta}_i\). However, we also need to +consider that if our random sample comes out differently, we may find a +different result for \(\hat{\theta}_i\). To infer if the true parameter +\(\theta_i\) is 0, we want to draw our conclusion from the distribution +of \(\hat{\theta}_i\) estimates we could have drawn across all other +random samples. This is where +\href{https://inferentialthinking.com/chapters/11/Testing_Hypotheses.html}{hypothesis +testing} comes in handy! 
+ +To test if the true parameter \(\theta_i\) is 0, we construct a +\textbf{hypothesis test} where our null hypothesis states that the true +parameter \(\theta_i\) is 0, and the alternative hypothesis states that +the true parameter \(\theta_i\) is \emph{not} 0. If our p-value is +smaller than our cutoff value (usually p = 0.05), we reject the null +hypothesis in favor of the alternative hypothesis. + +\section{Review: Bootstrap +Resampling}\label{review-bootstrap-resampling} + +To determine the properties (e.g., variance) of the sampling +distribution of an estimator, we'd need access to the population. +Ideally, we'd want to consider all possible samples in the population, +compute an estimate for each sample, and study the distribution of those +estimates. + +However, this can be quite expensive and time-consuming. Even more +importantly, we don't have access to the population ------ we only have +\emph{one} random sample from the population. How can we consider all +possible samples if we only have one? + +Bootstrapping comes in handy here! With bootstrapping, we treat our +random sample as a ``population'' and resample from it \emph{with +replacement}. Intuitively, a random sample resembles the population (if +it is big enough), so a random \emph{resample} also resembles a random +sample of the population. When sampling, there are a couple things to +keep in mind: + +\begin{itemize} +\tightlist +\item + We need to sample the same way we constructed the original sample. + Typically, this involves taking a simple random sample with + replacement. +\item + New samples must be the same size as the original sample. We need to + accurately model the variability of our estimates. +\end{itemize} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-warning-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-warning-color}{\faExclamationTriangle}\hspace{0.5em}{Why must we resample \emph{with replacement}?}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-warning-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Given an original sample of size \(n\), we want a resample that has the +same size \(n\) as the original. Sampling \emph{without} replacement +will give us the original sample with shuffled rows. Hence, when we +calculate summary statistics like the average, our sample \emph{without} +replacement will always have the same average as the original sample, +defeating the purpose of a bootstrap. + +\end{tcolorbox} + +Bootstrap resampling is a technique for estimating the sampling +distribution of an estimator. To execute it, we can follow the +pseudocode below: + +\begin{verbatim} +collect a random sample of size n (called the bootstrap population) + +initiate a list of estimates + +repeat 10,000 times: + resample with replacement from the bootstrap population + apply estimator f to the resample + store in list + +list of estimates is the bootstrapped sampling distribution of f +\end{verbatim} + +How well does bootstrapping actually represent our population? The +bootstrapped sampling distribution of an estimator does not exactly +match the sampling distribution of that estimator, but it is often +close. Similarly, the variance of the bootstrapped distribution is often +close to the true variance of the estimator. 
The example below displays +the results of different bootstraps from a \emph{known} population using +a sample size of \(n=50\). + +In the real world, we don't know the population distribution. The center +of the bootstrapped distribution is the estimator applied to our +original sample, so we have no way of understanding the estimator's true +expected value; the center and spread of our bootstrap are +\emph{approximations}. The quality of our bootstrapped distribution also +depends on the quality of our original sample. If our original sample +was not representative of the population (like Sample 5 in the image +above), then the bootstrap is next to useless. In general, bootstrapping +works better for \emph{large samples}, when the population distribution +is \emph{not heavily skewed} (no outliers), and when the estimator is +\emph{``low variance''} (insensitive to extreme values). + +Although our bootstrapped sample distribution does not exactly match the +sampling distribution of the population, we can see that it is +relatively close. This demonstrates the benefit of bootstrapping ------ +without knowing the actual population distribution, we can still roughly +approximate the true slope for the model by using only a single random +sample of 20 cars. + +\section{Collinearity}\label{collinearity} + +\subsection{Hypothesis Testing Through Bootstrap: Snowy Plover +Demo}\label{hypothesis-testing-through-bootstrap-snowy-plover-demo} + +We can conduct the hypothesis testing described earlier through +\textbf{bootstrapping} (this equivalence can be proven through the +\href{https://stats.stackexchange.com/questions/179902/confidence-interval-p-value-duality-vs-frequentist-interpretation-of-cis}{duality +argument}, which is out of scope for this class). We use bootstrapping +to compute approximate 95\% confidence intervals for each \(\theta_i\). +If the interval doesn't contain 0, we reject the null hypothesis at the +p=5\% level. Otherwise, the data is consistent with the null, as the +true parameter \emph{could possibly} be 0. + +To show an example of this hypothesis testing process, we'll work with +the \href{https://www.audubon.org/field-guide/bird/snowy-plover}{snowy +plover} dataset throughout this section. The data are about the eggs and +newly hatched chicks of the Snowy Plover. The data were collected at the +Point Reyes National Seashore by a former +\href{https://openlibrary.org/books/OL2038693M/BLSS_the_Berkeley_interactive_statistical_system}{student +at Berkeley}. Here's a +\href{http://cescos.fau.edu/jay/eps/articles/snowyplover.html}{parent +bird and some eggs}. + +Note that \texttt{Egg\ Length} and \texttt{Egg\ Breadth} (widest +diameter) are measured in millimeters, and \texttt{Egg\ Weight} and +\texttt{Bird\ Weight} are measured in grams. For reference, a standard +paper clip weighs about one gram. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\NormalTok{eggs }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/snowy\_plover.csv"}\NormalTok{)} +\NormalTok{eggs.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& egg\_weight & egg\_length & egg\_breadth & bird\_weight \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 7.4 & 28.80 & 21.84 & 5.2 \\ +1 & 7.7 & 29.04 & 22.45 & 5.4 \\ +2 & 7.9 & 29.36 & 22.48 & 5.6 \\ +3 & 7.5 & 30.10 & 21.71 & 5.3 \\ +4 & 8.3 & 30.17 & 22.75 & 5.9 \\ +\end{longtable} + +Our goal will be to predict the weight of a newborn plover chick, which +we assume follows the true relationship \(Y = f_{\theta}(x)\) below. + +\[\text{bird\_weight} = \theta_0 + \theta_1 \text{egg\_weight} + \theta_2 \text{egg\_length} + \theta_3 \text{egg\_breadth} + \epsilon\] + +Note that for each \(i\), the parameter \(\theta_i\) is a fixed number, +but it is unobservable. We can only estimate it. The random error +\(\epsilon\) is also unobservable, but it is assumed to have expectation +0 and be independent and identically distributed across eggs. + +Say we wish to determine if the \texttt{egg\_weight} impacts the +\texttt{bird\_weight} of a chick -- we want to infer if \(\theta_1\) is +equal to 0. + +First, we define our hypotheses: + +\begin{itemize} +\tightlist +\item + \textbf{Null hypothesis}: the true parameter \(\theta_1\) is 0; any + variation is due to random chance. +\item + \textbf{Alternative hypothesis}: the true parameter \(\theta_1\) is + not 0. +\end{itemize} + +Next, we use our data to fit a model \(\hat{Y} = f_{\hat{\theta}}(x)\) +that approximates the relationship above. This gives us the +\textbf{observed value} of \(\hat{\theta}_1\) from our data. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sklearn.linear\_model }\ImportTok{import}\NormalTok{ LinearRegression} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} + +\NormalTok{X }\OperatorTok{=}\NormalTok{ eggs[[}\StringTok{"egg\_weight"}\NormalTok{, }\StringTok{"egg\_length"}\NormalTok{, }\StringTok{"egg\_breadth"}\NormalTok{]]} +\NormalTok{Y }\OperatorTok{=}\NormalTok{ eggs[}\StringTok{"bird\_weight"}\NormalTok{]} + +\NormalTok{model }\OperatorTok{=}\NormalTok{ LinearRegression()} +\NormalTok{model.fit(X, Y)} + +\CommentTok{\# This gives an array containing the fitted model parameter estimates} +\NormalTok{thetas }\OperatorTok{=}\NormalTok{ model.coef\_} + +\CommentTok{\# Put the parameter estimates in a nice table for viewing} +\NormalTok{display(pd.DataFrame(} +\NormalTok{ [model.intercept\_] }\OperatorTok{+} \BuiltInTok{list}\NormalTok{(model.coef\_),} +\NormalTok{ columns}\OperatorTok{=}\NormalTok{[}\StringTok{\textquotesingle{}theta\_hat\textquotesingle{}}\NormalTok{],} +\NormalTok{ index}\OperatorTok{=}\NormalTok{[}\StringTok{\textquotesingle{}intercept\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}egg\_weight\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}egg\_length\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}egg\_breadth\textquotesingle{}}\NormalTok{]} +\NormalTok{))} + +\BuiltInTok{print}\NormalTok{(}\StringTok{"RMSE"}\NormalTok{, np.mean((Y }\OperatorTok{{-}}\NormalTok{ model.predict(X)) }\OperatorTok{**} \DecValTok{2}\NormalTok{))} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& theta\_hat \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +intercept & -4.605670 \\ +egg\_weight & 0.431229 \\ +egg\_length & 0.066570 \\ +egg\_breadth & 0.215914 \\ +\end{longtable} + +\begin{verbatim} +RMSE 0.045470853802757547 +\end{verbatim} + +Our single sample of data gives us the value of +\(\hat{\theta}_1=0.431\). To get a sense of how this estimate might vary +if we were to draw different random samples, we will use +\href{https://inferentialthinking.com/chapters/13/2/Bootstrap.html?}{bootstrapping}. +As a refresher, to construct a bootstrap sample, we will draw a resample +from the collected data that: + +\begin{itemize} +\tightlist +\item + Has the same sample size as the collected data +\item + Is drawn with replacement (this ensures that we don't draw the exact + same sample every time!) +\end{itemize} + +We draw a bootstrap sample, use this sample to fit a model, and record +the result for \(\hat{\theta}_1\) on this bootstrapped sample. We then +repeat this process many times to generate a \textbf{bootstrapped +empirical distribution} of \(\hat{\theta}_1\). This gives us an estimate +of what the true distribution of \(\hat{\theta}_1\) across all possible +samples might look like. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Set a random seed so you generate the same random sample as staff} +\CommentTok{\# In the "real world", we wouldn\textquotesingle{}t do this} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\NormalTok{np.random.seed(}\DecValTok{1337}\NormalTok{)} + +\CommentTok{\# Set the sample size of each bootstrap sample} +\NormalTok{n }\OperatorTok{=} \BuiltInTok{len}\NormalTok{(eggs)} + +\CommentTok{\# Create a list to store all the bootstrapped estimates} +\NormalTok{estimates }\OperatorTok{=}\NormalTok{ []} + +\CommentTok{\# Generate a bootstrap resample from \textasciigrave{}eggs\textasciigrave{} and find an estimate for theta\_1 using this sample. } +\CommentTok{\# Repeat 10000 times.} +\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in} \BuiltInTok{range}\NormalTok{(}\DecValTok{10000}\NormalTok{):} + \CommentTok{\# draw a bootstrap sample} +\NormalTok{ bootstrap\_resample }\OperatorTok{=}\NormalTok{ eggs.sample(n, replace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{ X\_bootstrap }\OperatorTok{=}\NormalTok{ bootstrap\_resample[[}\StringTok{"egg\_weight"}\NormalTok{, }\StringTok{"egg\_length"}\NormalTok{, }\StringTok{"egg\_breadth"}\NormalTok{]]} +\NormalTok{ Y\_bootstrap }\OperatorTok{=}\NormalTok{ bootstrap\_resample[}\StringTok{"bird\_weight"}\NormalTok{]} + + \CommentTok{\# use bootstrapped sample to fit a model} +\NormalTok{ bootstrap\_model }\OperatorTok{=}\NormalTok{ LinearRegression()} +\NormalTok{ bootstrap\_model.fit(X\_bootstrap, Y\_bootstrap)} +\NormalTok{ bootstrap\_thetas }\OperatorTok{=}\NormalTok{ bootstrap\_model.coef\_} + + \CommentTok{\# record the result for theta\_1} +\NormalTok{ estimates.append(bootstrap\_thetas[}\DecValTok{0}\NormalTok{])} + +\CommentTok{\# calculate the 95\% confidence interval } +\NormalTok{lower }\OperatorTok{=}\NormalTok{ np.percentile(estimates, }\FloatTok{2.5}\NormalTok{, axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\NormalTok{upper }\OperatorTok{=}\NormalTok{ np.percentile(estimates, }\FloatTok{97.5}\NormalTok{, axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\NormalTok{conf\_interval }\OperatorTok{=}\NormalTok{ (lower, upper)} +\NormalTok{conf\_interval} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(np.float64(-0.2586481195684874), np.float64(1.103424385420405)) +\end{verbatim} + +Our bootstrapped 95\% confidence interval for \(\theta_1\) is +\([-0.259, 1.103]\). Immediately, we can see that 0 \emph{is} indeed +contained in this interval -- this means that we \emph{cannot} conclude +that \(\theta_1\) is non-zero! More formally, we fail to reject the null +hypothesis (that \(\theta_1\) is 0) under a 5\% p-value cutoff. + +We can repeat this process to construct 95\% confidence intervals for +the other parameters of the model. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{np.random.seed(}\DecValTok{1337}\NormalTok{)} + +\NormalTok{theta\_0\_estimates }\OperatorTok{=}\NormalTok{ []} +\NormalTok{theta\_1\_estimates }\OperatorTok{=}\NormalTok{ []} +\NormalTok{theta\_2\_estimates }\OperatorTok{=}\NormalTok{ []} +\NormalTok{theta\_3\_estimates }\OperatorTok{=}\NormalTok{ []} + + +\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in} \BuiltInTok{range}\NormalTok{(}\DecValTok{10000}\NormalTok{):} +\NormalTok{ bootstrap\_resample }\OperatorTok{=}\NormalTok{ eggs.sample(n, replace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{ X\_bootstrap }\OperatorTok{=}\NormalTok{ bootstrap\_resample[[}\StringTok{"egg\_weight"}\NormalTok{, }\StringTok{"egg\_length"}\NormalTok{, }\StringTok{"egg\_breadth"}\NormalTok{]]} +\NormalTok{ Y\_bootstrap }\OperatorTok{=}\NormalTok{ bootstrap\_resample[}\StringTok{"bird\_weight"}\NormalTok{]} + +\NormalTok{ bootstrap\_model }\OperatorTok{=}\NormalTok{ LinearRegression()} +\NormalTok{ bootstrap\_model.fit(X\_bootstrap, Y\_bootstrap)} +\NormalTok{ bootstrap\_theta\_0 }\OperatorTok{=}\NormalTok{ bootstrap\_model.intercept\_} +\NormalTok{ bootstrap\_theta\_1, bootstrap\_theta\_2, bootstrap\_theta\_3 }\OperatorTok{=}\NormalTok{ bootstrap\_model.coef\_} + +\NormalTok{ theta\_0\_estimates.append(bootstrap\_theta\_0)} +\NormalTok{ theta\_1\_estimates.append(bootstrap\_theta\_1)} +\NormalTok{ theta\_2\_estimates.append(bootstrap\_theta\_2)} +\NormalTok{ theta\_3\_estimates.append(bootstrap\_theta\_3)} + +\NormalTok{theta\_0\_lower, theta\_0\_upper }\OperatorTok{=}\NormalTok{ np.percentile(theta\_0\_estimates, }\FloatTok{2.5}\NormalTok{), np.percentile(theta\_0\_estimates, }\FloatTok{97.5}\NormalTok{)} +\NormalTok{theta\_1\_lower, theta\_1\_upper }\OperatorTok{=}\NormalTok{ np.percentile(theta\_1\_estimates, }\FloatTok{2.5}\NormalTok{), np.percentile(theta\_1\_estimates, }\FloatTok{97.5}\NormalTok{)} +\NormalTok{theta\_2\_lower, theta\_2\_upper }\OperatorTok{=}\NormalTok{ np.percentile(theta\_2\_estimates, }\FloatTok{2.5}\NormalTok{), np.percentile(theta\_2\_estimates, }\FloatTok{97.5}\NormalTok{)} +\NormalTok{theta\_3\_lower, theta\_3\_upper }\OperatorTok{=}\NormalTok{ np.percentile(theta\_3\_estimates, }\FloatTok{2.5}\NormalTok{), np.percentile(theta\_3\_estimates, }\FloatTok{97.5}\NormalTok{)} + +\CommentTok{\# Make a nice table to view results} +\NormalTok{pd.DataFrame(\{}\StringTok{"lower"}\NormalTok{:[theta\_0\_lower, theta\_1\_lower, theta\_2\_lower, theta\_3\_lower], }\StringTok{"upper"}\NormalTok{:[theta\_0\_upper, }\OperatorTok{\textbackslash{}} +\NormalTok{ theta\_1\_upper, theta\_2\_upper, theta\_3\_upper]\}, index}\OperatorTok{=}\NormalTok{[}\StringTok{"theta\_0"}\NormalTok{, }\StringTok{"theta\_1"}\NormalTok{, }\StringTok{"theta\_2"}\NormalTok{, }\StringTok{"theta\_3"}\NormalTok{])} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& lower & upper \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +theta\_0 & -15.278542 & 5.161473 \\ +theta\_1 & -0.258648 & 1.103424 \\ +theta\_2 & -0.099138 & 0.208557 \\ +theta\_3 & -0.257141 & 0.758155 \\ +\end{longtable} + +Something's off here. Notice that 0 is included in the 95\% confidence +interval for \emph{every} parameter of the model. Using the +interpretation we outlined above, this would suggest that we can't say +for certain that \emph{any} of the input variables impact the response +variable! 
This makes it seem like our model can't make any predictions +-- and yet, each model we fit in our bootstrap experiment above could +very much make predictions of \(Y\). + +How can we explain this result? Think back to how we first interpreted +the parameters of a linear model. We treated each \(\theta_i\) as a +slope, where a unit increase in \(x_i\) leads to a \(\theta_i\) increase +in \(Y\), \textbf{if all other variables are held constant}. It turns +out that this last assumption is very important. If variables in our +model are somehow related to one another, then it might not be possible +to have a change in one of them while holding the others constant. This +means that our interpretation framework is no longer valid! In the +models we fit above, we incorporated \texttt{egg\_length}, +\texttt{egg\_breadth}, and \texttt{egg\_weight} as input variables. +These variables are very likely related to one another -- an egg with +large \texttt{egg\_length} and \texttt{egg\_breadth} will likely be +heavy in \texttt{egg\_weight}. This means that the model parameters +cannot be meaningfully interpreted as slopes. + +To support this conclusion, we can visualize the relationships between +our feature variables. Notice the strong positive association between +the features. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\NormalTok{sns.pairplot(eggs[[}\StringTok{"egg\_length"}\NormalTok{, }\StringTok{"egg\_breadth"}\NormalTok{, }\StringTok{"egg\_weight"}\NormalTok{, }\StringTok{\textquotesingle{}bird\_weight\textquotesingle{}}\NormalTok{]])}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{inference_causality/inference_causality_files/figure-pdf/cell-6-output-1.pdf} + +This issue is known as \textbf{collinearity}, sometimes also called +\textbf{multicollinearity}. Collinearity occurs when one feature can be +predicted fairly accurately by a linear combination of the other +features, which happens when one feature is highly correlated with the +others. + +Why is collinearity a problem? Its consequences span several aspects of +the modeling process: + +\begin{itemize} +\tightlist +\item + \textbf{Inference}: Slopes can't be interpreted for an inference task. +\item + \textbf{Model Variance}: If features strongly influence one another, + even small changes in the sampled data can lead to large changes in + the estimated slopes. +\item + \textbf{Unique Solution}: If one feature is a linear combination of + the other features, the design matrix will not be full rank, and + \(\mathbb{X}^{\top}\mathbb{X}\) is not invertible. This means that + least squares does not have a unique solution. See + \href{https://ds100.org/course-notes/ols/ols.html\#bonus-uniqueness-of-the-solution}{this + section} of Course Note 12 for more on this. +\end{itemize} + +The take-home point is that we need to be careful with what features we +select for modeling. If two features likely encode similar information, +it is often a good idea to choose only one of them as an input variable. 
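One quick way to see the collinearity numerically, before fitting anything, is to compute the pairwise correlations between the candidate features. The short sketch below is illustrative rather than part of the original analysis: it assumes the same snowy plover CSV loaded earlier in this section and uses the standard \texttt{pandas} \texttt{.corr()} method (the name \texttt{feature\_corr} is just a placeholder).

\begin{verbatim}
# Illustrative collinearity check: a sketch assuming the snowy plover CSV
# used earlier in this section is available at the same path.
import pandas as pd

eggs = pd.read_csv("data/snowy_plover.csv")

# Pairwise (Pearson) correlations near 1 mean that one feature is roughly
# a linear function of another, which is the collinearity issue described above.
feature_corr = eggs[["egg_weight", "egg_length", "egg_breadth"]].corr()
print(feature_corr)
\end{verbatim}

If two of these correlations come out close to 1, keeping only one of the involved features (as the simpler model in the next subsection does with \texttt{egg\_weight}) is usually a better choice than trying to interpret both coefficients at once.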
+ +\subsection{A Simpler Model}\label{a-simpler-model} + +Let us now consider a more interpretable model: we instead assume a true +relationship using only egg weight: + +\[f_\theta(x) = \theta_0 + \theta_1 \text{egg\_weight} + \epsilon\] + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sklearn.linear\_model }\ImportTok{import}\NormalTok{ LinearRegression} +\NormalTok{X\_int }\OperatorTok{=}\NormalTok{ eggs[[}\StringTok{"egg\_weight"}\NormalTok{]]} +\NormalTok{Y\_int }\OperatorTok{=}\NormalTok{ eggs[}\StringTok{"bird\_weight"}\NormalTok{]} + +\NormalTok{model\_int }\OperatorTok{=}\NormalTok{ LinearRegression()} + +\NormalTok{model\_int.fit(X\_int, Y\_int)} + +\CommentTok{\# This gives an array containing the fitted model parameter estimates} +\NormalTok{thetas\_int }\OperatorTok{=}\NormalTok{ model\_int.coef\_} + +\CommentTok{\# Put the parameter estimates in a nice table for viewing} +\NormalTok{pd.DataFrame(\{}\StringTok{"theta\_hat"}\NormalTok{:[model\_int.intercept\_, thetas\_int[}\DecValTok{0}\NormalTok{]]\}, index}\OperatorTok{=}\NormalTok{[}\StringTok{"theta\_0"}\NormalTok{, }\StringTok{"theta\_1"}\NormalTok{])} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +& theta\_hat \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +theta\_0 & -0.058272 \\ +theta\_1 & 0.718515 \\ +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} + +\CommentTok{\# Set a random seed so you generate the same random sample as staff} +\CommentTok{\# In the "real world", we wouldn\textquotesingle{}t do this} +\NormalTok{np.random.seed(}\DecValTok{1337}\NormalTok{)} + +\CommentTok{\# Set the sample size of each bootstrap sample} +\NormalTok{n }\OperatorTok{=} \BuiltInTok{len}\NormalTok{(eggs)} + +\CommentTok{\# Create a list to store all the bootstrapped estimates} +\NormalTok{estimates\_int }\OperatorTok{=}\NormalTok{ []} + +\CommentTok{\# Generate a bootstrap resample from \textasciigrave{}eggs\textasciigrave{} and find an estimate for theta\_1 using this sample. 
} +\CommentTok{\# Repeat 10000 times.} +\ControlFlowTok{for}\NormalTok{ i }\KeywordTok{in} \BuiltInTok{range}\NormalTok{(}\DecValTok{10000}\NormalTok{):} +\NormalTok{ bootstrap\_resample\_int }\OperatorTok{=}\NormalTok{ eggs.sample(n, replace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{ X\_bootstrap\_int }\OperatorTok{=}\NormalTok{ bootstrap\_resample\_int[[}\StringTok{"egg\_weight"}\NormalTok{]]} +\NormalTok{ Y\_bootstrap\_int }\OperatorTok{=}\NormalTok{ bootstrap\_resample\_int[}\StringTok{"bird\_weight"}\NormalTok{]} + +\NormalTok{ bootstrap\_model\_int }\OperatorTok{=}\NormalTok{ LinearRegression()} +\NormalTok{ bootstrap\_model\_int.fit(X\_bootstrap\_int, Y\_bootstrap\_int)} +\NormalTok{ bootstrap\_thetas\_int }\OperatorTok{=}\NormalTok{ bootstrap\_model\_int.coef\_} + +\NormalTok{ estimates\_int.append(bootstrap\_thetas\_int[}\DecValTok{0}\NormalTok{])} + +\NormalTok{plt.figure(dpi}\OperatorTok{=}\DecValTok{120}\NormalTok{)} +\NormalTok{sns.histplot(estimates\_int, stat}\OperatorTok{=}\StringTok{"density"}\NormalTok{)} +\NormalTok{plt.xlabel(}\VerbatimStringTok{r"$\textbackslash{}hat\{\textbackslash{}theta\}\_1$"}\NormalTok{)} +\NormalTok{plt.title(}\VerbatimStringTok{r"Bootstrapped estimates $\textbackslash{}hat\{\textbackslash{}theta\}\_1$ Under the Interpretable Model"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{inference_causality/inference_causality_files/figure-pdf/cell-8-output-1.pdf} + +Notice how the interpretable model performs almost as well as our other +model: + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sklearn.metrics }\ImportTok{import}\NormalTok{ mean\_squared\_error} + +\NormalTok{rmse }\OperatorTok{=}\NormalTok{ mean\_squared\_error(Y, model.predict(X))} +\NormalTok{rmse\_int }\OperatorTok{=}\NormalTok{ mean\_squared\_error(Y\_int, model\_int.predict(X\_int))} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f\textquotesingle{}RMSE of Original Model: }\SpecialCharTok{\{}\NormalTok{rmse}\SpecialCharTok{\}}\SpecialStringTok{\textquotesingle{}}\NormalTok{)} +\BuiltInTok{print}\NormalTok{(}\SpecialStringTok{f\textquotesingle{}RMSE of Interpretable Model: }\SpecialCharTok{\{}\NormalTok{rmse\_int}\SpecialCharTok{\}}\SpecialStringTok{\textquotesingle{}}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +RMSE of Original Model: 0.045470853802757547 +RMSE of Interpretable Model: 0.04649394137555684 +\end{verbatim} + +Yet, the confidence interval for the true parameter \(\theta_{1}\) does +not contain zero. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{lower\_int }\OperatorTok{=}\NormalTok{ np.percentile(estimates\_int, }\FloatTok{2.5}\NormalTok{)} +\NormalTok{upper\_int }\OperatorTok{=}\NormalTok{ np.percentile(estimates\_int, }\FloatTok{97.5}\NormalTok{)} + +\NormalTok{conf\_interval\_int }\OperatorTok{=}\NormalTok{ (lower\_int, upper\_int)} +\NormalTok{conf\_interval\_int} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(np.float64(0.6029335250209632), np.float64(0.8208401738546208)) +\end{verbatim} + +In retrospect, it's no surprise that the weight of an egg best predicts +the weight of a newly-hatched chick. + +A model with highly correlated variables prevents us from interpreting +how the variables are related to the prediction. + +\subsection{Reminder: Assumptions +Matter}\label{reminder-assumptions-matter} + +Keep the following in mind: All inference assumes that the regression +model holds. 
+ +\begin{itemize} +\tightlist +\item + If the model doesn't hold, the inference might not be valid. +\item + If the + \href{https://inferentialthinking.com/chapters/13/3/Confidence_Intervals.html?highlight=p\%20value\%20confidence\%20interval\#care-in-using-the-bootstrap-percentile-method}{assumptions + of the bootstrap} don't hold\ldots{} + + \begin{itemize} + \tightlist + \item + Sample size n is large + \item + Sample is representative of population distribution (drawn i.i.d., + unbiased) + \end{itemize} + + \ldots then the results of the bootstrap might not be valid. +\end{itemize} + +\section{{[}Bonus Content{]}}\label{bonus-content} + +Note: the content in this section is out of scope. + +\subsection{Prediction vs Causation}\label{prediction-vs-causation} + +The difference between correlation/prediction vs.~causation is best +illustrated through examples. + +Some questions about \textbf{correlation / prediction} include: + +\begin{itemize} +\tightlist +\item + Are homes with granite countertops worth more money? +\item + Is college GPA higher for students who win a certain scholarship? +\item + Are breastfed babies less likely to develop asthma? +\item + Do cancer patients given some aggressive treatment have a higher + 5-year survival rate? +\item + Are people who smoke more likely to get cancer? +\end{itemize} + +While these may sound like causal questions, they are not! Questions +about \textbf{causality} are about the \textbf{effects} of +\textbf{interventions} (not just passive observation). For example: + +\begin{itemize} +\tightlist +\item + How much do granite countertops \textbf{raise} the value of a house? +\item + Does getting the scholarship \textbf{improve} students' GPAs? +\item + Does breastfeeding \textbf{protect} babies against asthma? +\item + Does the treatment \textbf{improve} cancer survival? +\item + Does smoking \textbf{cause} cancer? +\end{itemize} + +Note, however, that regression coefficients are sometimes called +``effects'', which can be deceptive! + +When using data alone, \textbf{predictive questions} (i.e., are +breastfed babies healthier?) can be answered, but \textbf{causal +questions} (i.e., does breastfeeding improve babies' health?) cannot. +The reason for this is that there are many possible causes for our +predictive question. For example, possible explanations for why +breastfed babies are healthier on average include: + +\begin{itemize} +\tightlist +\item + \textbf{Causal effect:} breastfeeding makes babies healthier +\item + \textbf{Reverse causality:} healthier babies more likely to + successfully breastfeed +\item + \textbf{Common cause:} healthier / richer parents have healthier + babies and are more likely to breastfeed +\end{itemize} + +We cannot tell which explanations are true (or to what extent) just by +observing (\(x\),\(y\)) pairs. Additionally, causal questions implicitly +involve \textbf{counterfactuals}, events that didn't happen. For +example, we could ask, \textbf{would} the \textbf{same} breastfed babies +have been less healthy \textbf{if} they hadn't been breastfed? +Explanation 1 from above implies they would be, but explanations 2 and 3 +do not. + +\subsection{Confounders}\label{confounders} + +Let T represent a treatment (for example, alcohol use) and Y represent +an outcome (for example, lung cancer). + +A \textbf{confounder} is a variable that affects both T and Y, +distorting the correlation between them. Using the example above, rich +parents could be a confounder for breastfeeding and a baby's health. 
+Confounders can be a measured covariate (a feature) or an unmeasured +variable we don't know about, and they generally cause problems, as the +relationship between T and Y is affected by data we cannot see. We +commonly \emph{assume that all confounders are observed} (this is also +called \textbf{ignorability}). + +\subsection{How to perform causal +inference?}\label{how-to-perform-causal-inference} + +In a \textbf{randomized experiment}, participants are randomly assigned +into two groups: treatment and control. A treatment is applied +\emph{only} to the treatment group. We assume ignorability and gather as +many measurements as possible so that we can compare them between the +control and treatment groups to determine whether or not the treatment +has a true effect or is just a confounding factor. + +However, often, randomly assigning treatments is impractical or +unethical. For example, assigning a treatment of cigarettes to test the +effect of smoking on the lungs would not only be impractical but also +unethical. + +An alternative to bypass this issue is to utilize \textbf{observational +studies}. This can be done by obtaining two participant groups separated +based on some identified treatment variable. Unlike randomized +experiments, however, we cannot assume ignorability here: the +participants could have separated into two groups based on other +covariates! In addition, there could also be unmeasured confounders. + +\bookmarksetup{startatroot} + +\chapter{SQL I}\label{sql-i} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Recognizing situations where we need ``bigger'' tools for manipulating + data +\item + Write basic SQL queries using \texttt{SELECT}, \texttt{FROM}, + \texttt{WHERE}, \texttt{ORDER\ BY}, \texttt{LIMIT}, and + \texttt{OFFSET} +\item + Perform aggregations using \texttt{GROUP\ BY} +\end{itemize} + +\end{tcolorbox} + +So far in the course, we have made our way through the entire data +science lifecycle: we learned how to load and explore a dataset, +formulate questions, and use the tools of prediction and inference to +come up with answers. For the remaining weeks of the semester, we are +going to make a second pass through the lifecycle, this time with a +different set of tools, ideas, and abstractions. + +\section{Databases}\label{databases} + +With this goal in mind, let's go back to the very beginning of the +lifecycle. We first started our work in data analysis by looking at the +\texttt{pandas} library, which offered us powerful tools to manipulate +tabular data stored in (primarily) CSV files. CSVs work well when +analyzing relatively small datasets (less than 10GB) that don't need to +be shared across many users. In research and industry, however, data +scientists often need to access enormous bodies of data that cannot be +easily stored in a CSV format. Collaborating with others when working +with CSVs can also be tricky ------ a real-world data scientist may run +into problems when multiple users try to make modifications or more dire +security issues arise regarding who should and should not have access to +the data. 
+ +A \textbf{database} is a large, organized collection of data. Databases +are administered by \textbf{Database Management Systems (DBMS)}, which +are software systems that store, manage, and facilitate access to one or +more databases. Databases help mitigate many of the issues that come +with using CSVs for data storage: they provide reliable storage that can +survive system crashes or disk failures, are optimized to compute on +data that does not fit into memory, and contain special data structures +to improve performance. Using databases rather than CSVs offers further +benefits from the standpoint of data management. A DBMS can apply +settings that configure how data is organized, block certain data +anomalies (for example, enforcing non-negative weights or ages), and +determine who is allowed access to the data. It can also ensure safe +concurrent operations where multiple users reading and writing to the +database will not lead to fatal errors. Below, you can see the +functionality of the different types of data storage and management +architectures. In data science, common large-scale DBMS systems used are +Google BigQuery, Amazon Redshift, Snowflake, Databricks, Microsoft SQL +Server, and more. To learn more about these, consider taking +\href{https://www.data101.org/sp24/}{Data 101}! + +As you may have guessed, we can't use our usual \texttt{pandas} methods +to work with data in a database. Instead, we'll turn to Structured Query +Language. + +\section{Intro to SQL}\label{intro-to-sql} + +\textbf{Structured Query Language}, or \textbf{SQL} (commonly pronounced +``sequel,'' though this is the subject of +\href{https://patorjk.com/blog/2012/01/26/pronouncing-sql-s-q-l-or-sequel/}{fierce +debate}), is a special programming language designed to communicate with +databases, and it is the dominant language/technology for working with +data. You may have encountered it in classes like CS 61A or Data C88C +before, and you likely will encounter it in the future. It is a language +of tables: all inputs and outputs are tables. Unlike Python, it is a +\textbf{declarative programming language} -- this means that rather than +writing the exact logic needed to complete a task, a piece of SQL code +``declares'' what the desired final output should be and leaves the +program to determine what logic should be implemented. This logic +differs depending on the SQL code itself or on the system it's running +on (ie. \href{https://www.mongodb.com/}{MongoDB}, +\href{https://www.sqlite.org/}{SQLite}, +\href{https://duckdb.org/}{DuckDB}, etc.). Most systems don't follow the +standards, and every system you work with will be a little different. + +For the purposes of Data 100, we use SQLite or DuckDB. SQLite is an +easy-to-use library that allows users to directly manipulate a database +file or an in-memory database with a simplified version of SQL. It's +commonly used to store data for small apps on mobile devices and is +optimized for simplicity and speed of simple data tasks. DuckDB is an +easy-to-use library that lets you directly manipulate a database file, +collection of table formatted files (e.g., CSV), or in-memory +\texttt{pandas} \texttt{DataFrame}s using a more complete version of +SQL. It's optimized for simplicity and speed of advanced data analysis +tasks and is becoming increasingly popular for data analysis tasks on +large datasets. + +It is important to reiterate that SQL is an entirely different language +from Python. 
However, Python \emph{does} have special engines that allow +us to run SQL code in a Jupyter notebook. While this is typically not +how SQL is used outside of an educational setting, we will use this +workflow to illustrate how SQL queries are constructed using the tools +we've already worked with this semester. You will learn more about how +to run SQL queries in Jupyter in an upcoming lab and homework. + +The syntax below will seem unfamiliar to you; for now, just focus on +understanding the output displayed. We will clarify the SQL code in a +bit. + +To start, we'll look at a database called \texttt{example\_duck.db} and +connect to it using DuckDB. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Load the SQL Alchemy Python library and DuckDB} +\ImportTok{import}\NormalTok{ sqlalchemy} +\ImportTok{import}\NormalTok{ duckdb} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Load \%\%sql cell magic} +\OperatorTok{\%}\NormalTok{load\_ext sql} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Connect to the database} +\OperatorTok{\%}\NormalTok{sql duckdb:}\OperatorTok{///}\NormalTok{data}\OperatorTok{/}\NormalTok{example\_duck.db }\OperatorTok{{-}{-}}\NormalTok{alias duck} +\end{Highlighting} +\end{Shaded} + +Now that we're connected, let's make some queries! + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*}\NormalTok{ FROM Dragon}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +Thanks to the \texttt{pandas} magic, the resulting return data is +displayed in a format almost identical to our \texttt{pandas} tables but +without an index. + +\section{Tables and Schema}\label{tables-and-schema} + +Looking at the \texttt{Dragon} table above, we can see that it contains +contains three columns. The first of these, \texttt{"name"}, contains +text data. The \texttt{"year"} column contains integer data, with the +constraint that year values must be greater than or equal to 2000. The +final column, \texttt{"cute"}, contains integer data with no +restrictions on allowable values. + +Now, let's look at the \textbf{schema} of our database. A schema +describes the logical structure of a table. Whenever a new table is +created, the creator must declare its schema. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM sqlite\_master} +\NormalTok{WHERE }\BuiltInTok{type}\OperatorTok{=}\StringTok{\textquotesingle{}table\textquotesingle{}} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +type & name & tbl\_name & rootpage & sql \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +The summary above displays information about the database; it contains +four tables named \texttt{sqlite\_sequence}, \texttt{Dragon}, +\texttt{Dish}, and \texttt{Scene}. The rightmost column above lists the +command that was used to construct each table. + +Let's look more closely at the command used to create the +\texttt{Dragon} table (the second entry above). 
+ +\begin{verbatim} +CREATE TABLE Dragon (name TEXT PRIMARY KEY, + year INTEGER CHECK (year >= 2000), + cute INTEGER) +\end{verbatim} + +The statement \texttt{CREATE\ TABLE} is used to specify the +\textbf{schema} of the table -- a description of what logic is used to +organize the table. Schema follows a set format: + +\begin{itemize} +\item + \texttt{ColName}: the name of a column +\item + \texttt{DataType}: the type of data to be stored in a column. Some of + the most common SQL data types are: + + \begin{itemize} + \tightlist + \item + \texttt{INT} (integers) + \item + \texttt{FLOAT} (floating point numbers) + \item + \texttt{TEXT} (strings) + \item + \texttt{BLOB} (arbitrary data, such as audio/video files) + \item + \texttt{DATETIME} (a date and time) + \end{itemize} +\item + \texttt{Constraint}: some restriction on the data to be stored in the + column. Common constraints are: + + \begin{itemize} + \tightlist + \item + \texttt{CHECK} (data must obey a certain condition) + \item + \texttt{PRIMARY\ KEY} (designate a column as the table's primary + key) + \item + \texttt{NOT\ NULL} (data cannot be null) + \item + \texttt{DEFAULT} (a default fill value if no specific entry is + given) + \end{itemize} +\end{itemize} + +Note that different implementations of SQL (e.g., +\href{https://duckdb.org/docs/sql/data_types/overview.html}{DuckDB}, +\href{https://www.sqlite.org/datatype3.html}{SQLite}, +\href{https://dev.mysql.com/doc/refman/8.0/en/data-types.html}{MySQL}) +will support different types. In Data 100, we'll primarily use DuckDB. + +Database tables (also referred to as \textbf{relations}) are structured +much like \texttt{DataFrame}s in \texttt{pandas}. Each row, sometimes +called a \textbf{tuple}, represents a single record in the dataset. Each +column, sometimes called an \textbf{attribute} or \textbf{field}, +describes some feature of the record. + +\subsection{Primary Keys}\label{primary-keys} + +The \textbf{primary key} is a set of column(s) that uniquely identify +each record in the table. In the \texttt{Dragon} table, the +\texttt{"name"} column is its primary key that \emph{uniquely} +identifies each entry in the table. Because \texttt{"name"} is the +primary key of the table, no two entries in the table can have the same +name -- a given value of \texttt{"name"} is unique to each dragon. +Primary keys are used to ensure data integrity and to optimize data +access. + +\subsection{Foreign Keys}\label{foreign-keys} + +A foreign key is a column or set of columns that references a +\emph{primary key in another table}. A foreign key constraint ensures +that a primary key exists in the referenced table. For example, let's +say we have 2 tables, \texttt{student} and \texttt{assignment}, with the +following schemas: + +\begin{verbatim} +CREATE TABLE student ( + student_id INTEGER PRIMARY KEY, + name VARCHAR, + email VARCHAR +); + +CREATE TABLE assignment ( + assignment_id INTEGER PRIMARY KEY, + description VARCHAR +); +\end{verbatim} + +Note that each table has a primary key that uniquely identifies each +student and assignment. + +Say we want to create the table \texttt{grade} to store the score each +student got on each assignment. Naturally, this will depend on the +information in \texttt{student} and \texttt{assignment}; we should not +be saving the grade for a nonexisistent student nor a nonexisistent +assignment. Hence, we can create the columns \texttt{student\_id} and +\texttt{assignment\_id} that reference foreign tables \texttt{student} +and \texttt{assignment}, respectively. 
This way, we ensure that the data +in \texttt{grade} is always up-to-date with the other tables. + +\begin{verbatim} +CREATE TABLE grade ( + student_id INTEGER, + assignment_id INTEGER, + score REAL, + FOREIGN KEY (student_id) REFERENCES student(student_id), + FOREIGN KEY (assignment_id) REFERENCES assignment(assignment_id) +); +\end{verbatim} + +\section{Basic Queries}\label{basic-queries} + +To extract and manipulate data stored in a SQL table, we will need to +familiarize ourselves with the syntax to write pieces of SQL code, which +we call \textbf{queries}. + +\subsection{\texorpdfstring{\texttt{SELECT}ing From +Tables}{SELECTing From Tables}}\label{selecting-from-tables} + +The basic unit of a SQL query is the \texttt{SELECT} statement. +\texttt{SELECT} specifies what columns we would like to extract from a +given table. We use \texttt{FROM} to tell SQL the table from which we +want to \texttt{SELECT} our data. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dragon}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +In SQL, \texttt{*} means ``everything.'' The query above grabs +\emph{all} the columns in \texttt{Dragon} and displays them in the +outputted table. We can also specify a specific subset of columns to be +\texttt{SELECT}ed.~Notice that the outputted columns appear in the order +they were \texttt{SELECT}ed. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT cute, year} +\NormalTok{FROM Dragon}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +cute & year \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +\textbf{Every} SQL query must include both a \texttt{SELECT} and +\texttt{FROM} statement. Intuitively, this makes sense ------ we know +that we'll want to extract some piece of information from the table; to +do so, we also need to indicate what table we want to consider. + +It is important to note that SQL enforces a strict ``order of +operations'' ------ SQL clauses must \emph{always} follow the same +sequence. For example, the \texttt{SELECT} statement must always precede +\texttt{FROM}. This means that any SQL query will follow the same +structure. + +\begin{verbatim} +SELECT +FROM +[additional clauses] +\end{verbatim} + +The additional clauses we use depend on the specific task we're trying +to achieve. We may refine our query to filter on a certain condition, +aggregate a particular column, or join several tables together. We will +spend the rest of this note outlining some useful clauses to build up +our understanding of the order of operations. + +\subsubsection{SQL Style Conventions}\label{sql-style-conventions} + +And just like that, we've already written two SQL queries. There are a +few things to note in the queries above. Firstly, notice that every +``verb'' is written in uppercase. It is convention to write SQL +operations in capital letters, but your code will run just fine even if +you choose to keep things in lowercase. Second, the query above +separates each statement with a new line. 
SQL queries are not impacted +by whitespace within the query; this means that SQL code is typically +written with a new line after each statement to make things more +readable. The semicolon (\texttt{;}) indicates the end of a query. There +are some ``flavors'' of SQL in which a query will not run if no +semicolon is present; however, in Data 100, the SQL version we will use +works with or without an ending semicolon. Queries in these notes will +end with semicolons to build up good habits. + +\subsubsection{\texorpdfstring{Aliasing with +\texttt{AS}}{Aliasing with AS}}\label{aliasing-with-as} + +The \texttt{AS} keyword allows us to give a column a new name (called an +\textbf{alias}) after it has been \texttt{SELECT}ed.~The general syntax +is: + +\begin{verbatim} +SELECT column_in_input_table AS new_name_in_output_table +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT cute AS cuteness, year AS birth} +\NormalTok{FROM Dragon}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +cuteness & birth \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +\subsubsection{\texorpdfstring{Uniqueness with +\texttt{DISTINCT}}{Uniqueness with DISTINCT}}\label{uniqueness-with-distinct} + +To \texttt{SELECT} only the \emph{unique} values in a column, we use the +\texttt{DISTINCT} keyword. This will cause any any duplicate entries in +a column to be removed. If we want to find only the unique years in +\texttt{Dragon}, without any repeats, we would write: + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT DISTINCT year} +\NormalTok{FROM Dragon}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}l@{}} +\toprule\noalign{} +year \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +\subsection{\texorpdfstring{Applying \texttt{WHERE} +Conditions}{Applying WHERE Conditions}}\label{applying-where-conditions} + +The \texttt{WHERE} keyword is used to select only some rows of a table, +filtered on a given Boolean condition. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT name, year} +\NormalTok{FROM Dragon} +\NormalTok{WHERE cute }\OperatorTok{\textgreater{}} \DecValTok{0}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +name & year \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +We can add complexity to the \texttt{WHERE} condition using the keywords +\texttt{AND}, \texttt{OR}, and \texttt{NOT}, much like we would in +Python. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT name, year} +\NormalTok{FROM Dragon} +\NormalTok{WHERE cute }\OperatorTok{\textgreater{}} \DecValTok{0}\NormalTok{ OR year }\OperatorTok{\textgreater{}} \DecValTok{2013}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. 
+\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +name & year \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +To spare ourselves needing to write complicated logical expressions by +combining several conditions, we can also filter for entries that are +\texttt{IN} a specified list of values. This is similar to the use of +\texttt{in} or \texttt{.isin} in Python. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT name, year} +\NormalTok{FROM Dragon} +\NormalTok{WHERE name IN (}\StringTok{\textquotesingle{}hiccup\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}puff\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +name & year \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +\subsubsection{Strings in SQL}\label{strings-in-sql} + +In \texttt{Python}, there is no distinction between double \texttt{""} +and single quotes \texttt{\textquotesingle{}\textquotesingle{}}. SQL, on +the other hand, distinguishes double quotes \texttt{""} as \emph{column +names} and single quotes \texttt{\textquotesingle{}\textquotesingle{}} +as \emph{strings}. For example, we can make the call + +\begin{verbatim} +SELECT "birth weight" +FROM patient +WHERE "first name" = 'Joey' +\end{verbatim} + +to select the column \texttt{"birth\ weight"} from the \texttt{patient} +table and only select rows where the column \texttt{"first\ name"} is +equal to \texttt{\textquotesingle{}Joey\textquotesingle{}}. + +\subsubsection{\texorpdfstring{\texttt{WHERE} WITH \texttt{NULL} +Values}{WHERE WITH NULL Values}}\label{where-with-null-values} + +You may have noticed earlier that our table actually has a missing +value. In SQL, missing data is given the special value \texttt{NULL}. +\texttt{NULL} behaves in a fundamentally different way to other data +types. We can't use the typical operators (=, \textgreater, and +\textless) on \texttt{NULL} values (in fact, \texttt{NULL\ ==\ NULL} +returns \texttt{False}!). Instead, we check to see if a value +\texttt{IS} or \texttt{IS\ NOT} \texttt{NULL}. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT name, cute} +\NormalTok{FROM Dragon} +\NormalTok{WHERE cute IS NOT NULL}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +name & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +\subsection{Sorting and Restricting +Output}\label{sorting-and-restricting-output} + +\subsubsection{\texorpdfstring{Sorting with +\texttt{ORDER\ BY}}{Sorting with ORDER BY}}\label{sorting-with-order-by} + +What if we want the output table to appear in a certain order? The +\texttt{ORDER\ BY} keyword behaves similarly to \texttt{.sort\_values()} +in \texttt{pandas}. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dragon} +\NormalTok{ORDER BY cute}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. 
+\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +By default, \texttt{ORDER\ BY} will display results in ascending order +(\texttt{ASC}) with the lowest values at the top of the table. To sort +in descending order, we use the \texttt{DESC} keyword after specifying +the column to be used for ordering. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dragon} +\NormalTok{ORDER BY cute DESC}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +We can also tell SQL to \texttt{ORDER\ BY} two columns at once. This +will sort the table by the first listed column, then use the values in +the second listed column to break any ties. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dragon} +\NormalTok{ORDER BY year, cute DESC}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +Note that in this example, \texttt{year} is sorted in ascending order +and \texttt{cute} in descending order. If you want \texttt{year} to be +ordered in descending order as well, you need to specify +\texttt{year\ DESC,\ cute\ DESC;}. + +\subsubsection{\texorpdfstring{\texttt{LIMIT} +vs.~\texttt{OFFSET}}{LIMIT vs.~OFFSET}}\label{limit-vs.-offset} + +In many instances, we are only concerned with a certain number of rows +in the output table (for example, wanting to find the first two dragons +in the table). The \texttt{LIMIT} keyword restricts the output to a +specified number of rows. It serves a function similar to that of +\texttt{.head()} in \texttt{pandas}. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dragon} +\NormalTok{LIMIT }\DecValTok{2}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +The \texttt{OFFSET} keyword indicates the index at which \texttt{LIMIT} +should start. In other words, we can use \texttt{OFFSET} to shift where +the \texttt{LIMIT}ing begins by a specified number of rows. For example, +we might care about the dragons that are at positions 2 and 3 in the +table. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dragon} +\NormalTok{LIMIT }\DecValTok{2} +\NormalTok{OFFSET }\DecValTok{1}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/example_duck.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +With these keywords in hand, let's update our SQL order of operations. 
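+
+As a quick recap, these keywords can all appear together in a single
+query. The sketch below is illustrative only (it is not taken from the
+lecture notebook); it uses the same \texttt{Dragon} table as above to
+sort the dragons from cutest to least cute, skip the first result, and
+return the next two:
+
+\begin{verbatim}
+SELECT name, year, cute
+FROM Dragon
+ORDER BY cute DESC
+LIMIT 2
+OFFSET 1;
+\end{verbatim}
+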
+Remember: \emph{every} SQL query must list clauses in this order.
+
+\begin{verbatim}
+SELECT <column expression list>
+FROM <table>
+[WHERE <predicate>]
+[ORDER BY <column list>]
+[LIMIT <number of rows>]
+[OFFSET <number of rows>];
+\end{verbatim}
+
+\section{Summary}\label{summary-3}
+
+Let's summarize what we've learned so far. We know that \texttt{SELECT}
+and \texttt{FROM} are the fundamental building blocks of any SQL query.
+We can augment these two keywords with additional clauses to refine the
+data in our output table.
+
+Any clauses that we include must follow a strict ordering within the
+query:
+
+\begin{verbatim}
+SELECT
+FROM
+[WHERE ] +[ORDER BY ] +[LIMIT ] +[OFFSET ] +\end{verbatim} + +Here, any clause contained in square brackets \texttt{{[}\ {]}} is +optional ------ we only need to use the keyword if it is relevant to the +table operation we want to perform. Also note that by convention, we use +all caps for keywords in SQL statements and use newlines to make code +more readable. + +\bookmarksetup{startatroot} + +\chapter{SQL II}\label{sql-ii} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Perform aggregations using \texttt{GROUP\ BY} +\item + Introduce the ability to filter groups +\item + Perform data cleaning and text manipulation in SQL +\item + Join data across tables +\end{itemize} + +\end{tcolorbox} + +In this lecture, we'll continue our work from last time to introduce +some advanced SQL syntax. + +First, let's load in the \texttt{basic\_examples.db} database. + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Load the SQL Alchemy Python library and DuckDB} +\ImportTok{import}\NormalTok{ sqlalchemy} +\ImportTok{import}\NormalTok{ duckdb} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Load \%\%sql cell magic} +\OperatorTok{\%}\NormalTok{load\_ext sql} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# Connect to the database} +\OperatorTok{\%}\NormalTok{sql duckdb:}\OperatorTok{///}\NormalTok{data}\OperatorTok{/}\NormalTok{basic\_examples.db }\OperatorTok{{-}{-}}\NormalTok{alias basic} +\end{Highlighting} +\end{Shaded} + +\section{\texorpdfstring{Aggregating with +\texttt{GROUP\ BY}}{Aggregating with GROUP BY}}\label{aggregating-with-group-by} + +At this point, we've seen that SQL offers much of the same functionality +that was given to us by \texttt{pandas}. We can extract data from a +table, filter it, and reorder it to suit our needs. + +In \texttt{pandas}, much of our analysis work relied heavily on being +able to use \texttt{.groupby()} to aggregate across the rows of our +dataset. SQL's answer to this task is the (very conveniently named) +\texttt{GROUP\ BY} clause. While the outputs of \texttt{GROUP\ BY} are +similar to those of \texttt{.groupby()} ------ in both cases, we obtain +an output table where some column has been used for grouping ------ the +syntax and logic used to group data in SQL are fairly different to the +\texttt{pandas} implementation. + +To illustrate \texttt{GROUP\ BY}, we will consider the \texttt{Dish} +table from our database. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Dish}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & type & cost \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +Notice that there are multiple dishes of the same \texttt{type}. What if +we wanted to find the total costs of dishes of a certain \texttt{type}? +To accomplish this, we would write the following code. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\BuiltInTok{type}\NormalTok{, SUM(cost)} +\NormalTok{FROM Dish} +\NormalTok{GROUP BY }\BuiltInTok{type}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +type & sum("cost") \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +What is going on here? The statement \texttt{GROUP\ BY\ type} tells SQL +to group the data based on the value contained in the \texttt{type} +column (whether a record is an appetizer, entree, or dessert). +\texttt{SUM(cost)} sums up the costs of dishes in each \texttt{type} and +displays the result in the output table. + +You may be wondering: why does \texttt{SUM(cost)} come before the +command to \texttt{GROUP\ BY\ type}? Don't we need to form groups before +we can count the number of entries in each? Remember that SQL is a +\emph{declarative} programming language ------ a SQL programmer simply +states what end result they would like to see, and leaves the task of +figuring out \emph{how} to obtain this result to SQL itself. This means +that SQL queries sometimes don't follow what a reader sees as a +``logical'' sequence of thought. Instead, SQL requires that we follow +its set order of operations when constructing queries. So long as we +follow this order, SQL will handle the underlying logic. + +In practical terms: our goal with this query was to output the total +\texttt{cost}s of each \texttt{type}. To communicate this to SQL, we say +that we want to \texttt{SELECT} the \texttt{SUM}med \texttt{cost} values +for each \texttt{type} group. + +There are many aggregation functions that can be used to aggregate the +data contained in each group. Some common examples are: + +\begin{itemize} +\tightlist +\item + \texttt{COUNT}: count the number of rows associated with each group +\item + \texttt{MIN}: find the minimum value of each group +\item + \texttt{MAX}: find the maximum value of each group +\item + \texttt{SUM}: sum across all records in each group +\item + \texttt{AVG}: find the average value of each group +\end{itemize} + +We can easily compute multiple aggregations all at once (a task that was +very tricky in \texttt{pandas}). + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\BuiltInTok{type}\NormalTok{, SUM(cost), MIN(cost), MAX(name)} +\NormalTok{FROM Dish} +\NormalTok{GROUP BY }\BuiltInTok{type}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +type & sum("cost") & min("cost") & max("name") \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +To count the number of rows associated with each group, we use the +\texttt{COUNT} keyword. Calling \texttt{COUNT(*)} will compute the total +number of rows in each group, including rows with null values. Its +\texttt{pandas} equivalent is \texttt{.groupby().size()}. + +Recall the \texttt{Dragon} table from the previous lecture: + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\OperatorTok{*}\NormalTok{ FROM Dragon}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. 
+\end{verbatim} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +name & year & cute \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +Notice that \texttt{COUNT(*)} and \texttt{COUNT(cute)} result in +different outputs. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT year, COUNT(}\OperatorTok{*}\NormalTok{)} +\NormalTok{FROM Dragon} +\NormalTok{GROUP BY year}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +year & count\_star() \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT year, COUNT(cute)} +\NormalTok{FROM Dragon} +\NormalTok{GROUP BY year}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +year & count(cute) \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +With this definition of \texttt{GROUP\ BY} in hand, let's update our SQL +order of operations. Remember: \emph{every} SQL query must list clauses +in this order. + +\begin{verbatim} +SELECT +FROM
+[WHERE ] +[GROUP BY ] +[ORDER BY ] +[LIMIT ] +[OFFSET ]; +\end{verbatim} + +Note that we can use the \texttt{AS} keyword to rename columns during +the selection process and that column expressions may include +aggregation functions (\texttt{MAX}, \texttt{MIN}, etc.). + +\section{Filtering Groups}\label{filtering-groups} + +Now, what if we only want groups that meet a certain condition? +\texttt{HAVING} filters groups by applying some condition across all +rows in each group. We interpret it as a way to keep only the groups +\texttt{HAVING} some condition. Note the difference between +\texttt{WHERE} and \texttt{HAVING}: we use \texttt{WHERE} to filter +rows, whereas we use \texttt{HAVING} to filter \emph{groups}. +\texttt{WHERE} precedes \texttt{HAVING} in terms of how SQL executes a +query. + +Let's take a look at the \texttt{Dish} table to see how we can use +\texttt{HAVING}. Say we want to group dishes with a cost greater than 4 +by \texttt{type} and only keep groups where the max cost is less than +10. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT }\BuiltInTok{type}\NormalTok{, COUNT(}\OperatorTok{*}\NormalTok{)} +\NormalTok{FROM Dish} +\NormalTok{WHERE cost }\OperatorTok{\textgreater{}} \DecValTok{4} +\NormalTok{GROUP BY }\BuiltInTok{type} +\NormalTok{HAVING MAX(cost) }\OperatorTok{\textless{}} \DecValTok{10}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +Done. +\end{verbatim} + +\begin{longtable}[]{@{}ll@{}} +\toprule\noalign{} +type & count\_star() \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +\end{longtable} + +Here, we first use \texttt{WHERE} to filter for rows with a cost greater +than 4. We then group our values by \texttt{type} before applying the +\texttt{HAVING} operator. With \texttt{HAVING}, we can filter our groups +based on if the max cost is less than 10. + +\section{Summary: SQL}\label{summary-sql} + +With this definition of \texttt{GROUP\ BY} and \texttt{HAVING} in hand, +let's update our SQL order of operations. Remember: \emph{every} SQL +query must list clauses in this order. + +\begin{verbatim} +SELECT +FROM
+[WHERE ] +[GROUP BY ] +[ORDER BY ] +[LIMIT ] +[OFFSET ]; +\end{verbatim} + +Note that we can use the \texttt{AS} keyword to rename columns during +the selection process and that column expressions may include +aggregation functions (\texttt{MAX}, \texttt{MIN}, etc.). + +\section{EDA in SQL}\label{eda-in-sql} + +In the last lecture, we mostly worked under the assumption that our data +had already been cleaned. However, as we saw in our first pass through +the data science lifecycle, we're very unlikely to be given data that is +free of formatting issues. With this in mind, we'll want to learn how to +clean and transform data in SQL. + +Our typical workflow when working with ``big data'' is: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Use SQL to query data from a database +\item + Use Python (with \texttt{pandas}) to analyze this data in detail +\end{enumerate} + +We can, however, still perform simple data cleaning and re-structuring +using SQL directly. To do so, we'll use the \texttt{Title} table from +the \texttt{imdb\_duck} database, which contains information about +movies and actors. + +Let's load in the \texttt{imdb\_duck} database. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ os} +\NormalTok{os.environ[}\StringTok{"TQDM\_DISABLE"}\NormalTok{] }\OperatorTok{=} \StringTok{"1"} +\ControlFlowTok{if}\NormalTok{ os.path.exists(}\StringTok{"/home/jovyan/shared/sql/imdb\_duck.db"}\NormalTok{):} +\NormalTok{ imdbpath }\OperatorTok{=} \StringTok{"duckdb:////home/jovyan/shared/sql/imdb\_duck.db"} +\ControlFlowTok{elif}\NormalTok{ os.path.exists(}\StringTok{"data/imdb\_duck.db"}\NormalTok{):} +\NormalTok{ imdbpath }\OperatorTok{=} \StringTok{"duckdb:///data/imdb\_duck.db"} +\ControlFlowTok{else}\NormalTok{:} + \ImportTok{import}\NormalTok{ gdown} +\NormalTok{ url }\OperatorTok{=} \StringTok{\textquotesingle{}https://drive.google.com/uc?id=10tKOHGLt9QoOgq5Ii{-}FhxpB9lDSQgl1O\textquotesingle{}} +\NormalTok{ output\_path }\OperatorTok{=} \StringTok{\textquotesingle{}data/imdb\_duck.db\textquotesingle{}} +\NormalTok{ gdown.download(url, output\_path, quiet}\OperatorTok{=}\VariableTok{False}\NormalTok{)} +\NormalTok{ imdbpath }\OperatorTok{=} \StringTok{"duckdb:///data/imdb\_duck.db"} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{from}\NormalTok{ sqlalchemy }\ImportTok{import}\NormalTok{ create\_engine} +\NormalTok{imdb\_engine }\OperatorTok{=}\NormalTok{ create\_engine(imdbpath, connect\_args}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}read\_only\textquotesingle{}}\NormalTok{: }\VariableTok{True}\NormalTok{\})} +\OperatorTok{\%}\NormalTok{sql imdb\_engine }\OperatorTok{{-}{-}}\NormalTok{alias imdb} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.ParserException) Parser Error: syntax error at or near "imdb_engine" +[SQL: imdb_engine] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +Since we'll be working with the \texttt{Title} table, let's take a quick +look at what it contains. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql imdb } + +\NormalTok{SELECT }\OperatorTok{*} +\NormalTok{FROM Title} +\NormalTok{WHERE primaryTitle IN (}\StringTok{\textquotesingle{}Ginny \& Georgia\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}What If...?\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Succession\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Veep\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Tenet\textquotesingle{}}\NormalTok{)} +\NormalTok{LIMIT }\DecValTok{10}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.ParserException) Parser Error: syntax error at or near "imdb" +[SQL: imdb + +SELECT * +FROM Title +WHERE primaryTitle IN ('Ginny & Georgia', 'What If...?', 'Succession', 'Veep', 'Tenet') +LIMIT 10;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +\subsection{\texorpdfstring{Matching Text using +\texttt{LIKE}}{Matching Text using LIKE}}\label{matching-text-using-like} + +One common task we encountered in our first look at EDA was needing to +match string data. For example, we might want to remove entries +beginning with the same prefix as part of the data cleaning process. + +In SQL, we use the \texttt{LIKE} operator to (you guessed it) look for +strings that are \emph{like} a given string pattern. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT titleType, primaryTitle} +\NormalTok{FROM Title} +\NormalTok{WHERE primaryTitle LIKE }\StringTok{\textquotesingle{}Star Wars: Episode I {-} The Phantom Menace\textquotesingle{}} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title + ^ +[SQL: SELECT titleType, primaryTitle +FROM Title +WHERE primaryTitle LIKE 'Star Wars: Episode I - The Phantom Menace'] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +What if we wanted to find \emph{all} Star Wars movies? \texttt{\%} is +the wildcard operator, it means ``look for any character, any number of +times''. This makes it helpful for identifying strings that are similar +to our desired pattern, even when we don't know the full text of what we +aim to extract. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT titleType, primaryTitle} +\NormalTok{FROM Title} +\NormalTok{WHERE primaryTitle LIKE }\StringTok{\textquotesingle{}\%Star Wars\%\textquotesingle{}} +\NormalTok{LIMIT }\DecValTok{10}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title + ^ +[SQL: SELECT titleType, primaryTitle +FROM Title +WHERE primaryTitle LIKE '%Star Wars%' +LIMIT 10;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +Alternatively, we can use RegEx! DuckDB and most real DBMSs allow for +this. Note that here, we have to use the \texttt{SIMILAR\ TO} operater +rather than \texttt{LIKE}. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT titleType, primaryTitle} +\NormalTok{FROM Title} +\NormalTok{WHERE primaryTitle SIMILAR TO }\StringTok{\textquotesingle{}.*Star Wars*.\textquotesingle{}} +\NormalTok{LIMIT }\DecValTok{10}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title + ^ +[SQL: SELECT titleType, primaryTitle +FROM Title +WHERE primaryTitle SIMILAR TO '.*Star Wars*.' +LIMIT 10;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +\subsection{\texorpdfstring{\texttt{CAST}ing Data +Types}{CASTing Data Types}}\label{casting-data-types} + +A common data cleaning task is converting data to the correct variable +type. The \texttt{CAST} keyword is used to generate a new output column. +Each entry in this output column is the result of converting the data in +an existing column to a new data type. For example, we may wish to +convert numeric data stored as a string to an integer. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT primaryTitle, CAST(runtimeMinutes AS INT)} +\NormalTok{FROM Title}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title; + ^ +[SQL: SELECT primaryTitle, CAST(runtimeMinutes AS INT) +FROM Title;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +We use \texttt{CAST} when \texttt{SELECT}ing colunns for our output +table. In the example above, we want to \texttt{SELECT} the columns of +integer year and runtime data that is created by the \texttt{CAST}. + +SQL will automatically name a new column according to the command used +to \texttt{SELECT} it, which can lead to unwieldy column names. We can +rename the \texttt{CAST}ed column using the \texttt{AS} keyword. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{SELECT primaryTitle AS title, CAST(runtimeMinutes AS INT) AS minutes, CAST(startYear AS INT) AS year} +\NormalTok{FROM Title} +\NormalTok{LIMIT }\DecValTok{5}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title + ^ +[SQL: SELECT primaryTitle AS title, CAST(runtimeMinutes AS INT) AS minutes, CAST(startYear AS INT) AS year +FROM Title +LIMIT 5;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +\subsection{\texorpdfstring{Using Conditional Statements with +\texttt{CASE}}{Using Conditional Statements with CASE}}\label{using-conditional-statements-with-case} + +When working with \texttt{pandas}, we often ran into situations where we +wanted to generate new columns using some form of conditional statement. +For example, say we wanted to describe a film title as ``old,'' +``mid-aged,'' or ``new,'' depending on the year of its release. + +In SQL, conditional operations are performed using a \texttt{CASE} +clause. 
Conceptually, \texttt{CASE} behaves much like the \texttt{CAST} +operation: it creates a new column that we can then \texttt{SELECT} to +appear in the output. The syntax for a \texttt{CASE} clause is as +follows: + +\begin{verbatim} +CASE WHEN THEN + WHEN THEN + ... + ELSE + END +\end{verbatim} + +Scanning through the skeleton code above, you can see that the logic is +similar to that of an \texttt{if} statement in Python. The conditional +statement is first opened by calling \texttt{CASE}. Each new condition +is specified by \texttt{WHEN}, with \texttt{THEN} indicating what value +should be filled if the condition is met. \texttt{ELSE} specifies the +value that should be filled if no other conditions are met. Lastly, +\texttt{END} indicates the end of the conditional statement; once +\texttt{END} has been called, SQL will continue evaluating the query as +usual. + +Let's see this in action. In the example below, we give the new column +created by the \texttt{CASE} statement the name \texttt{movie\_age}. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\OperatorTok{/*}\NormalTok{ If a movie was filmed before }\DecValTok{1950}\NormalTok{, it }\KeywordTok{is} \StringTok{"old"} +\NormalTok{Otherwise, }\ControlFlowTok{if}\NormalTok{ a movie was filmed before }\DecValTok{2000}\NormalTok{, it }\KeywordTok{is} \StringTok{"mid{-}aged"} +\NormalTok{Else, a movie }\KeywordTok{is} \StringTok{"new"} \OperatorTok{*/} + +\NormalTok{SELECT titleType, startYear,} +\NormalTok{CASE WHEN startYear }\OperatorTok{\textless{}} \DecValTok{1950}\NormalTok{ THEN }\StringTok{\textquotesingle{}old\textquotesingle{}} +\NormalTok{ WHEN startYear }\OperatorTok{\textless{}} \DecValTok{2000}\NormalTok{ THEN }\StringTok{\textquotesingle{}mid{-}aged\textquotesingle{}} +\NormalTok{ ELSE }\StringTok{\textquotesingle{}new\textquotesingle{}} +\NormalTok{ END AS movie\_age} +\NormalTok{FROM Title}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 10: FROM Title; + ^ +[SQL: /* If a movie was filmed before 1950, it is "old" +Otherwise, if a movie was filmed before 2000, it is "mid-aged" +Else, a movie is "new" */ + +SELECT titleType, startYear, +CASE WHEN startYear < 1950 THEN 'old' + WHEN startYear < 2000 THEN 'mid-aged' + ELSE 'new' + END AS movie_age +FROM Title;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +\section{\texorpdfstring{\texttt{JOIN}ing +Tables}{JOINing Tables}}\label{joining-tables-1} + +At this point, we're well-versed in using SQL as a tool to clean, +manipulate, and transform data in a table. Notice that this sentence +referred to one \emph{table}, specifically. What happens if the data we +need is distributed across multiple tables? This is an important +consideration when using SQL ------ recall that we first introduced SQL +as a language to query from databases. Databases often store data in a +multidimensional structure. In other words, information is stored across +several tables, with each table containing a small subset of all the +data housed by the database. + +A common way of organizing a database is by using a \textbf{star +schema}. A star schema is composed of two types of tables. 
A +\textbf{fact table} is the central table of the database ------ it +contains the information needed to link entries across several +\textbf{dimension tables}, which contain more detailed information about +the data. + +Say we were working with a database about boba offerings in Berkeley. +The dimension tables of the database might contain information about tea +varieties and boba toppings. The fact table would be used to link this +information across the various dimension tables. + +If we explicitly mark the relationships between tables, we start to see +the star-like structure of the star schema. + +To join data across multiple tables, we'll use the (creatively named) +\texttt{JOIN} keyword. We'll make things easier for now by first +considering the simpler \texttt{cats} dataset, which consists of the +tables \texttt{s} and \texttt{t}. + +To perform a join, we amend the \texttt{FROM} clause. You can think of +this as saying, ``\texttt{SELECT} my data \texttt{FROM} tables that have +been \texttt{JOIN}ed together.'' + +Remember: SQL does not consider newlines or whitespace when interpreting +queries. The indentation given in the example below is to help improve +readability. If you wish, you can write code that does not follow this +formatting. + +\begin{verbatim} +SELECT +FROM table_1 + JOIN table_2 + ON key_1 = key_2; +\end{verbatim} + +We also need to specify what column from each table should be used to +determine matching entries. By defining these keys, we provide SQL with +the information it needs to pair rows of data together. + +The most commonly used type of SQL \texttt{JOIN} is the \textbf{inner +join}. It turns out you're already familiar with what an inner join +does, and how it works -- this is the type of join we've been using in +\texttt{pandas} all along! In an inner join, we combine every row in our +first table with its matching entry in the second table. If a row from +either table does not have a match in the other table, it is omitted +from the output. + +In a \textbf{cross join}, \emph{all} possible combinations of rows +appear in the output table, regardless of whether or not rows share a +matching key. Because all rows are joined, even if there is no matching +key, it is not necessary to specify what keys to consider in an +\texttt{ON} statement. A cross join is also known as a cartesian +product. + +Conceptually, we can interpret an inner join as a cross join, followed +by removing all rows that do not share a matching key. Notice that the +output of the inner join above contains all rows of the cross join +example that contain a single color across the entire row. + +In a \textbf{left outer join}, \emph{all} rows in the left table are +kept in the output table. If a row in the right table shares a match +with the left table, this row will be kept; otherwise, the rows in the +right table are omitted from the output. We can fill in any missing +values with \texttt{NULL}. + +A \textbf{right outer join} keeps all rows in the right table. Rows in +the left table are only kept if they share a match in the right table. +Again, we can fill in any missing values with \texttt{NULL}. + +In a \textbf{full outer join}, all rows that have a match between the +two tables are joined together. If a row has no match in the second +table, then the values of the columns for that second table are filled +with \texttt{NULL}. In other words, a full outer join performs an inner +join \emph{while still keeping} rows that have no match in the other +table. 
This is best understood visually: + +We have kept the same output achieved using an inner join, with the +addition of partially null rows for entries in \texttt{s} and \texttt{t} +that had no match in the second table. + +\subsection{\texorpdfstring{Aliasing in +\texttt{JOIN}s}{Aliasing in JOINs}}\label{aliasing-in-joins} + +When joining tables, we often create aliases for table names (similarly +to what we did with column names in the last lecture). We do this as it +is typically easier to refer to aliases, especially when we are working +with long table names. We can even reference columns using aliased table +names! + +Let's say we want to determine the average rating of various movies. +We'll need to \texttt{JOIN} the \texttt{Title} and \texttt{Rating} +tables and can create aliases for both tables. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} + +\NormalTok{SELECT primaryTitle, averageRating} +\NormalTok{FROM Title AS T INNER JOIN Rating AS R} +\NormalTok{ON T.tconst }\OperatorTok{=}\NormalTok{ R.tconst}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title AS T INNER JOIN Rating AS R + ^ +[SQL: SELECT primaryTitle, averageRating +FROM Title AS T INNER JOIN Rating AS R +ON T.tconst = R.tconst;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +Note that the \texttt{AS} is actually optional! We can create aliases +for our tables even without it, but we usually include it for clarity. + +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} + +\NormalTok{SELECT primaryTitle, averageRating} +\NormalTok{FROM Title T INNER JOIN Rating R} +\NormalTok{ON T.tconst }\OperatorTok{=}\NormalTok{ R.tconst}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 2: FROM Title T INNER JOIN Rating R + ^ +[SQL: SELECT primaryTitle, averageRating +FROM Title T INNER JOIN Rating R +ON T.tconst = R.tconst;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +\subsection{Common Table Expressions}\label{common-table-expressions} + +For more sophisticated data problems, the queries can become very +complex. Common table expressions (CTEs) allow us to break down these +complex queries into more manageable parts. To do so, we create +temporary tables corresponding to different aspects of the problem and +then reference them in the final query: + +\begin{verbatim} +WITH +table_name1 AS ( + SELECT ... +), +table_name2 AS ( + SELECT ... +) +SELECT ... +FROM +table_name1, +table_name2, ... +\end{verbatim} + +Let's say we want to identify the top 10 action movies that are highly +rated (with an average rating greater than 7) and popular (having more +than 5000 votes), along with the primary actors who are the most +popular. We can use CTEs to break this query down into separate +problems. Initially, we can filter to find good action movies and +prolific actors separately. This way, in our final join, we only need to +change the order. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\OperatorTok{\%\%}\NormalTok{sql} +\NormalTok{WITH } +\NormalTok{good\_action\_movies AS (} +\NormalTok{ SELECT }\OperatorTok{*} +\NormalTok{ FROM Title T JOIN Rating R ON T.tconst }\OperatorTok{=}\NormalTok{ R.tconst } +\NormalTok{ WHERE genres LIKE }\StringTok{\textquotesingle{}\%Action\%\textquotesingle{}}\NormalTok{ AND averageRating }\OperatorTok{\textgreater{}} \DecValTok{7}\NormalTok{ AND numVotes }\OperatorTok{\textgreater{}} \DecValTok{5000} +\NormalTok{),} +\NormalTok{prolific\_actors AS (} +\NormalTok{ SELECT N.nconst, primaryName, COUNT(}\OperatorTok{*}\NormalTok{) }\ImportTok{as}\NormalTok{ numRoles} +\NormalTok{ FROM Name N JOIN Principal P ON N.nconst }\OperatorTok{=}\NormalTok{ P.nconst} +\NormalTok{ WHERE category }\OperatorTok{=} \StringTok{\textquotesingle{}actor\textquotesingle{}} +\NormalTok{ GROUP BY N.nconst, primaryName} +\NormalTok{)} +\NormalTok{SELECT primaryTitle, primaryName, numRoles, ROUND(averageRating) AS rating} +\NormalTok{FROM good\_action\_movies m, prolific\_actors a, principal p} +\NormalTok{WHERE p.tconst }\OperatorTok{=}\NormalTok{ m.tconst AND p.nconst }\OperatorTok{=}\NormalTok{ a.nconst} +\NormalTok{ORDER BY rating DESC, numRoles DESC} +\NormalTok{LIMIT }\DecValTok{10}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} + * duckdb:///data/basic_examples.db +(duckdb.duckdb.CatalogException) Catalog Error: Table with name Title does not exist! +Did you mean "temp.information_schema.tables"? +LINE 4: F... + ^ +[SQL: WITH +good_action_movies AS ( + SELECT * + FROM Title T JOIN Rating R ON T.tconst = R.tconst + WHERE genres LIKE '%Action%' AND averageRating > 7 AND numVotes > 5000 +), +prolific_actors AS ( + SELECT N.nconst, primaryName, COUNT(*) as numRoles + FROM Name N JOIN Principal P ON N.nconst = P.nconst + WHERE category = 'actor' + GROUP BY N.nconst, primaryName +) +SELECT primaryTitle, primaryName, numRoles, ROUND(averageRating) AS rating +FROM good_action_movies m, prolific_actors a, principal p +WHERE p.tconst = m.tconst AND p.nconst = a.nconst +ORDER BY rating DESC, numRoles DESC +LIMIT 10;] +(Background on this error at: https://sqlalche.me/e/20/f405) +\end{verbatim} + +\bookmarksetup{startatroot} + +\chapter{Logistic Regression I}\label{logistic-regression-i} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Understand the difference between regression and classification +\item + Derive the logistic regression model for classifying data +\item + Quantify the error of our logistic regression model with cross-entropy + loss +\end{itemize} + +\end{tcolorbox} + +Up until this point in the class , we've focused on \textbf{regression} +tasks - that is, predicting an \emph{unbounded numerical quantity} from +a given dataset. We discussed optimization, feature engineering, and +regularization all in the context of performing regression to predict +some quantity. + +Now that we have this deep understanding of the modeling process, let's +expand our knowledge of possible modeling tasks. 
+ +\section{Classification}\label{classification} + +In the next two lectures, we'll tackle the task of +\textbf{classification}. A classification problem aims to classify data +into \emph{categories}. Unlike in regression, where we predicted a +numeric output, classification involves predicting some +\textbf{categorical variable}, or \textbf{response}, \(y\). Examples of +classification tasks include: + +\begin{itemize} +\tightlist +\item + Predicting which team won from its turnover percentage +\item + Predicting the day of the week of a meal from the total restaurant + bill +\item + Predicting the model of car from its horsepower +\end{itemize} + +There are a couple of different types of classification: + +\begin{itemize} +\tightlist +\item + \textbf{Binary classification}: classify data into two classes, and + responses \(y\) are either 0 or 1 +\item + \textbf{Multiclass classification}: classify data into multiple + classes (e.g., image labeling, next word in a sentence, etc.) +\end{itemize} + +We can further combine multiple related classfication predictions (e.g., +translation, voice recognition, etc.) to tackle complex problems through +structured prediction tasks. + +In Data 100, we will mostly deal with \textbf{binary classification}, +where we are attempting to classify data into one of two classes. + +\subsection{Modeling Process}\label{modeling-process} + +To build a classification model, we need to modify our modeling workflow +slightly. Recall that in regression we: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Created a design matrix of numeric features +\item + Defined our model as a linear combination of these numeric features +\item + Used the model to output numeric predictions +\end{enumerate} + +In classification, however, we no longer want to output numeric +predictions; instead, we want to predict the class to which a datapoint +belongs. This means that we need to update our workflow. To build a +classification model, we will: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Create a design matrix of numeric features. +\item + Define our model as a linear combination of these numeric features, + transformed by a non-linear \textbf{sigmoid function}. This outputs a + numeric quantity. +\item + Apply a \textbf{decision rule} to interpret the outputted quantity and + decide a classification. +\item + Output a predicted class. +\end{enumerate} + +There are two key differences: as we'll soon see, we need to incorporate +a non-linear transformation to capture the non-linear relationships +hidden in our data. We do so by applying the sigmoid function to a +linear combination of the features. Secondly, we must apply a decision +rule to convert the numeric quantities computed by our model into an +actual class prediction. This can be as simple as saying that any +datapoint with a feature greater than some number \(x\) belongs to Class +1. + +\textbf{Regression:} + +\textbf{Classification:} + +This was a very high-level overview. Let's walk through the process in +detail to clarify what we mean. + +\section{Deriving the Logistic Regression +Model}\label{deriving-the-logistic-regression-model} + +Throughout this lecture, we will work with the \texttt{games} dataset, +which contains information about games played in the NBA basketball +league. Our goal will be to use a basketball team's +\texttt{"GOAL\_DIFF"} to predict whether or not a given team won their +game (\texttt{"WON"}). 
If a team wins their game, we'll say they belong +to Class 1. If they lose, they belong to Class 0. + +For those who are curious, \texttt{"GOAL\_DIFF"} represents the +difference in successful field goal percentages between the two +competing teams. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ warnings} +\NormalTok{warnings.filterwarnings(}\StringTok{"ignore"}\NormalTok{)} + +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\NormalTok{np.seterr(divide}\OperatorTok{=}\StringTok{\textquotesingle{}ignore\textquotesingle{}}\NormalTok{)} + +\NormalTok{games }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/games"}\NormalTok{).dropna()} +\NormalTok{games.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllllll@{}} +\toprule\noalign{} +& GAME\_ID & TEAM\_NAME & MATCHUP & WON & GOAL\_DIFF & AST \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 21701216 & Dallas Mavericks & DAL vs. PHX & 0 & -0.251 & 20 \\ +1 & 21700846 & Phoenix Suns & PHX @ GSW & 0 & -0.237 & 13 \\ +2 & 21700071 & San Antonio Spurs & SAS @ ORL & 0 & -0.234 & 19 \\ +3 & 21700221 & New York Knicks & NYK @ TOR & 0 & -0.234 & 17 \\ +4 & 21700306 & Miami Heat & MIA @ NYK & 0 & -0.222 & 21 \\ +\end{longtable} + +Let's visualize the relationship between \texttt{"GOAL\_DIFF"} and +\texttt{"WON"} using the Seaborn function \texttt{sns.stripplot}. A +strip plot automatically introduces a small amount of random noise to +\textbf{jitter} the data. Recall that all values in the \texttt{"WON"} +column are either 1 (won) or 0 (lost) -- if we were to directly plot +them without jittering, we would see severe overplotting. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} + +\NormalTok{sns.stripplot(data}\OperatorTok{=}\NormalTok{games, x}\OperatorTok{=}\StringTok{"GOAL\_DIFF"}\NormalTok{, y}\OperatorTok{=}\StringTok{"WON"}\NormalTok{, orient}\OperatorTok{=}\StringTok{"h"}\NormalTok{, hue}\OperatorTok{=}\StringTok{\textquotesingle{}WON\textquotesingle{}}\NormalTok{, alpha}\OperatorTok{=}\FloatTok{0.7}\NormalTok{)} +\CommentTok{\# By default, sns.stripplot plots 0, then 1. We invert the y axis to reverse this behavior} +\NormalTok{plt.gca().invert\_yaxis()}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-3-output-1.pdf} + +This dataset is unlike anything we've seen before -- our target variable +contains only two unique values! (Remember that each y value is either 0 +or 1; the plot above jitters the y data slightly for ease of reading.) + +The regression models we have worked with always assumed that we were +attempting to predict a continuous target. If we apply a linear +regression model to this dataset, something strange happens. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ sklearn.linear\_model }\ImportTok{as}\NormalTok{ lm} + +\NormalTok{X, Y }\OperatorTok{=}\NormalTok{ games[[}\StringTok{"GOAL\_DIFF"}\NormalTok{]], games[}\StringTok{"WON"}\NormalTok{]} +\NormalTok{regression\_model }\OperatorTok{=}\NormalTok{ lm.LinearRegression()} +\NormalTok{regression\_model.fit(X, Y)} + +\NormalTok{plt.plot(X.squeeze(), regression\_model.predict(X), }\StringTok{"k"}\NormalTok{)} +\NormalTok{sns.stripplot(data}\OperatorTok{=}\NormalTok{games, x}\OperatorTok{=}\StringTok{"GOAL\_DIFF"}\NormalTok{, y}\OperatorTok{=}\StringTok{"WON"}\NormalTok{, orient}\OperatorTok{=}\StringTok{"h"}\NormalTok{, hue}\OperatorTok{=}\StringTok{\textquotesingle{}WON\textquotesingle{}}\NormalTok{, alpha}\OperatorTok{=}\FloatTok{0.7}\NormalTok{)} +\NormalTok{plt.gca().invert\_yaxis()}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-4-output-1.pdf} + +The linear regression fit follows the data as closely as it can. +However, this approach has a key flaw - the predicted output, +\(\hat{y}\), can be outside the range of possible classes (there are +predictions above 1 and below 0). This means that the output can't +always be interpreted (what does it mean to predict a class of -2.3?). + +Our usual linear regression framework won't work here. Instead, we'll +need to get more creative. + +\subsection{Graph of Averages}\label{graph-of-averages} + +Back in +\href{https://inferentialthinking.com/chapters/08/1/Applying_a_Function_to_a_Column.html\#example-prediction}{Data +8}, you gradually built up to the concept of linear regression by using +the \textbf{graph of averages}. Before you knew the mathematical +underpinnings of the regression line, you took a more intuitive +approach: you bucketed the \(x\) data into bins of common values, then +computed the average \(y\) for all datapoints in the same bin. The +result gave you the insight needed to derive the regression fit. + +Let's take the same approach as we grapple with our new classification +task. In the cell below, we 1) bucket the \texttt{"GOAL\_DIFF"} data +into bins of similar values and 2) compute the average \texttt{"WON"} +value of all datapoints in a bin. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# bucket the GOAL\_DIFF data into 20 bins} +\NormalTok{bins }\OperatorTok{=}\NormalTok{ pd.cut(games[}\StringTok{"GOAL\_DIFF"}\NormalTok{], }\DecValTok{20}\NormalTok{)} +\NormalTok{games[}\StringTok{"bin"}\NormalTok{] }\OperatorTok{=}\NormalTok{ [(b.left }\OperatorTok{+}\NormalTok{ b.right) }\OperatorTok{/} \DecValTok{2} \ControlFlowTok{for}\NormalTok{ b }\KeywordTok{in}\NormalTok{ bins]} +\NormalTok{win\_rates\_by\_bin }\OperatorTok{=}\NormalTok{ games.groupby(}\StringTok{"bin"}\NormalTok{)[}\StringTok{"WON"}\NormalTok{].mean()} + +\CommentTok{\# plot the graph of averages} +\NormalTok{sns.stripplot(data}\OperatorTok{=}\NormalTok{games, x}\OperatorTok{=}\StringTok{"GOAL\_DIFF"}\NormalTok{, y}\OperatorTok{=}\StringTok{"WON"}\NormalTok{, orient}\OperatorTok{=}\StringTok{"h"}\NormalTok{, alpha}\OperatorTok{=}\FloatTok{0.5}\NormalTok{, hue}\OperatorTok{=}\StringTok{\textquotesingle{}WON\textquotesingle{}}\NormalTok{) }\CommentTok{\# alpha makes the points transparent} +\NormalTok{plt.plot(win\_rates\_by\_bin.index, win\_rates\_by\_bin, c}\OperatorTok{=}\StringTok{"tab:red"}\NormalTok{)} +\NormalTok{plt.gca().invert\_yaxis()}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-5-output-1.pdf} + +Interesting: our result is certainly not like the straight line produced +by finding the graph of averages for a linear relationship. We can make +two observations: + +\begin{itemize} +\tightlist +\item + All predictions on our line are between 0 and 1 +\item + The predictions are \textbf{non-linear}, following a rough ``S'' shape +\end{itemize} + +Let's think more about what we've just done. + +To find the average \(y\) value for each bin, we computed: + +\[\frac{1 \text{(\# Y = 1 in bin)} + 0 \text{(\# Y = 0 in bin)}}{\text{\# datapoints in bin}} = \frac{\text{\# Y = 1 in bin}}{\text{\# datapoints in bin}} = P(\text{Y = 1} | \text{bin})\] + +This is simply the probability of a datapoint in that bin belonging to +Class 1! This aligns with our observation from earlier: all of our +predictions lie between 0 and 1, just as we would expect for a +probability. + +Our graph of averages was really modeling the probability, \(p\), that a +datapoint belongs to Class 1, or essentially that \(\text{Y = 1}\) for a +particular value of \(\text{x}\). + +\[ p = P(Y = 1 | \text{ x} )\] + +In logistic regression, we have a new modeling goal. We want to model +the \textbf{probability that a particular datapoint belongs to Class 1} +by approximating the S-shaped curve we plotted above. However, we've +only learned about linear modeling techniques like Linear Regression and +OLS. + +\subsection{Handling Non-Linear +Output}\label{handling-non-linear-output} + +Fortunately for us, we're already well-versed with a technique to model +non-linear relationships -- we can apply non-linear transformations like +log or exponents to make a non-linear relationship more linear. Recall +the steps we've applied previously: + +\begin{itemize} +\tightlist +\item + Transform the variables until we linearize their relationship +\item + Fit a linear model to the transformed variables +\item + ``Undo'' our transformations to identify the underlying relationship + between the original variables +\end{itemize} + +In past examples, we used the bulge diagram to help us decide what +transformations may be useful. The S-shaped curve we saw above, however, +looks nothing like any relationship we've seen in the past. 
We'll need +to think carefully about what transformations will linearize this curve. + +\subsubsection{1. Odds}\label{odds} + +Let's consider our eventual goal: determining if we should predict a +Class of 0 or 1 for each datapoint. Rephrased, we want to decide if it +seems more ``likely'' that the datapoint belongs to Class 0 or to Class +1. One way of deciding this is to see which class has the higher +predicted probability for a given datapoint. The \textbf{odds} is +defined as the probability of a datapoint belonging to Class 1 divided +by the probability of it belonging to Class 0. + +\[\text{odds} = \frac{P(Y=1|x)}{P(Y=0|x)} = \frac{p}{1-p}\] + +If we plot the odds for each input \texttt{"GOAL\_DIFF"} (\(x\)), we see +something that looks more promising. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{p }\OperatorTok{=}\NormalTok{ win\_rates\_by\_bin} +\NormalTok{odds }\OperatorTok{=}\NormalTok{ p}\OperatorTok{/}\NormalTok{(}\DecValTok{1}\OperatorTok{{-}}\NormalTok{p) } + +\NormalTok{plt.plot(odds.index, odds)} +\NormalTok{plt.xlabel(}\StringTok{"x"}\NormalTok{)} +\NormalTok{plt.ylabel(}\VerbatimStringTok{r"Odds $= \textbackslash{}frac}\SpecialCharTok{\{p\}}\VerbatimStringTok{\{1{-}p\}$"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-6-output-1.pdf} + +\subsubsection{2. Log}\label{log} + +It turns out that the relationship between our input +\texttt{"GOAL\_DIFF"} and the odds is roughly exponential! Let's +linearize the exponential by taking the logarithm (as suggested by the +\href{https://ds100.org/course-notes/visualization_2/visualization_2.html\#tukey-mosteller-bulge-diagram}{Tukey-Mosteller +Bulge Diagram}). As a reminder, you should assume that any logarithm in +Data 100 is the base \(e\) natural logarithm unless told otherwise. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\NormalTok{log\_odds }\OperatorTok{=}\NormalTok{ np.log(odds)} +\NormalTok{plt.plot(odds.index, log\_odds, c}\OperatorTok{=}\StringTok{"tab:green"}\NormalTok{)} +\NormalTok{plt.xlabel(}\StringTok{"x"}\NormalTok{)} +\NormalTok{plt.ylabel(}\VerbatimStringTok{r"Log{-}Odds $= \textbackslash{}log\{\textbackslash{}frac}\SpecialCharTok{\{p\}}\VerbatimStringTok{\{1{-}p}\SpecialCharTok{\}\}}\VerbatimStringTok{$"}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-7-output-1.pdf} + +\subsubsection{3. Putting it Together}\label{putting-it-together} + +We see something promising -- the relationship between the log-odds and +our input feature is approximately linear. This means that we can use a +linear model to describe the relationship between the log-odds and +\(x\). In other words: + +\begin{align} +\log{(\frac{p}{1-p})} &= \theta_0 + \theta_1 x_1 + ... + \theta_p x_p\\ +&= x^{\top} \theta +\end{align} + +Here, we use \(x^{\top}\) to represent an observation in our dataset, +stored as a row vector. You can imagine it as a single row in our design +matrix. \(x^{\top} \theta\) indicates a linear combination of the +features for this observation (just as we used in multiple linear +regression). + +We're in good shape! We have now derived the following relationship: + +\[\log{(\frac{p}{1-p})} = x^{\top} \theta\] + +Remember that our goal is to predict the probability of a datapoint +belonging to Class 1, \(p\). 
Let's rearrange this relationship to +uncover the original relationship between \(p\) and our input data, +\(x^{\top}\). + +\begin{align} +\log{(\frac{p}{1-p})} &= x^T \theta\\ +\frac{p}{1-p} &= e^{x^T \theta}\\ +p &= (1-p)e^{x^T \theta}\\ +p &= e^{x^T \theta}- p e^{x^T \theta}\\ +p(1 + e^{x^T \theta}) &= e^{x^T \theta} \\ +p &= \frac{e^{x^T \theta}}{1+e^{x^T \theta}}\\ +p &= \frac{1}{1+e^{-x^T \theta}}\\ +\end{align} + +Phew, that was a lot of algebra. What we've uncovered is the +\textbf{logistic regression model} to predict the probability of a +datapoint \(x^{\top}\) belonging to Class 1. If we plot this +relationship for our data, we see the S-shaped curve from earlier! + +\begin{Shaded} +\begin{Highlighting}[] +\CommentTok{\# We\textquotesingle{}ll discuss the \textasciigrave{}LogisticRegression\textasciigrave{} class next time} +\NormalTok{xs }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\FloatTok{0.3}\NormalTok{, }\FloatTok{0.3}\NormalTok{)} + +\NormalTok{logistic\_model }\OperatorTok{=}\NormalTok{ lm.LogisticRegression(C}\OperatorTok{=}\DecValTok{20}\NormalTok{)} +\NormalTok{logistic\_model.fit(X, Y)} +\NormalTok{predicted\_prob }\OperatorTok{=}\NormalTok{ logistic\_model.predict\_proba(xs[:, np.newaxis])[:, }\DecValTok{1}\NormalTok{]} + +\NormalTok{sns.stripplot(data}\OperatorTok{=}\NormalTok{games, x}\OperatorTok{=}\StringTok{"GOAL\_DIFF"}\NormalTok{, y}\OperatorTok{=}\StringTok{"WON"}\NormalTok{, orient}\OperatorTok{=}\StringTok{"h"}\NormalTok{, alpha}\OperatorTok{=}\FloatTok{0.5}\NormalTok{)} +\NormalTok{plt.plot(xs, predicted\_prob, c}\OperatorTok{=}\StringTok{"k"}\NormalTok{, lw}\OperatorTok{=}\DecValTok{3}\NormalTok{, label}\OperatorTok{=}\StringTok{"Logistic regression model"}\NormalTok{)} +\NormalTok{plt.plot(win\_rates\_by\_bin.index, win\_rates\_by\_bin, lw}\OperatorTok{=}\DecValTok{2}\NormalTok{, c}\OperatorTok{=}\StringTok{"tab:red"}\NormalTok{, label}\OperatorTok{=}\StringTok{"Graph of averages"}\NormalTok{)} +\NormalTok{plt.legend(loc}\OperatorTok{=}\StringTok{"upper left"}\NormalTok{)} +\NormalTok{plt.gca().invert\_yaxis()}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-8-output-1.pdf} + +The S-shaped curve is formally known as the \textbf{sigmoid function} +and is typically denoted by \(\sigma\). + +\[\sigma(t) = \frac{1}{1+e^{-t}}\] + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Properties of the Sigmoid}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Reflection/Symmetry: + \[1-\sigma(t) = \frac{e^{-t}}{1+e^{-t}}=\sigma(-t)\] +\item + Inverse: \[t=\sigma^{-1}(p)=\log{(\frac{p}{1-p})}\] +\item + Derivative: + \[\frac{d}{dz} \sigma(t) = \sigma(t) (1-\sigma(t))=\sigma(t)\sigma(-t)\] +\item + Domain: \(-\infty < t < \infty\) +\item + Range: \(0 < \sigma(t) < 1\) +\end{itemize} + +\end{tcolorbox} + +In the context of our modeling process, the sigmoid is considered an +\textbf{activation function}. It takes in a linear combination of the +features and applies a non-linear transformation. 
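+
+To make these properties concrete, here is a small \texttt{numpy} sketch
+(our own check, not part of the original lecture code) that verifies the
+reflection, inverse, and derivative identities listed above numerically:
+
+\begin{verbatim}
+import numpy as np
+
+def sigmoid(t):
+    # sigma(t) = 1 / (1 + e^(-t))
+    return 1 / (1 + np.exp(-t))
+
+t = np.linspace(-5, 5, 101)
+p = sigmoid(t)
+
+# Reflection/symmetry: 1 - sigma(t) = sigma(-t)
+assert np.allclose(1 - p, sigmoid(-t))
+
+# Inverse: the log-odds recovers t from p = sigma(t)
+assert np.allclose(np.log(p / (1 - p)), t)
+
+# Derivative: sigma'(t) = sigma(t) * (1 - sigma(t)), compared against a
+# central finite-difference approximation
+eps = 1e-6
+approx_deriv = (sigmoid(t + eps) - sigmoid(t - eps)) / (2 * eps)
+assert np.allclose(approx_deriv, p * (1 - p), atol=1e-6)
+\end{verbatim}
+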
\section{The Logistic Regression
Model}\label{the-logistic-regression-model}

To predict a probability using the logistic regression model, we:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Compute a linear combination of the features, \(x^{\top}\theta\)
\item
  Apply the sigmoid activation function, \(\sigma(x^{\top} \theta)\).
\end{enumerate}

Our predicted probabilities are of the form
\(P(Y=1|x) = p = \frac{1}{1+e^{-x^T \theta}} = \frac{1}{1+e^{-(\theta_0 + \theta_1 x_1 + \theta_2 x_2 + \ldots + \theta_p x_p)}}\)

An important note: despite its name, logistic regression is used for
\emph{classification} tasks, not regression tasks. In Data 100, we
always apply logistic regression with the goal of classifying data.

To summarize our logistic regression modeling workflow, our main
takeaways from this section are:

\begin{itemize}
\tightlist
\item
  Assume the log-odds is a linear combination of \(x\) and \(\theta\)
\item
  Fit the ``S'' curve as best as possible
\item
  The curve models the probability \(P(Y=1 | x)\)
\end{itemize}

Putting this together, we know that the estimated probability that the
response is 1 given the features \(x\) is equal to the logistic function
\(\sigma()\) at the value \(x^{\top}\theta\):

\begin{align}
\hat{P}_{\theta}(Y = 1 | x) = \frac{1}{1 + e^{-x^{\top}\theta}}
\end{align}

More commonly, the logistic regression model is written as:

\begin{align}
\hat{P}_{\theta}(Y = 1 | x) = \sigma(x^{\top}\theta)
\end{align}

\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Properties of the Logistic Model}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]

Consider a logistic regression model with one feature and an intercept
term:

\begin{align}
p = P(Y = 1 | x) = \frac{1}{1+e^{-(\theta_0 + \theta_1 x)}}
\end{align}

Properties:

\begin{itemize}
\tightlist
\item
  \(\theta_0\) controls the position of the curve along the horizontal
  axis
\item
  The magnitude of \(\theta_1\) controls the ``steepness'' of the
  sigmoid
\item
  The sign of \(\theta_1\) controls the orientation of the curve
\end{itemize}

\end{tcolorbox}

\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-color-frame, left=2mm, breakable, rightrule=.15mm, bottomrule=.15mm, opacityback=0, toprule=.15mm, leftrule=.75mm, arc=.35mm, colback=white]

\vspace{-3mm}\textbf{Example Calculation}\vspace{3mm}

Suppose we want to predict the probability that a team wins a game,
given \texttt{"GOAL\_DIFF"} (first feature) and the number of free
throws (second feature). Let's say we fit a logistic regression model
(with no intercept) using the training data and estimate the optimal
parameters. Now we want to predict the probability that a new team will
win their game.

\begin{align}
\hat{\theta}^{\top} &= \begin{bmatrix}0.1 & -0.5\end{bmatrix} \\
x^{\top} &= \begin{bmatrix}15 & 1\end{bmatrix}
\end{align}

\begin{align}
\hat{P}_{\hat{\theta}}(Y = 1 | x) = \sigma(x^{\top}\hat{\theta}) = \sigma(0.1 \cdot 15 + (-0.5) \cdot 1) = \sigma(1) = \frac{1}{1+e^{-1}} \approx 0.7311
\end{align}

We see that the response is more likely to be 1 than 0, indicating that
a reasonable prediction is \(\hat{y} = 1\). 
We'll dive deeper into this +in the next lecture. + +\end{tcolorbox} + +\section{Cross-Entropy Loss}\label{cross-entropy-loss} + +To quantify the error of our logistic regression model, we'll need to +define a new loss function. + +\subsection{Why Not MSE?}\label{why-not-mse} + +You may wonder: why not use our familiar mean squared error? It turns +out that the MSE is not well suited for logistic regression. To see why, +let's consider a simple, artificially generated \texttt{toy} dataset +with just one feature (this will be easier to work with than the more +complicated \texttt{games} data). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{toy\_df }\OperatorTok{=}\NormalTok{ pd.DataFrame(\{} + \StringTok{"x"}\NormalTok{: [}\OperatorTok{{-}}\DecValTok{4}\NormalTok{, }\OperatorTok{{-}}\DecValTok{2}\NormalTok{, }\OperatorTok{{-}}\FloatTok{0.5}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{3}\NormalTok{, }\DecValTok{5}\NormalTok{],} + \StringTok{"y"}\NormalTok{: [}\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{]\})} +\NormalTok{toy\_df.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& x & y \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & -4.0 & 0 \\ +1 & -2.0 & 0 \\ +2 & -0.5 & 1 \\ +3 & 1.0 & 0 \\ +4 & 3.0 & 1 \\ +\end{longtable} + +We'll construct a basic logistic regression model with only one feature +and no intercept term. Our predicted probabilities take the form: + +\[p=P(Y=1|x)=\frac{1}{1+e^{-\theta_1 x}}\] + +In the cell below, we plot the MSE for our model on the data. + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ sigmoid(z):} + \ControlFlowTok{return} \DecValTok{1}\OperatorTok{/}\NormalTok{(}\DecValTok{1}\OperatorTok{+}\NormalTok{np.e}\OperatorTok{**}\NormalTok{(}\OperatorTok{{-}}\NormalTok{z))} + +\KeywordTok{def}\NormalTok{ mse\_on\_toy\_data(theta):} +\NormalTok{ p\_hat }\OperatorTok{=}\NormalTok{ sigmoid(toy\_df[}\StringTok{\textquotesingle{}x\textquotesingle{}}\NormalTok{] }\OperatorTok{*}\NormalTok{ theta)} + \ControlFlowTok{return}\NormalTok{ np.mean((toy\_df[}\StringTok{\textquotesingle{}y\textquotesingle{}}\NormalTok{] }\OperatorTok{{-}}\NormalTok{ p\_hat)}\OperatorTok{**}\DecValTok{2}\NormalTok{)} + +\NormalTok{thetas }\OperatorTok{=}\NormalTok{ np.linspace(}\OperatorTok{{-}}\DecValTok{15}\NormalTok{, }\DecValTok{5}\NormalTok{, }\DecValTok{100}\NormalTok{)} +\NormalTok{plt.plot(thetas, [mse\_on\_toy\_data(theta) }\ControlFlowTok{for}\NormalTok{ theta }\KeywordTok{in}\NormalTok{ thetas])} +\NormalTok{plt.title(}\StringTok{"MSE on toy classification data"}\NormalTok{)} +\NormalTok{plt.xlabel(}\VerbatimStringTok{r\textquotesingle{}$\textbackslash{}theta\_1$\textquotesingle{}}\NormalTok{)} +\NormalTok{plt.ylabel(}\StringTok{\textquotesingle{}MSE\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-10-output-1.pdf} + +This looks nothing like the parabola we found when plotting the MSE of a +linear regression model! In particular, we can identify two flaws with +using the MSE for logistic regression: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + The MSE loss surface is \emph{non-convex}. There is both a global + minimum and a (barely perceptible) local minimum in the loss surface + above. 
This means that there is the risk of gradient descent
  converging on the local minimum of the loss surface, missing the true
  optimum parameter \(\theta_1\).
\item
  Squared loss is \emph{bounded} for a classification task. Recall that
  each true \(y\) has a value of either 0 or 1. This means that even if
  our model makes the worst possible prediction (e.g.~predicting \(p=0\)
  for \(y=1\)), the squared loss for an observation will be no greater
  than 1: \[(y-p)^2=(1-0)^2=1\] The MSE does not strongly penalize poor
  predictions.
\end{enumerate}

\subsection{Motivating Cross-Entropy
Loss}\label{motivating-cross-entropy-loss}

Suffice it to say, we don't want to use the MSE when working with
logistic regression. Instead, we'll consider what kind of behavior we
would \emph{like} to see in a loss function.

Let \(y\) be the binary label (it can either be 0 or 1), and \(p\) be
the model's predicted probability of the label \(y\) being 1.

\begin{itemize}
\tightlist
\item
  When the true \(y\) is 1, we should incur \emph{low} loss when the
  model predicts large \(p\)
\item
  When the true \(y\) is 0, we should incur \emph{high} loss when the
  model predicts large \(p\)
\end{itemize}

In other words, our loss function should behave differently depending on
the value of the true class, \(y\).

The \textbf{cross-entropy loss} incorporates this changing behavior. We
will use it throughout our work on logistic regression. Below, we write
out the cross-entropy loss for a \emph{single} datapoint (no averages
just yet).

\[\text{Cross-Entropy Loss} = \begin{cases}
  -\log{(p)} & \text{if } y=1 \\
  -\log{(1-p)} & \text{if } y=0
\end{cases}\]

Why does this (seemingly convoluted) loss function ``work''? Let's break
it down.

\begin{longtable}[]{@{}
  >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}}
  >{\raggedright\arraybackslash}p{(\columnwidth - 2\tabcolsep) * \real{0.5000}}@{}}
\toprule\noalign{}
\begin{minipage}[b]{\linewidth}\raggedright
When \(y=1\)
\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright
When \(y=0\)
\end{minipage} \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
As \(p \rightarrow 0\), loss approaches \(\infty\) & As
\(p \rightarrow 0\), loss approaches 0 \\
As \(p \rightarrow 1\), loss approaches 0 & As \(p \rightarrow 1\), loss
approaches \(\infty\) \\
\end{longtable}

All good -- we are seeing the behavior we want for our logistic
regression model.

The piecewise function we outlined above is difficult to optimize: we
don't want to constantly ``check'' which form of the loss function we
should be using at each step of choosing the optimal model parameters.
We can re-express cross-entropy loss in a more convenient way:

\[\text{Cross-Entropy Loss} = -\left(y\log{(p)}+(1-y)\log{(1-p)}\right)\]

By setting \(y\) to 0 or 1, we see that this new form of cross-entropy
loss gives us the same behavior as the original formulation. Another way
to think about this is that in either scenario (\(y\) being equal to 0 or
1), only one of the cross-entropy loss terms is activated, which gives
us a convenient way to combine the two independent loss functions. 
+ +When \(y=1\): + +\begin{align} +\text{CE} &= -\left((1)\log{(p)}+(1-1)\log{(1-p)}\right)\\ +&= -\log{(p)} +\end{align} + +When \(y=0\): + +\begin{align} +\text{CE} &= -\left((0)\log{(p)}+(1-0)\log{(1-p)}\right)\\ +&= -\log{(1-p)} +\end{align} + +The empirical risk of the logistic regression model is then the mean +cross-entropy loss across all datapoints in the dataset. When fitting +the model, we want to determine the model parameter \(\theta\) that +leads to the lowest mean cross-entropy loss possible. + +\[ +\begin{align} +R(\theta) &= - \frac{1}{n} \sum_{i=1}^n \left(y_i\log{(p_i)}+(1-y_i)\log{(1-p_i)}\right) \\ +&= - \frac{1}{n} \sum_{i=1}^n \left(y_i\log{\sigma(X_i^{\top}\theta)}+(1-y_i)\log{(1-\sigma(X_i^{\top}\theta))}\right) +\end{align} +\] + +The optimization problem is therefore to find the estimate +\(\hat{\theta}\) that minimizes \(R(\theta)\): + +\[ +\hat{\theta} = \underset{\theta}{\arg\min} - \frac{1}{n} \sum_{i=1}^n \left(y_i\log{(\sigma(X_i^{\top}\theta))}+(1-y_i)\log{(1-\sigma(X_i^{\top}\theta))}\right) +\] + +Plotting the cross-entropy loss surface for our \texttt{toy} dataset +gives us a more encouraging result -- our loss function is now convex. +This means we can optimize it using gradient descent. Computing the +gradient of the logistic model is fairly challenging, so we'll let +\texttt{sklearn} take care of this for us. You won't need to compute the +gradient of the logistic model in Data 100. + +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ cross\_entropy(y, p\_hat):} + \ControlFlowTok{return} \OperatorTok{{-}}\NormalTok{ y }\OperatorTok{*}\NormalTok{ np.log(p\_hat) }\OperatorTok{{-}}\NormalTok{ (}\DecValTok{1} \OperatorTok{{-}}\NormalTok{ y) }\OperatorTok{*}\NormalTok{ np.log(}\DecValTok{1} \OperatorTok{{-}}\NormalTok{ p\_hat)} + +\KeywordTok{def}\NormalTok{ mean\_cross\_entropy\_on\_toy\_data(theta):} +\NormalTok{ p\_hat }\OperatorTok{=}\NormalTok{ sigmoid(toy\_df[}\StringTok{\textquotesingle{}x\textquotesingle{}}\NormalTok{] }\OperatorTok{*}\NormalTok{ theta)} + \ControlFlowTok{return}\NormalTok{ np.mean(cross\_entropy(toy\_df[}\StringTok{\textquotesingle{}y\textquotesingle{}}\NormalTok{], p\_hat))} + +\NormalTok{plt.plot(thetas, [mean\_cross\_entropy\_on\_toy\_data(theta) }\ControlFlowTok{for}\NormalTok{ theta }\KeywordTok{in}\NormalTok{ thetas], color }\OperatorTok{=} \StringTok{\textquotesingle{}green\textquotesingle{}}\NormalTok{)} +\NormalTok{plt.ylabel(}\VerbatimStringTok{r\textquotesingle{}Mean Cross{-}Entropy Loss($\textbackslash{}theta$)\textquotesingle{}}\NormalTok{)} +\NormalTok{plt.xlabel(}\VerbatimStringTok{r\textquotesingle{}$\textbackslash{}theta$\textquotesingle{}}\NormalTok{)}\OperatorTok{;} +\end{Highlighting} +\end{Shaded} + +\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-11-output-1.pdf} + +\section{Maximum Likelihood +Estimation}\label{maximum-likelihood-estimation} + +It may have seemed like we pulled cross-entropy loss out of thin air. +How did we know that taking the negative logarithms of our probabilities +would work so well? It turns out that cross-entropy loss is justified by +probability theory. + +The following section is out of scope, but is certainly an interesting +read! + +\subsection{Building Intuition: The Coin +Flip}\label{building-intuition-the-coin-flip} + +To build some intuition for logistic regression, let's look at an +introductory example to classification: the coin flip. Suppose we +observe some outcomes of a coin flip (1 = Heads, 0 = Tails). 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{flips }\OperatorTok{=}\NormalTok{ [}\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{1}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{, }\DecValTok{0}\NormalTok{]} +\NormalTok{flips} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +[0, 0, 1, 1, 1, 1, 0, 0, 0, 0] +\end{verbatim} + +A reasonable model is to assume all flips are IID (independent and +identically distributed). In other words, each flip has the same +probability of returning a 1 (or heads). Let's define a parameter +\(\theta\), the probability that the next flip is a heads. We will use +this parameter to inform our decision for \(\hat y\) (predicting either +0 or 1) of the next flip. If +\(\theta \ge 0.5, \hat y = 1, \text{else } \hat y = 0\). + +You may be inclined to say \(0.5\) is the best choice for \(\theta\). +However, notice that we made no assumption about the coin itself. The +coin may be biased, so we should make our decision based only on the +data. We know that exactly \(\frac{4}{10}\) of the flips were heads, so +we might guess \(\hat \theta = 0.4\). In the next section, we will +mathematically prove why this is the best possible estimate. + +\subsection{Likelihood of Data}\label{likelihood-of-data} + +Let's call the result of the coin flip a random variable \(Y\). This is +a Bernoulli random variable with two outcomes. \(Y\) has the following +distribution: + +\[P(Y = y) = \begin{cases} + p, \text{if } y=1\\ + 1 - p, \text{if } y=0 + \end{cases} \] + +\(p\) is unknown to us. But we can find the \(p\) that makes the data we +observed the most \emph{likely}. + +The probability of observing 4 heads and 6 tails follows the binomial +distribution. + +\[\binom{10}{4} (p)^4 (1-p)^6\] + +We define the \textbf{likelihood} of obtaining our observed data as a +quantity \emph{proportional} to the probability above. To find it, +simply multiply the probabilities of obtaining each coin flip. + +\[(p)^{4} (1-p)^6\] + +The technique known as \textbf{maximum likelihood estimation} finds the +\(p\) that maximizes the above likelihood. You can find this maximum by +taking the derivative of the likelihood, but we'll provide a more +intuitive graphical solution. 

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{thetas }\OperatorTok{=}\NormalTok{ np.linspace(}\DecValTok{0}\NormalTok{, }\DecValTok{1}\NormalTok{)}
\NormalTok{plt.plot(thetas, (thetas}\OperatorTok{**}\DecValTok{4}\NormalTok{)}\OperatorTok{*}\NormalTok{(}\DecValTok{1}\OperatorTok{{-}}\NormalTok{thetas)}\OperatorTok{**}\DecValTok{6}\NormalTok{)}
\NormalTok{plt.xlabel(}\VerbatimStringTok{r"$\textbackslash{}theta$"}\NormalTok{)}
\NormalTok{plt.ylabel(}\StringTok{"Likelihood"}\NormalTok{)}\OperatorTok{;}
\end{Highlighting}
\end{Shaded}

\includegraphics{logistic_regression_1/logistic_reg_1_files/figure-pdf/cell-13-output-1.pdf}

More generally, a Bernoulli(\(p\)) random variable \(Y\) takes the value
1 with probability \(p\) and the value 0 with probability \(1 - p\):

\[Y = \begin{cases}
        1, & \text{with probability }  p\\
        0, & \text{with probability }  1 - p
    \end{cases} \]

Equivalently, its probability distribution can be written in a compact
way:

\[P(Y=y) = p^y(1-p)^{1-y}\]

\begin{itemize}
\tightlist
\item
  When \(y = 1\), this reads \(P(Y=y) = p\)
\item
  When \(y = 0\), this reads \(P(Y=y) = (1-p)\)
\end{itemize}

In our example, a Bernoulli random variable is analogous to a single
data point (e.g., one instance of a basketball team winning or losing a
game). All together, our \texttt{games} data consists of many IID
Bernoulli(\(p\)) random variables. To find the likelihood of independent
events in succession, simply multiply their likelihoods.

\[\prod_{i=1}^{n} p^{y_i} (1-p)^{1-y_i}\]

As with the coin example, we want to find the parameter \(p\) that
maximizes this likelihood. Earlier, we gave an intuitive graphical
solution, but let's take the derivative of the likelihood to find this
maximum.

At first glance, this derivative will be complicated! We will have to
use the product rule, followed by the chain rule. Instead, we can make
an observation that simplifies the problem.

Finding the \(p\) that maximizes
\[\prod_{i=1}^{n} p^{y_i} (1-p)^{1-y_i}\] is equivalent to finding the
\(p\) that maximizes \[\text{log}(\prod_{i=1}^{n} p^{y_i} (1-p)^{1-y_i})\]

This is because \(\text{log}\) is a strictly \emph{increasing} function,
so it won't change the location of the maximum or minimum of the
function it is applied to. From \(\text{log}\) properties,
\(\text{log}(a*b)\) = \(\text{log}(a) + \text{log}(b)\). We can apply
this to our equation above to get:

\[\underset{p}{\text{argmax}} \sum_{i=1}^{n} \text{log}(p^{y_i} (1-p)^{1-y_i})\]

\[= \underset{p}{\text{argmax}} \sum_{i=1}^{n} (\text{log}(p^{y_i}) + \text{log}((1-p)^{1-y_i}))\]

\[= \underset{p}{\text{argmax}} \sum_{i=1}^{n} (y_i\text{log}(p) + (1-y_i)\text{log}(1-p))\]

We can add a constant factor of \(\frac{1}{n}\) out front. It won't
affect the \(p\) that maximizes our likelihood.

\[=\underset{p}{\text{argmax}} \frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(p) + (1-y_i)\text{log}(1-p)\]

One last ``trick'' we can do is change this to a minimization problem by
negating the result. This works because negating a \emph{concave}
function yields a \emph{convex} function, and maximizing a function is
equivalent to minimizing its negation.

\[= \underset{p}{\text{argmin}} -\frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(p) + (1-y_i)\text{log}(1-p)\]

A quick numerical check of this minimization for our ten coin flips is
shown below.
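
As a quick numerical check (a minimal sketch assuming only
\texttt{numpy}), we can evaluate this negative mean log-likelihood for
the ten coin flips over a grid of candidate values of \(p\) and confirm
that the minimizer is the empirical proportion of heads, \(0.4\):

\begin{verbatim}
import numpy as np

flips = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 0])  # 4 heads, 6 tails

def neg_mean_log_likelihood(p, y):
    # -1/n * sum( y*log(p) + (1-y)*log(1-p) ) for a single shared p
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Evaluate over a grid of candidate probabilities (excluding 0 and 1,
# where the logarithm is undefined)
grid = np.linspace(0.01, 0.99, 99)
losses = [neg_mean_log_likelihood(p, flips) for p in grid]

print(grid[np.argmin(losses)])  # approximately 0.4
\end{verbatim}
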
Now suppose instead that our observations are independent but each has
its own probability \(p_i\). Then, we would want to find the
\(p_1, p_2, \dots, p_n\) that maximize
\[\prod_{i=1}^{n} p_i^{y_i} (1-p_i)^{1-y_i}\]

Setting up and simplifying the optimization problem as we did above, we
ultimately want to find:

\[= \underset{p_1, \dots, p_n}{\text{argmin}} -\frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(p_i) + (1-y_i)\text{log}(1-p_i)\]

For logistic regression, \(p_i = \sigma(x_i^{\top}\theta)\). Plugging
that in, we get:

\[= \underset{\theta}{\text{argmin}} -\frac{1}{n} \sum_{i=1}^{n} y_i\text{log}(\sigma(x_i^{\top}\theta)) + (1-y_i)\text{log}(1-\sigma(x_i^{\top}\theta))\]

This is exactly our average cross-entropy loss minimization problem from
before!

Why did we do all this complicated math? We have shown that
\emph{minimizing} cross-entropy loss is equivalent to \emph{maximizing}
the likelihood of the training data.

\begin{itemize}
\tightlist
\item
  By minimizing cross-entropy loss, we are choosing the model parameters
  that are ``most likely'' for the data we observed.
\end{itemize}

Note that this is under the assumption that all data is drawn
independently from the same logistic regression model with parameter
\(\theta\). In fact, many of the model + loss combinations we've seen
can be motivated using MLE (e.g., OLS, Ridge Regression, etc.). In
probability and ML classes, you'll get the chance to explore MLE
further.

\bookmarksetup{startatroot}

\chapter{Logistic Regression II}\label{logistic-regression-ii}

\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]

\begin{itemize}
\tightlist
\item
  Apply decision rules to make a classification
\item
  Learn when logistic regression works well and when it does not
\item
  Introduce new metrics for model performance
\end{itemize}

\end{tcolorbox}

Today, we will continue studying the logistic regression model. We'll
discuss decision boundaries that help inform the classification of a
particular prediction, and we'll learn about linear separability.
Picking up from last lecture's discussion of cross-entropy loss, we'll
study a few of its pitfalls and learn potential remedies. We will also
provide an implementation of \texttt{sklearn}'s logistic regression
model. Lastly, we'll return to decision rules and discuss metrics that
allow us to determine our model's performance in different scenarios.

This will introduce us to the process of \textbf{thresholding} -- a
technique used to \emph{classify} data from our model's predicted
probabilities, or \(P(Y=1|x)\). In doing so, we'll focus on how these
thresholding decisions affect the behavior of our model, introduce
various evaluation metrics useful for binary classification, and apply
them to our study of logistic regression.

\section{Decision Boundaries}\label{decision-boundaries}

In logistic regression, we model the \emph{probability} that a datapoint
belongs to Class 1.

Last week, we developed the logistic regression model to predict that
probability, but we never actually made any \emph{classifications} --
decisions about whether a datapoint should be assigned to Class 0 or
Class 1. 
+ +\[ p = P(Y=1 | x) = \frac{1}{1 + e^{-x^{\top}\theta}}\] + +A \textbf{decision rule} tells us how to interpret the output of the +model to make a decision on how to classify a datapoint. We commonly +make decision rules by specifying a \textbf{threshold}, \(T\). If the +predicted probability is greater than or equal to \(T\), predict Class +1. Otherwise, predict Class 0. + +\[\hat y = \text{classify}(x) = \begin{cases} + 1, & P(Y=1|x) \ge T\\ + 0, & \text{otherwise } + \end{cases}\] + +The threshold is often set to \(T = 0.5\), but \emph{not always}. We'll +discuss why we might want to use other thresholds \(T \neq 0.5\) later +in this lecture. + +Using our decision rule, we can define a \textbf{decision boundary} as +the ``line'' that splits the data into classes based on its features. +For logistic regression, since we are working in \(p\) dimensions, the +decision boundary is a \textbf{hyperplane} -- a linear combination of +the features in \(p\)-dimensions -- and we can recover it from the final +logistic regression model. For example, if we have a model with 2 +features (2D), we have \(\theta = [\theta_0, \theta_1, \theta_2]\) +including the intercept term, and we can solve for the decision boundary +like so: + +\[ +\begin{align} +T &= \frac{1}{1 + e^{-(\theta_0 + \theta_1 * \text{feature1} + \theta_2 * \text{feature2})}} \\ +1 + e^{-(\theta_0 + \theta_1 \cdot \text{feature1} + \theta_2 \cdot \text{feature2})} &= \frac{1}{T} \\ +e^{-(\theta_0 + \theta_1 \cdot \text{feature1} + \theta_2 \cdot \text{feature2})} &= \frac{1}{T} - 1 \\ +\theta_0 + \theta_1 \cdot \text{feature1} + \theta_2 \cdot \text{feature2} &= -\log(\frac{1}{T} - 1) +\end{align} +\] + +For a model with 2 features, the decision boundary is a line in terms of +its features. To make it easier to visualize, we've included an example +of a 1-dimensional and a 2-dimensional decision boundary below. Notice +how the decision boundary predicted by our logistic regression model +perfectly separates the points into two classes. Here the color is the +\emph{predicted} class, rather than the true class. + +In real life, however, that is often not the case, and we often see some +overlap between points of different classes across the decision +boundary. The \emph{true} classes of the 2D data are shown below: + +As you can see, the decision boundary predicted by our logistic +regression does not perfectly separate the two classes. There's a +``muddled'' region near the decision boundary where our classifier +predicts the wrong class. What would the data have to look like for the +classifier to make perfect predictions? + +\section{Linear Separability and +Regularization}\label{linear-separability-and-regularization} + +A classification dataset is said to be \textbf{linearly separable} if +there exists a hyperplane \textbf{among input features \(x\)} that +separates the two classes \(y\). + +Linear separability in 1D can be found with a rugplot of a single +feature where a point perfectly separates the classes (Remember that in +1D, our decision boundary is just a point). For example, notice how the +plot on the bottom left is linearly separable along the vertical line +\(x=0\). However, no such line perfectly separates the two classes on +the bottom right. + +This same definition holds in higher dimensions. If there are two +features, the separating hyperplane must exist in two dimensions (any +line of the form \(y=mx+b\)). We can visualize this using a scatter +plot. + +This sounds great! 
When the dataset is linearly separable, a logistic
regression classifier can perfectly assign datapoints into classes. Can
it achieve 0 cross-entropy loss?

\[-(y \log(p) + (1 - y) \log(1 - p))\]

Cross-entropy loss is 0 if \(p = 1\) when \(y = 1\), and \(p = 0\) when
\(y = 0\). Consider a simple model with one feature and no intercept.

\[P_{\theta}(Y = 1|x) = \sigma(\theta x) = \frac{1}{1 + e^{-\theta x}}\]

What \(\theta\) will achieve 0 loss if we train on the datapoint
\(x = 1, y = 1\)? We would want \(p = 1\), which occurs when
\(\theta \rightarrow \infty\).

However, (unexpected) complications may arise. When data is linearly
separable, the optimal model parameters \textbf{diverge} to
\(\pm \infty\). \emph{The sigmoid can never output exactly 0 or 1}, so
no finite optimal \(\theta\) exists. This can be a problem when using
gradient descent to fit the model. Consider a simple, linearly separable
``toy'' dataset with two datapoints.

Let's also visualize the mean cross-entropy loss along with the
direction of the gradient (how this loss surface is calculated is out of
scope).

It's nearly impossible to see, but the plateau to the right is slightly
tilted. Because gradient descent follows the tilted loss surface
downwards, it never converges.

The diverging weights cause the model to be \textbf{overconfident}. Say
we add a new point \((x, y) = (-0.5, 1)\). Following the behavior above,
our model will incorrectly predict \(p=0\), and thus, \(\hat y = 0\).

The loss incurred by this misclassified point is infinite.

\[-(y\log{(p)} + (1-y)\log{(1-p)}) = -\log{(0)} = \infty\]

Thus, diverging weights (\(|\theta| \rightarrow \infty\)) occur with
\textbf{linearly separable} data. ``Overconfidence'', as shown here, is
a particularly dangerous version of overfitting.

\subsection{Regularized Logistic
Regression}\label{regularized-logistic-regression}

To avoid large weights and infinite loss (particularly on linearly
separable data), we use regularization. The same principles apply as
with linear regression: make sure to standardize your features first.

For example, \(L2\) (Ridge) Logistic Regression takes on the form:

\[\min_{\theta} -\frac{1}{n} \sum_{i=1}^{n} (y_i \text{log}(\sigma(X_i^T\theta)) + (1-y_i)\text{log}(1-\sigma(X_i^T\theta))) + \lambda \sum_{j=1}^{d} \theta_j^2\]

Now, let us compare the loss functions of unregularized and regularized
logistic regression.

As we can see, \(L2\) regularization helps us prevent diverging weights
and guards against ``overconfidence.''

\texttt{sklearn}'s logistic regression defaults to \(L2\) regularization
and \texttt{C=1.0}; \texttt{C} is the inverse of \(\lambda\):
\[C = \frac{1}{\lambda}\] Setting \texttt{C} to a large value, for
example, \texttt{C=300.0}, results in minimal regularization.

\begin{verbatim}
# sklearn defaults
model = LogisticRegression(penalty = 'l2', C = 1.0, ...)
model.fit(X, Y)
\end{verbatim}

Note that in Data 100, we only use \texttt{sklearn} to fit logistic
regression models. There is no closed-form solution for the optimal
\(\theta\) vector, and the gradient is a little messy (see the bonus
section below for details).

From here, the \texttt{.predict} function returns the predicted class
\(\hat y\) of the point, as in the short sketch below.
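
As a minimal sketch of this relationship (it assumes a feature matrix
\texttt{X} and 0/1 labels \texttt{Y} like the ones used earlier; the
variable names are illustrative), \texttt{.predict} agrees with
thresholding \texttt{.predict\_proba} at 0.5:

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

# Assumes X (features) and Y (0/1 labels) are already defined,
# as in the games example from earlier in these notes.
model = LogisticRegression()       # default: L2 penalty, C = 1.0
model.fit(X, Y)

p_hat = model.predict_proba(X)[:, 1]          # P(Y = 1 | x) for each row
manual_classes = (p_hat >= 0.5).astype(int)   # apply the default threshold

# .predict applies the same 0.5 cutoff internally
# (ties at exactly 0.5 are a negligible edge case)
print(np.array_equal(model.predict(X), manual_classes))  # True
\end{verbatim}
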
In the simple binary case where the threshold
is 0.5,

\[\hat y = \begin{cases}
        1, & P(Y=1|x) \ge 0.5\\
        0, & \text{otherwise }
    \end{cases}\]

\section{Performance Metrics}\label{performance-metrics}

You might be thinking: if we've already introduced cross-entropy loss,
why do we need additional ways of assessing how well our models perform?
In linear regression, we made numerical predictions and used a loss
function to determine how ``good'' these predictions were. In logistic
regression, our ultimate goal is to classify data -- we are much more
concerned with whether or not each datapoint was assigned the correct
class using the decision rule. As such, we are interested in the
\emph{quality} of classifications, not the predicted probabilities.

The most basic evaluation metric is \textbf{accuracy}, that is, the
proportion of correctly classified points.

\[\text{accuracy} = \frac{\# \text{ of points classified correctly}}{\# \text{ of total points}}\]

Translated to code:

\begin{verbatim}
def accuracy(X, Y):
    return np.mean(model.predict(X) == Y)

model.score(X, Y) # built-in accuracy function
\end{verbatim}

You can find the \texttt{sklearn} documentation
\href{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html\#sklearn.linear_model.LogisticRegression.score}{here}.

However, accuracy is not always a great metric for classification. To
understand why, let's consider a classification problem with 100 emails
where only 5 are truly spam, and the remaining 95 are truly ham. We'll
investigate two models where accuracy is a poor metric.

\begin{itemize}
\tightlist
\item
  \textbf{Model 1}: Our first model classifies every email as non-spam.
  The model's accuracy is high (\(\frac{95}{100} = 0.95\)), but it
  doesn't detect any spam emails. Despite the high accuracy, this is a
  bad model.
\item
  \textbf{Model 2}: The second model classifies every email as spam. The
  accuracy is low (\(\frac{5}{100} = 0.05\)), but the model correctly
  labels every spam email. Unfortunately, it also misclassifies every
  non-spam email.
\end{itemize}

As this example illustrates, accuracy is not always a good metric for
classification, particularly when your data could exhibit class
imbalance (e.g., very few 1's compared to 0's).

\subsection{Types of Classification}\label{types-of-classification}

There are 4 different classifications that our model might make:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  \textbf{True positive}: correctly classify a positive point as being
  positive (\(y=1\) and \(\hat{y}=1\))
\item
  \textbf{True negative}: correctly classify a negative point as being
  negative (\(y=0\) and \(\hat{y}=0\))
\item
  \textbf{False positive}: incorrectly classify a negative point as
  being positive (\(y=0\) and \(\hat{y}=1\))
\item
  \textbf{False negative}: incorrectly classify a positive point as
  being negative (\(y=1\) and \(\hat{y}=0\))
\end{enumerate}

These classifications can be concisely summarized in a \textbf{confusion
matrix}.

An easy way to remember this terminology is as follows:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Look at the second word in the phrase. \emph{Positive} means a
  prediction of 1. \emph{Negative} means a prediction of 0.
\item
  Look at the first word in the phrase. \emph{True} means our prediction
  was correct. \emph{False} means it was incorrect. 
+\end{enumerate} + +We can now write the accuracy calculation as +\[\text{accuracy} = \frac{TP + TN}{n}\] + +In \texttt{sklearn}, we use the following syntax to plot a confusion +matrix: + +\begin{verbatim} +from sklearn.metrics import confusion_matrix +cm = confusion_matrix(Y_true, Y_pred) +\end{verbatim} + +\subsection{Accuracy, Precision, and +Recall}\label{accuracy-precision-and-recall} + +The purpose of our discussion of the confusion matrix was to motivate +better performance metrics for classification problems with class +imbalance - namely, precision and recall. + +\textbf{Precision} is defined as + +\[\text{precision} = \frac{\text{TP}}{\text{TP + FP}}\] + +Precision answers the question: ``Of all observations that were +predicted to be \(1\), what proportion was actually \(1\)?'' It measures +how accurate the classifier is when its predictions are positive. + +\textbf{Recall} (or \textbf{sensitivity}) is defined as + +\[\text{recall} = \frac{\text{TP}}{\text{TP + FN}}\] + +Recall aims to answer: ``Of all observations that were actually \(1\), +what proportion was predicted to be \(1\)?'' It measures how many +positive predictions were missed. + +Here's a helpful graphic that summarizes our discussion above. + +\subsection{Example Calculation}\label{example-calculation-1} + +In this section, we will calculate the accuracy, precision, and recall +performance metrics for our earlier spam classification example. As a +reminder, we had 100 emails, 5 of which were spam. We designed two +models: + +\begin{itemize} +\tightlist +\item + Model 1: Predict that every email is \emph{non-spam} +\item + Model 2: Predict that every email is \emph{spam} +\end{itemize} + +\subsubsection{Model 1}\label{model-1} + +First, let's begin by creating the confusion matrix. + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.2778}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.2778}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3889}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +0 +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +1 +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & True Negative: 95 & False Positive: 0 \\ +1 & False Negative: 5 & True Positive: 0 \\ +\end{longtable} + +\[\text{accuracy} = \frac{95}{100} = 0.95\] +\[\text{precision} = \frac{0}{0 + 0} = \text{undefined}\] +\[\text{recall} = \frac{0}{0 + 5} = 0\] + +Notice how our precision is undefined because we never predicted class +\(1\). Our recall is 0 for the same reason -- the numerator is 0 (we had +no positive predictions). 
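
These quantities can also be computed with \texttt{sklearn}. The short
sketch below rebuilds Model 1's predictions for the 100-email example
and checks the confusion matrix, accuracy, and recall;
\texttt{precision\_score} is omitted because, as noted above, precision
is undefined here (\texttt{sklearn} would emit a warning and report 0).

\begin{verbatim}
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, recall_score

# 100 emails: 5 true spam (label 1), 95 true ham (label 0)
y_true = np.array([1] * 5 + [0] * 95)
y_pred = np.zeros(100, dtype=int)   # Model 1: predict ham for everything

print(confusion_matrix(y_true, y_pred))   # [[95  0]
                                          #  [ 5  0]]
print(accuracy_score(y_true, y_pred))     # 0.95
print(recall_score(y_true, y_pred))       # 0.0
\end{verbatim}
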
+ +\subsubsection{Model 2}\label{model-2} + +The confusion matrix for Model 2 is: + +\begin{longtable}[]{@{} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.2778}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.2778}} + >{\raggedright\arraybackslash}p{(\columnwidth - 4\tabcolsep) * \real{0.3889}}@{}} +\toprule\noalign{} +\begin{minipage}[b]{\linewidth}\raggedright +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +0 +\end{minipage} & \begin{minipage}[b]{\linewidth}\raggedright +1 +\end{minipage} \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & True Negative: 0 & False Positive: 95 \\ +1 & False Negative: 0 & True Positive: 5 \\ +\end{longtable} + +\[\text{accuracy} = \frac{5}{100} = 0.05\] +\[\text{precision} = \frac{5}{5 + 95} = 0.05\] +\[\text{recall} = \frac{5}{5 + 0} = 1\] + +Our precision is low because we have many false positives, and our +recall is perfect - we correctly classified all spam emails (we never +predicted class \(0\)). + +\subsection{Precision vs.~Recall}\label{precision-vs.-recall} + +Precision (\(\frac{\text{TP}}{\text{TP} + \textbf{ FP}}\)) penalizes +false positives, while recall +(\(\frac{\text{TP}}{\text{TP} + \textbf{ FN}}\)) penalizes false +negatives. In fact, precision and recall are \emph{inversely related}. +This is evident in our second model -- we observed a high recall and low +precision. Usually, there is a tradeoff in these two (most models can +either minimize the number of FP or FN; and in rare cases, both). + +The specific performance metric(s) to prioritize depends on the context. +In many medical settings, there might be a much higher cost to missing +positive cases. For instance, in our breast cancer example, it is more +costly to misclassify malignant tumors (false negatives) than it is to +incorrectly classify a benign tumor as malignant (false positives). In +the case of the latter, pathologists can conduct further studies to +verify malignant tumors. As such, we should minimize the number of false +negatives. This is equivalent to maximizing recall. + +\subsection{Three More Metrics}\label{three-more-metrics} + +The \textbf{True Positive Rate (TPR)} is defined as + +\[\text{true positive rate} = \frac{\text{TP}}{\text{TP + FN}}\] + +You'll notice this is equivalent to \emph{recall}. In the context of our +spam email classifier, it answers the question: ``What proportion of +spam did I mark correctly?''. We'd like this to be close to \(1\). + +The \textbf{True Negative Rate (TNR)} is defined as + +\[\text{true negative rate} = \frac{\text{TN}}{\text{TN + FP}}\] + +Another word for TNR is \emph{specificity}. This answers the question: +``What proportion of ham did I mark correctly?''. We'd like this to be +close to \(1\). + +The \textbf{False Positive Rate (FPR)} is defined as + +\[\text{false positive rate} = \frac{\text{FP}}{\text{FP + TN}}\] + +FPR is equal to \emph{1 - specificity}, or \emph{1 - TNR}. This answers +the question: ``What proportion of regular email did I mark as spam?''. +We'd like this to be close to \(0\). + +As we increase threshold \(T\), both TPR and FPR decrease. We've plotted +this relationship below for some model on a \texttt{toy} dataset. + +\section{Adjusting the Classification +Threshold}\label{adjusting-the-classification-threshold} + +One way to minimize the number of FP vs.~FN (equivalently, maximizing +precision vs.~recall) is by adjusting the classification threshold +\(T\). 
+ +\[\hat y = \begin{cases} + 1, & P(Y=1|x) \ge T\\ + 0, & \text{otherwise } + \end{cases}\] + +The default threshold in \texttt{sklearn} is \(T = 0.5\). As we increase +the threshold \(T\), we ``raise the standard'' of how confident our +classifier needs to be to predict 1 (i.e., ``positive''). + +As you may notice, the choice of threshold \(T\) impacts our +classifier's performance. + +\begin{itemize} +\tightlist +\item + High \(T\): Most predictions are \(0\). + + \begin{itemize} + \tightlist + \item + Lots of false negatives + \item + Fewer false positives + \end{itemize} +\item + Low \(T\): Most predictions are \(1\). + + \begin{itemize} + \tightlist + \item + Lots of false positives + \item + Fewer false negatives + \end{itemize} +\end{itemize} + +In fact, we can choose a threshold \(T\) based on our desired number, or +proportion, of false positives and false negatives. We can do so using a +few different tools. We'll touch on two of the most important ones in +Data 100. + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Precision-Recall Curve (PR Curve) +\item + ``Receiver Operating Characteristic'' Curve (ROC Curve) +\end{enumerate} + +\subsection{Precision-Recall Curves}\label{precision-recall-curves} + +A \textbf{Precision-Recall Curve (PR Curve)} is an alternative to the +ROC curve that displays the relationship between precision and recall +for various threshold values. In this curve, we test out many different +possible thresholds and for each one we compute the precision and recall +of the classifier. + +Let's first consider how precision and recall change as a function of +the threshold \(T\). We know this quite well from earlier -- precision +will generally increase, and recall will decrease. + +Displayed below is the PR Curve for the same \texttt{toy} dataset. +Notice how threshold values increase as we move to the left. + +Once again, the perfect classifier will resemble the orange curve, this +time, facing the opposite direction. + +We want our PR curve to be as close to the ``top right'' of this graph +as possible. Again, we use the AUC to determine ``closeness'', with the +perfect classifier exhibiting an AUC = 1 (and the worst with an AUC = +0.5). + +\subsection{The ROC Curve}\label{the-roc-curve} + +The ``Receiver Operating Characteristic'' Curve (\textbf{ROC Curve}) +plots the tradeoff between FPR and TPR. Notice how the far-left of the +curve corresponds to higher threshold \(T\) values. At lower thresholds, +the FPR and TPR are both high as there are many positive predictions +while at higher thresholds the FPR and TPR are both low as there are +fewer positive predictions. + +The ``perfect'' classifier is the one that has a TPR of 1, and FPR of 0. +This is achieved at the top-left of the plot below. More generally, it's +ROC curve resembles the curve in orange. + +We want our model to be as close to this orange curve as possible. How +do we quantify ``closeness''? + +We can compute the \textbf{area under curve (AUC)} of the ROC curve. +Notice how the perfect classifier has an AUC = 1. The closer our model's +AUC is to 1, the better it is. + +\subsubsection{(Extra) What is the ``worst'' AUC, and why is it +0.5?}\label{extra-what-is-the-worst-auc-and-why-is-it-0.5} + +On the other hand, a terrible model will have an AUC closer to 0.5. +Random predictors randomly predict \(P(Y = 1 | x)\) to be uniformly +between 0 and 1. 
This indicates the classifier is not able to
distinguish between positive and negative classes, and thus, randomly
predicts one of the two.

We can also illustrate this by comparing different thresholds and seeing
their points on the ROC curve.

\section{(Bonus) Gradient Descent for Logistic
Regression}\label{bonus-gradient-descent-for-logistic-regression}

Let's define the following terms: \[
\begin{align}
t_i &= \phi(x_i)^T \theta \\
p_i &= \sigma(t_i) \\
t_i &= \log(\frac{p_i}{1 - p_i}) \\
1 - \sigma(t_i) &= \sigma(-t_i) \\
\frac{d}{dt} \sigma(t) &= \sigma(t) \sigma(-t)
\end{align}
\]

Now, we can simplify the cross-entropy loss \[
\begin{align}
y_i \log(p_i) + (1 - y_i) \log(1 - p_i) &= y_i \log(\frac{p_i}{1 - p_i}) + \log(1 - p_i) \\
&= y_i \phi(x_i)^T \theta + \log(\sigma(-\phi(x_i)^T \theta))
\end{align}
\]

Hence, the optimal \(\hat{\theta}\) is
\[\text{argmin}_{\theta} - \frac{1}{n} \sum_{i=1}^n (y_i \phi(x_i)^T \theta + \log(\sigma(-\phi(x_i)^T \theta)))\]

We want to minimize
\[L(\theta) = - \frac{1}{n} \sum_{i=1}^n (y_i \phi(x_i)^T \theta + \log(\sigma(-\phi(x_i)^T \theta)))\]

So we take the derivative \[
\begin{align}
\triangledown_{\theta} L(\theta) &= - \frac{1}{n} \sum_{i=1}^n \triangledown_{\theta} \left(y_i \phi(x_i)^T \theta\right) + \triangledown_{\theta} \log(\sigma(-\phi(x_i)^T \theta)) \\
&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) + \triangledown_{\theta} \log(\sigma(-\phi(x_i)^T \theta)) \\
&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) + \frac{1}{\sigma(-\phi(x_i)^T \theta)} \triangledown_{\theta} \sigma(-\phi(x_i)^T \theta) \\
&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) + \frac{\sigma(-\phi(x_i)^T \theta)\sigma(\phi(x_i)^T \theta)}{\sigma(-\phi(x_i)^T \theta)} \triangledown_{\theta} \left(-\phi(x_i)^T \theta\right) \\
&= - \frac{1}{n} \sum_{i=1}^n y_i \phi(x_i) - \sigma(\phi(x_i)^T \theta)\phi(x_i) \\
&= - \frac{1}{n} \sum_{i=1}^n \left(y_i - \sigma(\phi(x_i)^T \theta)\right)\phi(x_i)
\end{align}
\]

Setting the derivative equal to 0 and solving for \(\hat{\theta}\), we
find that there's no general analytic solution. Therefore, we must solve
using numeric methods.

\subsection{Gradient Descent Update
Rule}\label{gradient-descent-update-rule}

\[\theta^{(0)} \leftarrow \text{initial vector (random, zeros, ...)} \]

For \(\tau\) from 0 to convergence:
\[ \theta^{(\tau + 1)} \leftarrow \theta^{(\tau)} - \rho(\tau)\left( \frac{1}{n} \sum_{i=1}^n \triangledown_{\theta} L_i(\theta) \mid_{\theta = \theta^{(\tau)}}\right) \]

\subsection{Stochastic Gradient Descent Update
Rule}\label{stochastic-gradient-descent-update-rule}

\[\theta^{(0)} \leftarrow \text{initial vector (random, zeros, ...)} \]

For \(\tau\) from 0 to convergence, let \(B\) be a random subset of
indices. 
+\[ \theta^{(\tau + 1)} \leftarrow \theta^{(\tau)} - \rho(\tau)\left( \frac{1}{|B|} \sum_{i \in B} \triangledown_{\theta} L_i(\theta) \mid_{\theta = \theta^{(\tau)}}\right) \] + +\bookmarksetup{startatroot} + +\chapter{PCA I}\label{pca-i} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Discuss the dimensionality of a dataset and strategies for + dimensionality reduction +\item + Derive and carry out the procedure of PCA +\end{itemize} + +\end{tcolorbox} + +So far in this course, we've focused on \textbf{supervised learning} +techniques that create a function to map inputs (features) to labelled +outputs. Regression and classification are two main examples, where the +output value of regression is \emph{quantitative} while the output value +of classification is \emph{categorical}. + +Today, we'll introduce an \textbf{unsupervised learning} technique +called PCA. Unlike supervised learning, unsupervised learning is applied +to \emph{unlabeled} data. Because we have features but no labels, we aim +to identify patterns in those features. + +\section{Visualization (Revisited)}\label{visualization-revisited} + +Visualization can help us identify clusters or patterns in our dataset, +and it can give us an intuition about our data and how to clean it for +the model. For this demo, we'll return to the MPG dataset from Lecture +19 and see how far we can push visualization for multiple features. + +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ scipy }\ImportTok{as}\NormalTok{ sp} +\ImportTok{import}\NormalTok{ plotly.express }\ImportTok{as}\NormalTok{ px} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{mpg }\OperatorTok{=}\NormalTok{ sns.load\_dataset(}\StringTok{"mpg"}\NormalTok{).dropna()} +\NormalTok{mpg.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llllllllll@{}} +\toprule\noalign{} +& mpg & cylinders & displacement & horsepower & weight & acceleration & +model\_year & origin & name \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 18.0 & 8 & 307.0 & 130.0 & 3504 & 12.0 & 70 & usa & chevrolet +chevelle malibu \\ +1 & 15.0 & 8 & 350.0 & 165.0 & 3693 & 11.5 & 70 & usa & buick skylark +320 \\ +2 & 18.0 & 8 & 318.0 & 150.0 & 3436 & 11.0 & 70 & usa & plymouth +satellite \\ +3 & 16.0 & 8 & 304.0 & 150.0 & 3433 & 12.0 & 70 & usa & amc rebel sst \\ +4 & 17.0 & 8 & 302.0 & 140.0 & 3449 & 10.5 & 70 & usa & ford torino \\ +\end{longtable} + +We can plot one feature as a histogram to see it's distribution. Since +we only plot one feature, we consider this a 1-dimensional plot. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{px.histogram(mpg, x}\OperatorTok{=}\StringTok{"displacement"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +We can also visualize two features (2-dimensional scatter plot): + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{px.scatter(mpg, x}\OperatorTok{=}\StringTok{"displacement"}\NormalTok{, y}\OperatorTok{=}\StringTok{"horsepower"}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +Three features (3-dimensional scatter plot): + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.scatter\_3d(mpg, x}\OperatorTok{=}\StringTok{"displacement"}\NormalTok{, y}\OperatorTok{=}\StringTok{"horsepower"}\NormalTok{, z}\OperatorTok{=}\StringTok{"weight"}\NormalTok{,} +\NormalTok{ width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{800}\NormalTok{)} +\NormalTok{fig.update\_traces(marker}\OperatorTok{=}\BuiltInTok{dict}\NormalTok{(size}\OperatorTok{=}\DecValTok{3}\NormalTok{))} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +We can even push to 4 features using a 3D scatter plot and a colorbar: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.scatter\_3d(mpg, x}\OperatorTok{=}\StringTok{"displacement"}\NormalTok{, } +\NormalTok{ y}\OperatorTok{=}\StringTok{"horsepower"}\NormalTok{, } +\NormalTok{ z}\OperatorTok{=}\StringTok{"weight"}\NormalTok{, } +\NormalTok{ color}\OperatorTok{=}\StringTok{"model\_year"}\NormalTok{,} +\NormalTok{ width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{800}\NormalTok{, } +\NormalTok{ opacity}\OperatorTok{=}\FloatTok{.7}\NormalTok{)} +\NormalTok{fig.update\_traces(marker}\OperatorTok{=}\BuiltInTok{dict}\NormalTok{(size}\OperatorTok{=}\DecValTok{5}\NormalTok{))} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +Visualizing 5 features is also possible if we make the scatter dots +unique to the datapoint's origin. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.scatter\_3d(mpg, x}\OperatorTok{=}\StringTok{"displacement"}\NormalTok{, } +\NormalTok{ y}\OperatorTok{=}\StringTok{"horsepower"}\NormalTok{, } +\NormalTok{ z}\OperatorTok{=}\StringTok{"weight"}\NormalTok{, } +\NormalTok{ color}\OperatorTok{=}\StringTok{"model\_year"}\NormalTok{,} +\NormalTok{ size}\OperatorTok{=}\StringTok{"mpg"}\NormalTok{,} +\NormalTok{ symbol}\OperatorTok{=}\StringTok{"origin"}\NormalTok{,} +\NormalTok{ width}\OperatorTok{=}\DecValTok{900}\NormalTok{, height}\OperatorTok{=}\DecValTok{800}\NormalTok{, } +\NormalTok{ opacity}\OperatorTok{=}\FloatTok{.7}\NormalTok{)} +\CommentTok{\# hide color scale legend on the plotly fig} +\NormalTok{fig.update\_layout(coloraxis\_showscale}\OperatorTok{=}\VariableTok{False}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +However, adding more features to our visualization can make our plot +look messy and uninformative, and it can also be near impossible if we +have a large number of features. 
The problem is that many datasets come
with more than 5 features: hundreds, even. Is it still possible to
visualize all those features?

\section{Dimensionality}\label{dimensionality}

Suppose we have a dataset of:

\begin{itemize}
\tightlist
\item
  \(N\) observations (datapoints/rows)
\item
  \(d\) attributes (features/columns)
\end{itemize}

Let's ``rename'' this in terms of linear algebra so that we can be
clearer with our wording. Using linear algebra, we can view our matrix
as:

\begin{itemize}
\tightlist
\item
  \(N\) row vectors in a \(d\)-dimensional space, OR
\item
  \(d\) column vectors in an \(N\)-dimensional space
\end{itemize}

The \textbf{intrinsic dimension} of a dataset is the \emph{minimal set
of dimensions} needed to approximately represent the data. In linear
algebra terms, it is the \textbf{dimension of the column space} of a
matrix, or the number of linearly independent columns in a matrix; this
is equivalently called the \textbf{rank} of a matrix.

In the examples below, Dataset 1 has 2 dimensions because it has 2
linearly independent columns. Similarly, Dataset 2 has 3 dimensions
because it has 3 linearly independent columns.

What about Dataset 4 below?

It may be tempting to say that it has 4 dimensions, but the
\texttt{Weight\ (lbs)} column is actually just a linear transformation
of the \texttt{Weight\ (kg)} column. Thus, no new information is
captured, and the matrix of our dataset has a (column) rank of 3!
Therefore, despite having 4 columns, we still say that this data is
3-dimensional.

Plotting the weight columns together reveals the key visual intuition.
While the two columns visually span a 2D space as a line, the data does
not deviate at all from that singular line. This means that one of the
weight columns is redundant! Even given the option to cover the whole 2D
space, the data below does not. It might as well not have this
dimension, which is why we still do not consider the data below to span
more than 1 dimension.

What happens when there are outliers? Below, we've added one outlier
point to the dataset above, and just that one point is enough to change
the rank of the matrix from 1 to 2 dimensions. However, the data is
still \emph{approximately} 1-dimensional.

\textbf{Dimensionality reduction} is generally an \textbf{approximation
of the original data} that's achieved by \textbf{projecting} the data
onto a desired dimension. In the example below, our original datapoints
(blue dots) are 2-dimensional. We have a few choices if we want to
project them down to 1 dimension: project them onto the \(x\)-axis
(left), project them onto the \(y\)-axis (middle), or project them onto
a line \(mx + b\) (right). The resulting datapoints after the projection
are shown in red. Which projection do you think is better? How can we
calculate that?

In general, we want the projection that is the best approximation of
the original data (the graph on the right). In other words, we want the
projection that \emph{captures the most variance} of the original data.
In the next section, we'll see how this is calculated.

\section{Matrix Decomposition
(Factorization)}\label{matrix-decomposition-factorization}

One linear technique for dimensionality reduction is matrix
decomposition, which is closely tied to matrix multiplication. In this
section, we will decompose our data matrix \(X\) into a
lower-dimensional matrix \(Z\) that approximately recovers the original
data when multiplied by \(W\); a minimal numerical sketch of this idea
is shown below.
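
Here is that minimal numerical sketch (an illustrative example, assuming
only \texttt{numpy}): a small matrix whose third column is a linear
combination of the first two can be written exactly as the product of an
\(n \times 2\) matrix \(Z\) and a \(2 \times 3\) matrix \(W\).

\begin{verbatim}
import numpy as np

# A 5 x 3 data matrix whose third column equals 2*(col 0) + 1*(col 1),
# so its intrinsic dimension (rank) is 2, not 3.
rng = np.random.default_rng(42)
A = rng.normal(size=(5, 2))
X = np.column_stack([A[:, 0], A[:, 1], 2 * A[:, 0] + A[:, 1]])

print(np.linalg.matrix_rank(X))   # 2

# One possible factorization X = Z @ W with k = 2
Z = A                              # n x k lower-dimensional representation
W = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])    # k x d matrix mapping back to the features

print(np.allclose(Z @ W, X))       # True: the product recovers X exactly
\end{verbatim}
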
+ +First, consider the matrix multiplication example below: + +\begin{itemize} +\tightlist +\item + For table 1, each row of the fruits matrix represents one bowl of + fruit; for example, the first bowl/row has 2 apples, 2 lemons, and 2 + melons. +\item + For table 2, each column of the dollars matrix represents the cost of + fruit at a store; for example, the first store/column charges 2 + dollars for an apple, 1 dollar for a lemon, and 4 dollars for a melon. +\item + The output is the cost of each bowl at each store. +\end{itemize} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{Linear Algebra Review: Matrix Multiplication}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +In general, there are two ways to interpret matrix multiplication: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Each \emph{datapoint} in our output is a \emph{dot product} between a + row in the data matrix and a column in the operations matrix. In this + view, we perform multiple linear operations on the data +\item + Each \emph{column} in our output is a \emph{linear transformation} of + the original columns based on a column in the transformation matrix +\end{enumerate} + +We will use the second interpretation to link matrix multiplication with +matrix decomposition, where we receive a lower dimensional +representation of data along with a transformation matrix. + +\end{tcolorbox} + +\textbf{Matrix decomposition} (a.k.a \textbf{matrix factorization}) is +the opposite of matrix multiplication. Instead of multiplying two +matrices, we want to \emph{decompose} a single matrix into 2 separate +matrices. Just like with real numbers, there are infinite ways to +decompose a matrix into a product of two matrices. For example, \(9.9\) +can be decomposed as \(1.1 * 9\), \(3.3 * 3.3\), \(1 * 9.9\), etc. +Additionally, the sizes of the 2 decomposed matrices can vary +drastically. In the example below, the first factorization (top) +multiplies a \(3x2\) matrix by a \(2x3\) matrix while the second +factorization (bottom) multiplies a \(3x3\) matrix by a \(3x3\) matrix; +both result in the original matrix on the right. + +We can even expand the \(3x3\) matrices to \(3x4\) and \(4x3\) (shown +below as the factorization on top), but this defeats the point of +dimensionality reduction since we're adding more ``useless'' dimensions. +On the flip side, we also can't reduce the dimension to \(3x1\) and +\(1x3\) (shown below as the factorization on the bottom); since the rank +of the original matrix is greater than 1, this decomposition will not +result in the original matrix. + +In practice, we often work with datasets containing many features, so we +usually want to construct decompositions where the dimensionality is +below the rank of the original matrix. While this does not recover the +data exactly, we can still provide \emph{approximate reconstructions} of +the matrix. + +In the next section, we will discuss a method to automatically and +approximately factorize data. This avoids redundant features and makes +computation easier because we can train on less data. 
Since some +approximations are better than others, we will also discuss how the +method helps us capture a lot of information in a low number of +dimensions. + +\section{Principal Component Analysis +(PCA)}\label{principal-component-analysis-pca} + +In PCA, our goal is to transform observations from high-dimensional data +down to low dimensions (often 2, as most visualizations are 2D) through +linear transformations. In other words, we want to find a linear +transformation that creates a low-dimension representation that captures +as much of the original data's \emph{total variance} as possible. + +We often perform PCA during the Exploratory Data Analysis (EDA) stage of +our data science lifecycle when we don't know what model to use. It +helps us with: + +\begin{itemize} +\tightlist +\item + Visually identifying clusters of similar observations in high + dimensions. +\item + Removing irrelevant dimensions if we suspect that the dataset is + inherently low rank. For example, if the columns are collinear, there + are many attributes, but only a few mostly determine the rest through + linear associations. +\item + Creating a transformed dataset of decorrelated features. +\end{itemize} + +There are two equivalent ways of framing PCA: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Finding directions of \textbf{maximum variability} in the data. +\item + Finding the low dimensional (rank) matrix factorization that + \textbf{best approximates the data}. +\end{enumerate} + +To execute the first approach of \textbf{variance maximization} framing +(more common), we can find the variances of each attribute with +\texttt{np.var} and then keep the \(k\) attributes with the highest +variance. However, this approach limits us to work with attributes +individually; it cannot resolve collinearity, and we cannot combine +features. + +The second approach uses PCA to construct \textbf{principal components} +with the most variance in the data (even higher than the first approach) +using \textbf{linear combinations of features}. We'll describe the +procedure in the next section. + +\subsection{PCA Procedure (Overview)}\label{pca-procedure-overview} + +To perform PCA on a matrix: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \textbf{Center} the data matrix by subtracting the mean of each + attribute column. +\item + To find the \(i\)-th \textbf{principal component}, \(v_i\): + + \begin{enumerate} + \def\labelenumii{\arabic{enumii}.} + \tightlist + \item + \(v\) is a \textbf{unit vector} that linearly combines the + attributes. + \item + \(v\) gives a one-dimensional projection of the data. + \item + \(v\) is chosen to \textbf{maximize the variance} along the + projection onto \(v\). This is equivalent to \textbf{minimizing the + sum of squared distances} between each point and its projection onto + \(v\). + \item + Choose \(v\) such that it is orthogonal to all previous principal + components. + \end{enumerate} +\end{enumerate} + +The \(k\) principal components capture the most variance of any +\(k\)-dimensional reduction of the data matrix. + +In practice, however, we don't carry out the procedures in step 2 +because they take too long to compute. Instead, we use singular value +decomposition (SVD) to find all principal components efficiently. 
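
As a preview of what this procedure looks like in code, the sketch below
uses \texttt{scikit-learn}'s \texttt{PCA} on a small synthetic dataset.
Treat it as an illustrative aside: \texttt{scikit-learn} is not used in
this note's demos, and the next lecture computes the same quantities by
hand with SVD. The library centers the data internally, finds the
principal components, and reports the fraction of the total variance
each one captures.

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic data: the third feature is (almost) a linear combination
# of the first two, so the intrinsic dimension is roughly 2.
base = rng.normal(size=(100, 2))
third = base @ np.array([0.5, -0.3]) + 0.01 * rng.normal(size=100)
X = np.column_stack([base, third])

pca = PCA(n_components=2)      # keep k = 2 principal components
Z = pca.fit_transform(X)       # latent representation of the data

print(pca.components_)                # rows are the principal components
print(pca.explained_variance_ratio_)  # variance captured by each component
\end{verbatim}

Because the third synthetic feature is nearly a linear combination of
the first two, the first two principal components should capture almost
all of the variance.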

\subsection{Deriving PCA as Error
Minimization}\label{deriving-pca-as-error-minimization}

In this section, we will derive PCA keeping the following goal in mind:
minimize the reconstruction loss for our matrix factorization model. You
are not expected to be able to redo this derivation, but understanding
it may help with future assignments.

Given a matrix \(X\) with \(n\) rows and \(d\) columns, our goal is to
find its best decomposition such that \[X \approx Z W\] where \(Z\) has
\(n\) rows and \(k\) columns, and \(W\) has \(k\) rows and \(d\)
columns.

To measure the accuracy of our reconstruction, we define the
\textbf{reconstruction loss} below, where \(X_i\) is the \(i\)-th row
vector of \(X\), and \(Z_i\) is the \(i\)-th row vector of \(Z\):

\[L(Z, W) = \frac{1}{n}\sum_{i=1}^{n}||X_i - Z_iW||^2\]

There are many solutions to the above, so let's constrain our model such
that \(W\) is a \textbf{row-orthonormal matrix} (i.e.~\(WW^T=I\)) where
the rows of \(W\) are our principal components.

In our derivation, let's first work with the case where \(k=1\). Here
\(Z\) will be an \(n \times 1\) vector and \(W\) will be a
\(1 \times d\) vector.

\[\begin{aligned}
L(z,w) &= \frac{1}{n}\sum_{i=1}^{n}(X_i - z_{i}w)(X_i - z_{i}w)^T \\
&= \frac{1}{n}\sum_{i=1}^{n}(X_{i}X_{i}^T - 2z_{i}X_{i}w^T + z_{i}^{2}ww^T) & \text{(expand the loss)} \\
&= \frac{1}{n}\sum_{i=1}^{n}(-2z_{i}X_{i}w^T + z_{i}^{2}) & \text{(first term is constant, and }ww^T=1\text{ by orthonormality)} \\
\end{aligned}\]

Now, we can take the derivative with respect to \(z_i\).
\[\begin{aligned}
\frac{\partial{L(Z,W)}}{\partial{z_i}} &= \frac{1}{n}(-2X_{i}w^T + 2z_{i}) \\
z_i &= X_iw^T & \text{(setting the derivative equal to 0 and solving for }z_i\text{)}\end{aligned}\]

We can now substitute our solution for \(z_i\) in our loss function:

\[\begin{aligned}
L(z,w) &= \frac{1}{n}\sum_{i=1}^{n}(-2z_{i}X_{i}w^T + z_{i}^{2}) \\
L(z=X_iw^T,w) &= \frac{1}{n}\sum_{i=1}^{n}(-2X_iw^TX_{i}w^T + (X_iw^T)^{2}) \\
&= \frac{1}{n}\sum_{i=1}^{n}(-X_iw^TX_{i}w^T) \\
&= \frac{1}{n}\sum_{i=1}^{n}(-wX_{i}^TX_{i}w^T) \\
&= -w\frac{1}{n}\sum_{i=1}^{n}(X_i^TX_{i})w^T \\
&= -w\Sigma w^T
\end{aligned}\]

Now, we need to minimize our loss with respect to \(w\). Since there is
a negative sign, one way to drive the loss down would be to make \(w\)
arbitrarily large. However, we also have the orthonormality constraint
\(ww^T=1\). To incorporate this constraint into the equation, we can add
a Lagrange multiplier, \(\lambda\). Note that Lagrange multipliers are
out of scope for Data 100.

\[
L(w,\lambda) = -w\Sigma w^T + \lambda(ww^T-1)
\]

Taking the derivative with respect to \(w\), \[\begin{aligned}
\frac{\partial{L(w,\lambda)}}{\partial{w}} &= -2\Sigma w^T + 2\lambda w^T \\
2\Sigma w^T - 2\lambda w^T &= 0 & \text{(setting the derivative equal to 0)} \\
\Sigma w^T &= \lambda w^T \\
\end{aligned}\]

This result implies that:

\begin{itemize}
\tightlist
\item
  \(w\) is a \textbf{unitary eigenvector} of the covariance matrix.
  This means that \(||w||^2 = ww^T = 1\).
\item
  The error is minimized when \(w\) is the eigenvector with the largest
  eigenvalue \(\lambda\).
\end{itemize}

This derivation can be applied inductively to the next (second)
principal component (not shown).

The final takeaway from this derivation is that the \textbf{principal
components} are the \textbf{eigenvectors} with the \textbf{largest
eigenvalues} of the \textbf{covariance matrix}. These are the
\textbf{directions} of the \textbf{maximum variance} of the data. 
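
As a quick numerical sanity check of this takeaway (a sketch on
synthetic data, not part of the official demos), we can compare the
\(k=1\) reconstruction error achieved by the top eigenvector of the
covariance matrix against the error achieved by random unit vectors; the
eigenvector should always do at least as well.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.5, 0.1],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.2]])
X = X - X.mean(axis=0)                    # center the data

Sigma = X.T @ X / len(X)                  # covariance matrix
eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
w = eigvecs[:, -1]                        # eigenvector w/ largest eigenvalue

def reconstruction_error(X, w):
    z = X @ w                             # optimal z_i = X_i w^T
    return np.mean(np.sum((X - np.outer(z, w)) ** 2, axis=1))

print(reconstruction_error(X, w))         # smallest achievable error (k = 1)
for _ in range(3):                        # random unit vectors do worse
    r = rng.normal(size=3)
    print(reconstruction_error(X, r / np.linalg.norm(r)))
\end{verbatim}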
We can +construct the \textbf{latent factors (the Z matrix)} by +\textbf{projecting} the centered data X onto the principal component +vectors: + +\bookmarksetup{startatroot} + +\chapter{PCA II}\label{pca-ii} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Dissect Singular Value Decomposition (SVD) and use it to calculate + principal components +\item + Develop a deeper understanding of how to interpret Principal Component + Analysis (PCA) +\item + See applications of PCA in some real-world contexts +\end{itemize} + +\end{tcolorbox} + +\section{Dimensionality Reduction}\label{dimensionality-reduction} + +We often work with high-dimensional data that contain \emph{many} +columns/features. Given all these dimensions, this data can be difficult +to visualize and model. However, not all the data in this +high-dimensional space is useful ------ there could be repeated features +or outliers that make the data seem more complex than it really is. The +most concise representation of high-dimensional data is its +\textbf{intrinsic dimension}. Our goal with this lecture is to use +\textbf{dimensionality reduction} to find the intrinsic dimension of a +high-dimensional dataset. In other words, we want to find a smaller set +of new features/columns that approximates the original data well without +losing that much information. This is especially useful because this +smaller set of features allows us to better visualize the data and do +EDA to understand which modeling techniques would fit the data well. + +\subsection{Loss Minimization}\label{loss-minimization} + +In order to find the intrinsic dimension of a high-dimensional dataset, +we'll use techniques from linear algebra. Suppose we have a +high-dimensional dataset, \(X\), that has \(n\) rows and \(d\) columns. +We want to factor (split) \(X\) into two matrices, \(Z\) and \(W\). +\(Z\) has \(n\) rows and \(k\) columns; \(W\) has \(k\) rows and \(d\) +columns. + +\[ X \approx ZW\] + +We can reframe this problem as a loss function: in other words, if we +want \(X\) to roughly equal \(ZW\), their difference should be as small +as possible, ideally 0. This difference becomes our loss function, +\(L(Z, W)\): + +\[L(Z, W) = \frac{1}{n}\sum_{i=1}^{n}||X_i - Z_iW||^2\] + +Breaking down the variables in this formula: + +\begin{itemize} +\tightlist +\item + \(X_i\): A row vector from the original data matrix \(X\), which we + can assume is centered to a mean of 0. +\item + \(Z_i\): A row vector from the lower-dimension matrix \(Z\). The rows + of \(Z\) are also known as \textbf{latent vectors} and are used for + EDA. +\item + \(W\): The rows of \(W\) are the \textbf{principal components}. We + constrain our model so that \(W\) is a row-orthonormal matrix (e.g., + \(WW^T = I\)). +\end{itemize} + +Using calculus and optimization techniques (take EECS 127 if you're +interested!), we find that this loss is minimized when \[Z = XW^T\] The +proof for this is out of scope for Data 100, but for those who are +interested, we: + +\begin{itemize} +\tightlist +\item + Use Lagrangian multipliers to introduce the orthonormality constraint + on \(W\). 
+\item + Took the derivative with respect to \(W\) (which requires vector + calculus) and solve for 0. +\end{itemize} + +This gives us a very cool result of + +\[\Sigma w^T = \lambda w^T\] + +\(\Sigma\) is the covariance matrix of \(X\). The equation above implies +that: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + \(w\) is a \textbf{unitary eigenvector} of the covariance matrix + \(\Sigma\). In other words, its norm is equal to 1: + \(||w||^2 = ww^T = 1\). +\item + The loss is minimized when \(w\) is the eigenvector with the + \textbf{largest eigenvalue} \(\lambda\). +\end{enumerate} + +This tells us that the principal components (rows of \(W\)) are the +eigenvectors with the largest eigenvalues of the covariance matrix +\(\Sigma\). They represent the directions of \textbf{maximum variance} +in the data. We can construct the latent factors, or the \(Z\) matrix, +by projecting the centered data \(X\) onto the principal component +vectors, \(W^T\). + +But how do we compute the eigenvectors of \(\Sigma\)? Let's dive into +SVD to answer this question. + +\section{Singular Value Decomposition +(SVD)}\label{singular-value-decomposition-svd} + +Singular value decomposition (SVD) is an important concept in linear +algebra. Since this class requires a linear algebra course (MATH 54, +MATH 56, or EECS 16A) as a pre/co-requisite, we assume you have taken or +are taking a linear algebra course, so we won't explain SVD in its +entirety. In particular, we will go over: + +\begin{itemize} +\tightlist +\item + Why SVD is a valid decomposition of rectangular matrices +\item + Why PCA is an application of SVD +\end{itemize} + +We will not dive deep into the theory and details of SVD. Instead, we +will only cover what is needed for a data science interpretation. If +you'd like more information, check out +\href{https://inst.eecs.berkeley.edu/~ee16b/sp23/notes/sp23/note14.pdf}{EECS +16B Note 14} or +\href{https://inst.eecs.berkeley.edu/~ee16b/sp23/notes/sp23/note15.pdf}{EECS +16B Note 15}. + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{{[}Linear Algebra Review{]} Orthonormality}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +Orthonormal is a combination of two words: orthogonal and normal. + +When we say the columns of a matrix are orthonormal, we know that: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + The columns are all orthogonal to each other (all pairs of columns + have a dot product of zero) +\item + All columns are unit vectors (the length of each column vector is 1) +\end{enumerate} + +Orthonormal matrices have a few important properties: + +\begin{itemize} +\tightlist +\item + \textbf{Orthonormal inverse}: If an \(m \times n\) matrix \(Q\) has + orthonormal columns, \(QQ^T= Iₘ\) and \(Q^TQ=Iₙ\). +\item + \textbf{Rotation of coordinates}: The linear transformation + represented by an orthonormal matrix is often a rotation (and less + often a reflection). We can imagine columns of the matrix as where the + unit vectors of the original space will land. 
+\end{itemize} + +\end{tcolorbox} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{{[}Linear Algebra Review{]} Diagonal Matrices}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\textbf{Diagonal matrices} are square matrices with non-zero values on +the diagonal axis and zeros everywhere else. + +Right-multiplied diagonal matrices scale each column up or down by a +constant factor. Geometrically, this transformation can be viewed as +scaling the coordinate system. + +\end{tcolorbox} + +Singular value decomposition (SVD) describes a matrix \(X\)'s +decomposition into three matrices: \[ X = U S V^T \] + +Let's break down each of these terms one by one. + +\subsection{\texorpdfstring{\(U\)}{U}}\label{u} + +\begin{itemize} +\tightlist +\item + \(U\) is an \(n \times d\) matrix: \(U \in \mathbb{R}^{n \times d}\). +\item + Its columns are \textbf{orthonormal}. + + \begin{itemize} + \tightlist + \item + \(\vec{u_i}^T\vec{u_j} = 0\) for all pairs \(i, j\). + \item + All vectors \(\vec{u_i}\) are unit vectors where + \(|| \vec{u_i} || = 1\) . + \end{itemize} +\item + Columns of U are called the \textbf{left singular vectors} and are + \textbf{eigenvectors} of \(XX^T\). +\item + \(UU^T = I_n\) and \(U^TU = I_d\). +\item + We can think of \(U\) as a rotation. +\end{itemize} + +\subsection{\texorpdfstring{\(S\)}{S}}\label{s} + +\begin{itemize} +\tightlist +\item + \(S\) is a \(d \times d\) matrix: \(S \in \mathbb{R}^{d \times d}\). +\item + The majority of the matrix is zero. +\item + It has \(r\) \textbf{non-zero} \textbf{singular values}, and \(r\) is + the rank of \(X\). Note that rank \(r \leq d\). +\item + Diagonal values (\textbf{singular values} \(s_1, s_2, ... s_r\)), are + \textbf{non-negative} ordered from largest to smallest: + \(s_1 \ge s_2 \ge ... \ge s_r > 0\). +\item + We can think of \(S\) as a scaling operation. +\end{itemize} + +\subsection{\texorpdfstring{\(V^T\)}{V\^{}T}}\label{vt} + +\begin{itemize} +\tightlist +\item + \(V^T\) is an \(d \times d\) matrix: + \(V \in \mathbb{R}^{d \times d}\). +\item + Columns of \(V\) are orthonormal, so the rows of \(V^T\) are + orthonormal. +\item + Columns of \(V\) are called the \textbf{right singular vectors}, and + similarly to \(U\), are \textbf{eigenvectors} of \(X^TX\). +\item + \(VV^T = V^TV = I_d\) +\item + We can think of \(V\) as a rotation. +\end{itemize} + +\subsection{\texorpdfstring{SVD in +\texttt{NumPy}}{SVD in NumPy}}\label{svd-in-numpy} + +For this demo, we'll work with a rectangular dataset containing +\(n=100\) rows and \(d=4\) columns. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd} +\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns} +\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} + +\NormalTok{np.random.seed(}\DecValTok{23}\NormalTok{) }\CommentTok{\# kallisti} + +\NormalTok{plt.rcParams[}\StringTok{"figure.figsize"}\NormalTok{] }\OperatorTok{=}\NormalTok{ (}\DecValTok{4}\NormalTok{, }\DecValTok{4}\NormalTok{)} +\NormalTok{plt.rcParams[}\StringTok{"figure.dpi"}\NormalTok{] }\OperatorTok{=} \DecValTok{150} +\NormalTok{sns.}\BuiltInTok{set}\NormalTok{()} + +\NormalTok{rectangle }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/rectangle\_data.csv"}\NormalTok{)} +\NormalTok{rectangle.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& width & height & area & perimeter \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 8 & 6 & 48 & 28 \\ +1 & 2 & 4 & 8 & 12 \\ +2 & 1 & 3 & 3 & 8 \\ +3 & 9 & 3 & 27 & 24 \\ +4 & 9 & 8 & 72 & 34 \\ +\end{longtable} + +In \texttt{NumPy}, the SVD decomposition function can be called with +\texttt{np.linalg.svd} +(\href{https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html}{documentation}). +There are multiple versions of SVD; to get the version that we will +follow, we need to set the \texttt{full\_matrices} parameter to +\texttt{False}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{U, S, Vt }\OperatorTok{=}\NormalTok{ np.linalg.svd(rectangle, full\_matrices}\OperatorTok{=}\VariableTok{False}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +First, let's examine \texttt{U}. As we can see, it's dimensions are +\(n \times d\). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{U.shape} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(100, 4) +\end{verbatim} + +The first 5 rows of \texttt{U} are shown below. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.DataFrame(U).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& 0 & 1 & 2 & 3 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & -0.155151 & 0.064830 & -0.029935 & 0.934418 \\ +1 & -0.038370 & -0.089155 & 0.062019 & -0.299462 \\ +2 & -0.020357 & -0.081138 & 0.058997 & 0.006852 \\ +3 & -0.101519 & -0.076203 & -0.148160 & -0.011848 \\ +4 & -0.218973 & 0.206423 & 0.007274 & -0.056580 \\ +\end{longtable} + +\(S\) is a little different in \texttt{NumPy}. Since the only useful +values in the diagonal matrix \(S\) are the singular values on the +diagonal axis, only those values are returned and they are stored in an +array. + +Our \texttt{rectangle\_data} has a rank of \(3\), so we should have 3 +non-zero singular values, \textbf{sorted from largest to smallest}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{S} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +array([3.62932568e+02, 6.29904732e+01, 2.56544651e+01, 2.56364534e-14]) +\end{verbatim} + +It seems like we have 4 non-zero values instead of 3, but notice that +the last value is so small (\(10^{-15}\)) that it's practically \(0\). +Hence, we can round the values to get 3 singular values. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{np.}\BuiltInTok{round}\NormalTok{(S)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +array([363., 63., 26., 0.]) +\end{verbatim} + +To get \texttt{S} in matrix format, we use \texttt{np.diag}. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{Sm }\OperatorTok{=}\NormalTok{ np.diag(S)} +\NormalTok{Sm} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +array([[3.62932568e+02, 0.00000000e+00, 0.00000000e+00, 0.00000000e+00], + [0.00000000e+00, 6.29904732e+01, 0.00000000e+00, 0.00000000e+00], + [0.00000000e+00, 0.00000000e+00, 2.56544651e+01, 0.00000000e+00], + [0.00000000e+00, 0.00000000e+00, 0.00000000e+00, 2.56364534e-14]]) +\end{verbatim} + +Finally, we can see that \texttt{Vt} is indeed a \(d \times d\) matrix. + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{Vt.shape} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(4, 4) +\end{verbatim} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.DataFrame(Vt)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& 0 & 1 & 2 & 3 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & -0.146436 & -0.129942 & -8.100201e-01 & -0.552756 \\ +1 & -0.192736 & -0.189128 & 5.863482e-01 & -0.763727 \\ +2 & -0.704957 & 0.709155 & 7.951614e-03 & 0.008396 \\ +3 & -0.666667 & -0.666667 & 9.775109e-17 & 0.333333 \\ +\end{longtable} + +To check that this SVD is a valid decomposition, we can reverse it and +see if it matches our original table (it does, yay!). + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{pd.DataFrame(U }\OperatorTok{@}\NormalTok{ Sm }\OperatorTok{@}\NormalTok{ Vt).head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& 0 & 1 & 2 & 3 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 8.0 & 6.0 & 48.0 & 28.0 \\ +1 & 2.0 & 4.0 & 8.0 & 12.0 \\ +2 & 1.0 & 3.0 & 3.0 & 8.0 \\ +3 & 9.0 & 3.0 & 27.0 & 24.0 \\ +4 & 9.0 & 8.0 & 72.0 & 34.0 \\ +\end{longtable} + +\section{PCA with SVD}\label{pca-with-svd} + +Principal Component Analysis (PCA) and Singular Value Decomposition +(SVD) can be easily mixed up, especially when you have to keep track of +so many acronyms. Here is a quick summary: + +\begin{itemize} +\tightlist +\item + SVD: a linear algebra algorithm that splits a matrix into 3 component + parts. +\item + PCA: a data science procedure used for dimensionality reduction that + \emph{uses} SVD as one of the steps. +\end{itemize} + +\subsection{Deriving Principal Components From +SVD}\label{deriving-principal-components-from-svd} + +After centering the original data matrix \(X\) so that each column has a +mean of 0, we find its SVD: \[ X = U S V^T \] + +Because \(X\) is centered, the covariance matrix of \(X\), \(\Sigma\), +is equal to \(X^T X\). Rearranging this equation, we get + +\[ +\begin{align} +\Sigma &= X^T X \\ + &= (U S V^T)^T U S V^T \\ + &= V S^T U^T U S V^T & \text{U is orthonormal, so $U^T U = I$} \\ + &= V S^2 V^T +\end{align} +\] + +Multiplying both sides by \(V\), we get + +\[ +\begin{align} +\Sigma V &= VS^2 V^T V \\ +&= V S^2 +\end{align} +\] + +This shows that the columns of \(V\) are the \textbf{eigenvectors} of +the covariance matrix \(\Sigma\) and, therefore, the \textbf{principal +components}. Additionally, the squared singular values \(S^2\) are the +\textbf{eigenvalues} of \(\Sigma\). 
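
We can check this relationship numerically on the rectangle data from
the demo above. The sketch below uses fresh variable names
(\texttt{Xc}, \texttt{Uc}, \texttt{Sc}, \texttt{Vtc}) so that it does
not overwrite the demo's \texttt{U}, \texttt{S}, and \texttt{Vt}; it
assumes \texttt{rectangle} and \texttt{np} are still in scope.

\begin{verbatim}
# Center each column, then recompute the SVD of the centered data.
Xc = (rectangle - rectangle.mean()).to_numpy()
Uc, Sc, Vtc = np.linalg.svd(Xc, full_matrices=False)

# (X^T X) V should equal V diag(S^2): column j of V gets scaled by S_j^2.
lhs = Xc.T @ Xc @ Vtc.T
rhs = Vtc.T * Sc**2
print(np.allclose(lhs, rhs))              # True

# The squared singular values match the eigenvalues of X^T X.
print(np.allclose(np.sort(np.linalg.eigvalsh(Xc.T @ Xc)),
                  np.sort(Sc**2)))        # True
\end{verbatim}

This is why \texttt{np.linalg.svd} hands us the principal components
directly: we never need to form \(\Sigma\) or call an eigensolver
ourselves.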

We've now shown that the first \(k\) columns of \(V\) (equivalently, the
first \(k\) rows of \(V^{T}\)) are the first \(k\) principal components
of \(X\). We can use them to construct the \textbf{latent vector
representation} of \(X\), \(Z\), by projecting \(X\) onto the principal
components.

Because \(X = USV^T\) and \(V^TV = I\), we can compute \(Z\) as follows:

\[
\begin{align}
Z &= X V \\
  &= USV^T V \\
  &= U S
\end{align}
\]

In other words, we can construct \(X\)'s latent vector representation
\(Z = XV = US\) through:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  Projecting \(X\) onto the first \(k\) columns of \(V\), \(V[:, :k]\)
\item
  Multiplying the first \(k\) columns of \(U\) by the first \(k\) rows
  and columns of \(S\)
\end{enumerate}

Using \(Z\), we can recover the centered \(X\) matrix by multiplying
\(Z\) by \(V^T\) (exactly if we keep all \(d\) components, and only
approximately if we keep just the first \(k\)):
\[ Z V^T = XV V^T = USV^T = X\]

Note that to recover the original (uncentered) \(X\) matrix, we would
also need to add back the mean.

\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-tip-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-tip-color}{\faLightbulb}\hspace{0.5em}{\hyperref[summary]{Summary} Terminology}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-tip-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm]

\textbf{Note}: The notation used for PCA this semester differs a bit
from previous semesters. Please pay careful attention to the
terminology presented in this note.

To summarize the terminology and concepts we've covered so far:

\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\tightlist
\item
  \textbf{Principal Component}: The columns of \(V\). These vectors
  specify the principal coordinate system and represent the directions
  along which the most variance in the data is captured.
\item
  \textbf{Latent Vector Representation} of \(X\): The projection of our
  data matrix \(X\) onto the principal components, \(Z = XV = US\). In
  previous semesters, the terminology was different, and this was termed
  the principal components of \(X\). In other classes, the term
  principal coordinate is also used. The \(i\)-th latent vector refers
  to the \(i\)-th column of \(V\), corresponding to the \(i\)-th largest
  singular value of \(X\).
\item
  \textbf{\(S\)} (as in SVD): The diagonal matrix containing all the
  singular values of \(X\).
\item
  \textbf{\(\Sigma\)}: The covariance matrix of \(X\). Assuming \(X\) is
  centered, \(\Sigma = X^T X\). In previous semesters, the singular
  value decomposition of \(X\) was written out as \(X = U{\Sigma}V^T\).
  Note the difference between \(\Sigma\) in that context compared to
  this semester.
\end{enumerate}

\end{tcolorbox}

\subsection{PCA Visualization}\label{pca-visualization}

As we discussed above, when conducting PCA, we first center the data
matrix \(X\) and then rotate it such that the direction with the most
variation (i.e., the direction along which the data is most spread out)
aligns with the \(x\)-axis.

In particular, the elements of each column of \(V\) (row of \(V^{T}\))
rotate the original feature vectors, projecting \(X\) onto the principal
components.

The first column of \(V\) indicates how each feature contributes
(e.g.~positively or negatively) to principal component 1; it essentially
assigns ``weights'' to each feature. 
+ +Coupled together, this interpretation also allows us to understand that: + +\begin{itemize} +\tightlist +\item + The principal components are all \textbf{orthogonal} to each other + because the columns of \(V\) are orthonormal. +\item + Principal components are \textbf{axis-aligned}. That is, if you plot + two PCs on a 2D plane, one will lie on the x-axis and the other on the + y-axis. +\item + Principal components are \textbf{linear combinations} of columns in + our data \(X\). +\end{itemize} + +\subsection{Using Principal +Components}\label{using-principal-components} + +Let's summarize the steps to obtain Principal Components via SVD: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Center the data matrix \(X\) by subtracting the mean of each attribute + column. +\item + To find the \(k\) \textbf{principal components}: + + \begin{enumerate} + \def\labelenumii{\arabic{enumii}.} + \tightlist + \item + Compute the SVD of the data matrix (\(X = U{S}V^{T}\)). + \item + The first \(k\) columns of \(V\) contain the \(k\) \textbf{principal + components} of \(X\). The \(k\)-th column of \(V\) is also known as + the \(k\)-th latent vector and corresponds to the \(k\)-th largest + singular value of \(X\). + \end{enumerate} +\end{enumerate} + +\subsection{Code Demo}\label{code-demo} + +Let's now walk through an example where we compute PCA using SVD. In +order to get the first \(k\) principal components from an \(n \times d\) +matrix \(X\), we: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Center \(X\) by subtracting the mean from each column. Notice how we + specify \texttt{axis=0} so that the mean is computed per column. +\end{enumerate} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{centered\_df }\OperatorTok{=}\NormalTok{ rectangle }\OperatorTok{{-}}\NormalTok{ np.mean(rectangle, axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)} +\NormalTok{centered\_df.head(}\DecValTok{5}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lllll@{}} +\toprule\noalign{} +& width & height & area & perimeter \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & 2.97 & 1.35 & 24.78 & 8.64 \\ +1 & -3.03 & -0.65 & -15.22 & -7.36 \\ +2 & -4.03 & -1.65 & -20.22 & -11.36 \\ +3 & 3.97 & -1.65 & 3.78 & 4.64 \\ +4 & 3.97 & 3.35 & 48.78 & 14.64 \\ +\end{longtable} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\setcounter{enumi}{1} +\tightlist +\item + Get the Singular Value Decomposition of the centered \(X\): \(U\), + \(S\) and \(V^T\) +\end{enumerate} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{U, S, Vt }\OperatorTok{=}\NormalTok{ np.linalg.svd(centered\_df, full\_matrices}\OperatorTok{=}\VariableTok{False}\NormalTok{)} +\NormalTok{Sm }\OperatorTok{=}\NormalTok{ pd.DataFrame(np.diag(np.}\BuiltInTok{round}\NormalTok{(S, }\DecValTok{1}\NormalTok{)))} +\end{Highlighting} +\end{Shaded} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\setcounter{enumi}{2} +\tightlist +\item + Take the first \(k\) columns of \(V\). These are the first \(k\) + principal components of \(X\). 
+\end{enumerate} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{two\_PCs }\OperatorTok{=}\NormalTok{ Vt.T[:, :}\DecValTok{2}\NormalTok{]} +\NormalTok{pd.DataFrame(two\_PCs).head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}lll@{}} +\toprule\noalign{} +& 0 & 1 \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & -0.098631 & 0.668460 \\ +1 & -0.072956 & -0.374186 \\ +2 & -0.931226 & -0.258375 \\ +3 & -0.343173 & 0.588548 \\ +\end{longtable} + +\section{Data Variance and Centering}\label{data-variance-and-centering} + +We define the total variance of a data matrix as the sum of variances of +attributes. The principal components are a low-dimension representation +that capture as much of the original data's total variance as possible. +Formally, the \(i\)-th singular value tells us the \textbf{component +score}, or how much of the data variance is captured by the \(i\)-th +principal component. Assuming the number of datapoints is \(n\): + +\[\text{i-th component score} = \frac{(\text{i-th singular value}^2)}{n}\] + +Summing up the component scores is equivalent to computing the total +variance \emph{if we center our data}. + +\textbf{Data Centering}: PCA has a data-centering step that precedes any +singular value decomposition, where, if implemented, the component score +is defined as above. + +If you want to dive deeper into PCA, +\href{https://www.youtube.com/playlist?list=PLMrJAkhIeNNSVjnsviglFoY2nXildDCcv}{Steve +Brunton's SVD Video Series} is a great resource. + +\section{Interpreting PCA}\label{interpreting-pca} + +\subsection{PCA Plot}\label{pca-plot} + +We often plot the first two principal components using a scatter plot, +with PC1 on the \(x\)-axis and PC2 on the \(y\)-axis. This is often +called a PCA plot. + +If the first two singular values are large and all others are small, +then two dimensions are enough to describe most of what distinguishes +one observation from another. If not, a PCA plot omits a lot of +information. + +PCA plots help us assess similarities between our data points and if +there are any clusters in our dataset. In the case study before, for +example, we could create the following PCA plot: + +\subsection{Scree Plots}\label{scree-plots} + +A scree plot shows the \textbf{variance ratio} captured by each +principal component, with the largest variance ratio first. They help us +visually determine the number of dimensions needed to describe the data +reasonably. The singular values that fall in the region of the plot +after a large drop-off correspond to principal components that are +\textbf{not} needed to describe the data since they explain a relatively +low proportion of the total variance of the data. This point where +adding more principal components results in diminishing returns is +called the ``elbow'' and is the point just before the line flattens out. +Using this ``elbow method'', we can see that the elbow is at the second +principal component. + +\subsection{Biplots}\label{biplots} + +Biplots superimpose the directions onto the plot of PC1 vs.~PC2, where +vector \(j\) corresponds to the direction for feature \(j\) (e.g., +\(v_{1j}, v_{2j}\)). There are several ways to scale biplot vectors +------ in this course, we plot the direction itself. For other scalings, +which can lead to more interpretable directions/loadings, see +\href{https://blogs.sas.com/content/iml/2019/11/06/what-are-biplots.html}{SAS +biplots}. 

Through biplots, we can interpret how features correlate with the
principal components shown: positively, negatively, or not much at all.

Each arrow points in the direction (\(v_1\), \(v_2\)), where \(v_1\) and
\(v_2\) describe how that specific feature column contributes to PC1 and
PC2, respectively. \(v_1\) and \(v_2\) are elements of the first and
second columns of \(V\), respectively (i.e., the first two rows of
\(V^T\)).

Say we are considering feature 3, and say it is the purple arrow labeled
``520'' here (pointing toward the bottom right).

\begin{itemize}
\tightlist
\item
  \(v_1\) and \(v_2\) are the third elements of the respective columns
  in \(V\). They scale feature 3's column in the linear transformations
  to PC1 and PC2, respectively.
\item
  Here, we would infer that \(v_1\) (in the \(x\)/PC1-direction) is
  positive, meaning that an increase in feature 3 corresponds to an
  increase in PC1; in other words, feature 3 and PC1 are positively
  correlated.
\item
  \(v_2\) (in the \(y\)/PC2-direction) is negative, meaning that an
  increase in feature 3 corresponds to a decrease in PC2; in other
  words, feature 3 and PC2 are negatively correlated.
\end{itemize}

\section{Example 1: House of Representatives
Voting}\label{example-1-house-of-representatives-voting}

Let's examine how the House of Representatives (of the 116th Congress,
1st session) voted in the month of September 2019.

Specifically, we'll look at the records of roll call votes. From the
U.S. Senate
(\href{https://www.senate.gov/reference/Index/Votes.htm}{link}): roll
call votes occur when a representative or senator votes ``yea'' or
``nay'' so that the names of members voting on each side are recorded. A
voice vote is a vote in which those in favor or against a measure say
``yea'' or ``nay,'' respectively, without the names or tallies of
members voting on each side being recorded.

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{import}\NormalTok{ pandas }\ImportTok{as}\NormalTok{ pd}
\ImportTok{import}\NormalTok{ seaborn }\ImportTok{as}\NormalTok{ sns}
\ImportTok{import}\NormalTok{ matplotlib.pyplot }\ImportTok{as}\NormalTok{ plt}
\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np}
\ImportTok{import}\NormalTok{ yaml}
\ImportTok{from}\NormalTok{ datetime }\ImportTok{import}\NormalTok{ datetime}
\ImportTok{import}\NormalTok{ plotly.express }\ImportTok{as}\NormalTok{ px}
\ImportTok{import}\NormalTok{ plotly.graph\_objects }\ImportTok{as}\NormalTok{ go}


\NormalTok{votes }\OperatorTok{=}\NormalTok{ pd.read\_csv(}\StringTok{"data/votes.csv"}\NormalTok{)}
\NormalTok{votes }\OperatorTok{=}\NormalTok{ votes.astype(\{}\StringTok{"roll call"}\NormalTok{: }\BuiltInTok{str}\NormalTok{\})}
\NormalTok{votes.head()}
\end{Highlighting}
\end{Shaded}

\begin{longtable}[]{@{}llllll@{}}
\toprule\noalign{}
& chamber & session & roll call & member & vote \\
\midrule\noalign{}
\endhead
\bottomrule\noalign{}
\endlastfoot
0 & House & 1 & 555 & A000374 & Not Voting \\
1 & House & 1 & 555 & A000370 & Yes \\
2 & House & 1 & 555 & A000055 & No \\
3 & House & 1 & 555 & A000371 & Yes \\
4 & House & 1 & 555 & A000372 & No \\
\end{longtable}

Suppose we pivot this table so that each row corresponds to a legislator
and each column to one of this month's (roll call) votes. We mark 1 if
the legislator voted Yes (``yea''), and 0 otherwise (``No'', ``nay'', no
vote, speaker, etc.). 
+ +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ was\_yes(s):} + \ControlFlowTok{return} \DecValTok{1} \ControlFlowTok{if}\NormalTok{ s.iloc[}\DecValTok{0}\NormalTok{] }\OperatorTok{==} \StringTok{"Yes"} \ControlFlowTok{else} \DecValTok{0} + + +\NormalTok{vote\_pivot }\OperatorTok{=}\NormalTok{ votes.pivot\_table(} +\NormalTok{ index}\OperatorTok{=}\StringTok{"member"}\NormalTok{, columns}\OperatorTok{=}\StringTok{"roll call"}\NormalTok{, values}\OperatorTok{=}\StringTok{"vote"}\NormalTok{, aggfunc}\OperatorTok{=}\NormalTok{was\_yes, fill\_value}\OperatorTok{=}\DecValTok{0} +\NormalTok{)} +\BuiltInTok{print}\NormalTok{(vote\_pivot.shape)} +\NormalTok{vote\_pivot.head()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +(441, 41) +\end{verbatim} + +\begin{longtable}[]{@{}llllllllllllllllllllll@{}} +\toprule\noalign{} +roll call & 515 & 516 & 517 & 518 & 519 & 520 & 521 & 522 & 523 & 524 & +... & 546 & 547 & 548 & 549 & 550 & 551 & 552 & 553 & 554 & 555 \\ +member & & & & & & & & & & & & & & & & & & & & & \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +A000055 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & ... & 0 & 0 & 1 & 0 & +0 & 1 & 0 & 0 & 1 & 0 \\ +A000367 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & ... & 0 & 1 & 1 & 1 & +1 & 0 & 1 & 1 & 0 & 1 \\ +A000369 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 1 & 1 & ... & 0 & 0 & 1 & 0 & +0 & 1 & 0 & 0 & 1 & 0 \\ +A000370 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & ... & 1 & 1 & 1 & 1 & +1 & 0 & 1 & 1 & 1 & 1 \\ +A000371 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & ... & 1 & 1 & 1 & 1 & +1 & 0 & 1 & 1 & 1 & 1 \\ +\end{longtable} + +\textbf{Do legislators' roll call votes show a relationship with their +political party?} + +\subsection{PCA with SVD}\label{pca-with-svd-1} + +While we could consider loading information about the legislator, such +as their party, and see how this relates to their voting pattern, it +turns out that we can do a lot with PCA to cluster legislators by how +they vote. Let's calculate the principal components using the SVD +method. 

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{vote\_pivot\_centered }\OperatorTok{=}\NormalTok{ vote\_pivot }\OperatorTok{{-}}\NormalTok{ np.mean(vote\_pivot, axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)}
\NormalTok{u, s, vt }\OperatorTok{=}\NormalTok{ np.linalg.svd(vote\_pivot\_centered, full\_matrices}\OperatorTok{=}\VariableTok{False}\NormalTok{) }\CommentTok{\# SVD}
\end{Highlighting}
\end{Shaded}

We can use the singular values in \texttt{s} to construct a scree plot:

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.line(y}\OperatorTok{=}\NormalTok{s}\OperatorTok{**}\DecValTok{2} \OperatorTok{/} \BuiltInTok{sum}\NormalTok{(s}\OperatorTok{**}\DecValTok{2}\NormalTok{), title}\OperatorTok{=}\StringTok{\textquotesingle{}Variance Explained\textquotesingle{}}\NormalTok{, width}\OperatorTok{=}\DecValTok{700}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{, markers}\OperatorTok{=}\VariableTok{True}\NormalTok{)}
\NormalTok{fig.update\_xaxes(title\_text}\OperatorTok{=}\StringTok{\textquotesingle{}Principal Component i\textquotesingle{}}\NormalTok{)}
\NormalTok{fig.update\_yaxes(title\_text}\OperatorTok{=}\StringTok{\textquotesingle{}Proportion of Variance Explained\textquotesingle{}}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

It looks like this graph plateaus after the third principal component,
so our ``elbow'' is at PC3, and most of the variance is captured by just
the first three principal components. Let's use these PCs to visualize
the latent vector representation of \(X\)!

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Calculate the latent vector representation (US or XV)}
\CommentTok{\# using the first 3 principal components}
\NormalTok{vote\_2d }\OperatorTok{=}\NormalTok{ pd.DataFrame(index}\OperatorTok{=}\NormalTok{vote\_pivot\_centered.index)}
\NormalTok{vote\_2d[[}\StringTok{"z1"}\NormalTok{, }\StringTok{"z2"}\NormalTok{, }\StringTok{"z3"}\NormalTok{]] }\OperatorTok{=}\NormalTok{ (u }\OperatorTok{*}\NormalTok{ s)[:, :}\DecValTok{3}\NormalTok{]}

\CommentTok{\# Plot the latent vector representation}
\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.scatter\_3d(vote\_2d, x}\OperatorTok{=}\StringTok{\textquotesingle{}z1\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}z2\textquotesingle{}}\NormalTok{, z}\OperatorTok{=}\StringTok{\textquotesingle{}z3\textquotesingle{}}\NormalTok{, title}\OperatorTok{=}\StringTok{\textquotesingle{}Vote Data\textquotesingle{}}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{)}
\NormalTok{fig.update\_traces(marker}\OperatorTok{=}\BuiltInTok{dict}\NormalTok{(size}\OperatorTok{=}\DecValTok{5}\NormalTok{))}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

Based on the plot above, it looks like there are two clusters of
datapoints. What do you think these clusters correspond to?

By incorporating member information
(\href{https://github.com/unitedstates/congress-legislators}{source}),
we can augment our graph with biographic data like each member's party
and gender. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{legislators\_data }\OperatorTok{=}\NormalTok{ yaml.safe\_load(}\BuiltInTok{open}\NormalTok{(}\StringTok{"data/legislators{-}2019.yaml"}\NormalTok{))} + + +\KeywordTok{def}\NormalTok{ to\_date(s):} + \ControlFlowTok{return}\NormalTok{ datetime.strptime(s, }\StringTok{"\%Y{-}\%m{-}}\SpecialCharTok{\%d}\StringTok{"}\NormalTok{)} + + +\NormalTok{legs }\OperatorTok{=}\NormalTok{ pd.DataFrame(} +\NormalTok{ columns}\OperatorTok{=}\NormalTok{[} + \StringTok{"leg\_id"}\NormalTok{,} + \StringTok{"first"}\NormalTok{,} + \StringTok{"last"}\NormalTok{,} + \StringTok{"gender"}\NormalTok{,} + \StringTok{"state"}\NormalTok{,} + \StringTok{"chamber"}\NormalTok{,} + \StringTok{"party"}\NormalTok{,} + \StringTok{"birthday"}\NormalTok{,} +\NormalTok{ ],} +\NormalTok{ data}\OperatorTok{=}\NormalTok{[} +\NormalTok{ [} +\NormalTok{ x[}\StringTok{"id"}\NormalTok{][}\StringTok{"bioguide"}\NormalTok{],} +\NormalTok{ x[}\StringTok{"name"}\NormalTok{][}\StringTok{"first"}\NormalTok{],} +\NormalTok{ x[}\StringTok{"name"}\NormalTok{][}\StringTok{"last"}\NormalTok{],} +\NormalTok{ x[}\StringTok{"bio"}\NormalTok{][}\StringTok{"gender"}\NormalTok{],} +\NormalTok{ x[}\StringTok{"terms"}\NormalTok{][}\OperatorTok{{-}}\DecValTok{1}\NormalTok{][}\StringTok{"state"}\NormalTok{],} +\NormalTok{ x[}\StringTok{"terms"}\NormalTok{][}\OperatorTok{{-}}\DecValTok{1}\NormalTok{][}\StringTok{"type"}\NormalTok{],} +\NormalTok{ x[}\StringTok{"terms"}\NormalTok{][}\OperatorTok{{-}}\DecValTok{1}\NormalTok{][}\StringTok{"party"}\NormalTok{],} +\NormalTok{ to\_date(x[}\StringTok{"bio"}\NormalTok{][}\StringTok{"birthday"}\NormalTok{]),} +\NormalTok{ ]} + \ControlFlowTok{for}\NormalTok{ x }\KeywordTok{in}\NormalTok{ legislators\_data} +\NormalTok{ ],} +\NormalTok{)} +\NormalTok{legs[}\StringTok{"age"}\NormalTok{] }\OperatorTok{=} \DecValTok{2024} \OperatorTok{{-}}\NormalTok{ legs[}\StringTok{"birthday"}\NormalTok{].dt.year} +\NormalTok{legs.set\_index(}\StringTok{"leg\_id"}\NormalTok{)} +\NormalTok{legs.sort\_index()} + +\NormalTok{vote\_2d }\OperatorTok{=}\NormalTok{ vote\_2d.join(legs.set\_index(}\StringTok{"leg\_id"}\NormalTok{)).dropna()} + +\NormalTok{np.random.seed(}\DecValTok{42}\NormalTok{)} +\NormalTok{vote\_2d[}\StringTok{"z1\_jittered"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vote\_2d[}\StringTok{"z1"}\NormalTok{] }\OperatorTok{+}\NormalTok{ np.random.normal(}\DecValTok{0}\NormalTok{, }\FloatTok{0.1}\NormalTok{, }\BuiltInTok{len}\NormalTok{(vote\_2d))} +\NormalTok{vote\_2d[}\StringTok{"z2\_jittered"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vote\_2d[}\StringTok{"z2"}\NormalTok{] }\OperatorTok{+}\NormalTok{ np.random.normal(}\DecValTok{0}\NormalTok{, }\FloatTok{0.1}\NormalTok{, }\BuiltInTok{len}\NormalTok{(vote\_2d))} +\NormalTok{vote\_2d[}\StringTok{"z3\_jittered"}\NormalTok{] }\OperatorTok{=}\NormalTok{ vote\_2d[}\StringTok{"z3"}\NormalTok{] }\OperatorTok{+}\NormalTok{ np.random.normal(}\DecValTok{0}\NormalTok{, }\FloatTok{0.1}\NormalTok{, }\BuiltInTok{len}\NormalTok{(vote\_2d))} + +\NormalTok{px.scatter\_3d(vote\_2d, x}\OperatorTok{=}\StringTok{\textquotesingle{}z1\_jittered\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}z2\_jittered\textquotesingle{}}\NormalTok{, z}\OperatorTok{=}\StringTok{\textquotesingle{}z3\_jittered\textquotesingle{}}\NormalTok{, color}\OperatorTok{=}\StringTok{\textquotesingle{}party\textquotesingle{}}\NormalTok{, symbol}\OperatorTok{=}\StringTok{"gender"}\NormalTok{, 
size}\OperatorTok{=}\StringTok{\textquotesingle{}age\textquotesingle{}}\NormalTok{,}
\NormalTok{ title}\OperatorTok{=}\StringTok{\textquotesingle{}Vote Data\textquotesingle{}}\NormalTok{, width}\OperatorTok{=}\DecValTok{800}\NormalTok{, height}\OperatorTok{=}\DecValTok{600}\NormalTok{, size\_max}\OperatorTok{=}\DecValTok{10}\NormalTok{,}
\NormalTok{ opacity }\OperatorTok{=} \FloatTok{0.7}\NormalTok{,}
\NormalTok{ color\_discrete\_map}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}Democrat\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}blue\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Republican\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{, }\StringTok{"Independent"}\NormalTok{: }\StringTok{"green"}\NormalTok{\},}
\NormalTok{ hover\_data}\OperatorTok{=}\NormalTok{[}\StringTok{\textquotesingle{}first\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}last\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}state\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}party\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}gender\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}age\textquotesingle{}}\NormalTok{])}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

Using SVD and PCA, we can clearly see a separation between the red dots
(Republicans) and the blue dots (Democrats).

\subsection{Exploring the Principal
Components}\label{exploring-the-principal-components}

We can also look at \(V^T\) directly to try to gain insight into what
each component represents.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{fig\_eig }\OperatorTok{=}\NormalTok{ px.bar(x}\OperatorTok{=}\NormalTok{vote\_pivot\_centered.columns, y}\OperatorTok{=}\NormalTok{vt[}\DecValTok{0}\NormalTok{, :])}
\CommentTok{\# plot the weight of each roll call vote in the first principal component}
\NormalTok{fig\_eig.show()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

We have the party affiliation labels, so we can see if this eigenvector
aligns with one of the parties. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{party\_line\_votes }\OperatorTok{=}\NormalTok{ (} +\NormalTok{ vote\_pivot\_centered.join(legs.set\_index(}\StringTok{"leg\_id"}\NormalTok{)[}\StringTok{"party"}\NormalTok{])} +\NormalTok{ .groupby(}\StringTok{"party"}\NormalTok{)} +\NormalTok{ .mean()} +\NormalTok{ .T.reset\_index()} +\NormalTok{ .rename(columns}\OperatorTok{=}\NormalTok{\{}\StringTok{"index"}\NormalTok{: }\StringTok{"call"}\NormalTok{\})} +\NormalTok{ .melt(}\StringTok{"call"}\NormalTok{)} +\NormalTok{)} +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.bar(} +\NormalTok{ party\_line\_votes,} +\NormalTok{ x}\OperatorTok{=}\StringTok{"call"}\NormalTok{, y}\OperatorTok{=}\StringTok{"value"}\NormalTok{, facet\_row }\OperatorTok{=} \StringTok{"party"}\NormalTok{, color}\OperatorTok{=}\StringTok{"party"}\NormalTok{,} +\NormalTok{ color\_discrete\_map}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}Democrat\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}blue\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Republican\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{, }\StringTok{"Independent"}\NormalTok{: }\StringTok{"green"}\NormalTok{\})} +\NormalTok{fig.for\_each\_annotation(}\KeywordTok{lambda}\NormalTok{ a: a.update(text}\OperatorTok{=}\NormalTok{a.text.split(}\StringTok{"="}\NormalTok{)[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{]))} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +\subsection{Biplot}\label{biplot} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{loadings }\OperatorTok{=}\NormalTok{ pd.DataFrame(} +\NormalTok{ \{}\StringTok{"pc1"}\NormalTok{: np.sqrt(s[}\DecValTok{0}\NormalTok{]) }\OperatorTok{*}\NormalTok{ vt[}\DecValTok{0}\NormalTok{, :], }\StringTok{"pc2"}\NormalTok{: np.sqrt(s[}\DecValTok{1}\NormalTok{]) }\OperatorTok{*}\NormalTok{ vt[}\DecValTok{1}\NormalTok{, :]\},} +\NormalTok{ index}\OperatorTok{=}\NormalTok{vote\_pivot\_centered.columns,} +\NormalTok{)} + +\NormalTok{vote\_2d[}\StringTok{"num votes"}\NormalTok{] }\OperatorTok{=}\NormalTok{ votes[votes[}\StringTok{"vote"}\NormalTok{].isin([}\StringTok{"Yes"}\NormalTok{, }\StringTok{"No"}\NormalTok{])].groupby(}\StringTok{"member"}\NormalTok{).size()} +\NormalTok{vote\_2d.dropna(inplace}\OperatorTok{=}\VariableTok{True}\NormalTok{)} + +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.scatter(} +\NormalTok{ vote\_2d, } +\NormalTok{ x}\OperatorTok{=}\StringTok{\textquotesingle{}z1\_jittered\textquotesingle{}}\NormalTok{, } +\NormalTok{ y}\OperatorTok{=}\StringTok{\textquotesingle{}z2\_jittered\textquotesingle{}}\NormalTok{, } +\NormalTok{ color}\OperatorTok{=}\StringTok{\textquotesingle{}party\textquotesingle{}}\NormalTok{, } +\NormalTok{ symbol}\OperatorTok{=}\StringTok{"gender"}\NormalTok{, } +\NormalTok{ size}\OperatorTok{=}\StringTok{\textquotesingle{}num votes\textquotesingle{}}\NormalTok{,} +\NormalTok{ title}\OperatorTok{=}\StringTok{\textquotesingle{}Biplot\textquotesingle{}}\NormalTok{, } +\NormalTok{ width}\OperatorTok{=}\DecValTok{800}\NormalTok{, } +\NormalTok{ height}\OperatorTok{=}\DecValTok{600}\NormalTok{, } +\NormalTok{ size\_max}\OperatorTok{=}\DecValTok{10}\NormalTok{,} +\NormalTok{ opacity }\OperatorTok{=} \FloatTok{0.7}\NormalTok{,} +\NormalTok{ 
color\_discrete\_map}\OperatorTok{=}\NormalTok{\{}\StringTok{\textquotesingle{}Democrat\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}blue\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}Republican\textquotesingle{}}\NormalTok{:}\StringTok{\textquotesingle{}red\textquotesingle{}}\NormalTok{, }\StringTok{"Independent"}\NormalTok{: }\StringTok{"green"}\NormalTok{\},} +\NormalTok{ hover\_data}\OperatorTok{=}\NormalTok{[}\StringTok{\textquotesingle{}first\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}last\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}state\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}party\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}gender\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}age\textquotesingle{}}\NormalTok{])} + +\ControlFlowTok{for}\NormalTok{ (call, pc1, pc2) }\KeywordTok{in}\NormalTok{ loadings.head(}\DecValTok{20}\NormalTok{).itertuples():} +\NormalTok{ fig.add\_scatter(x}\OperatorTok{=}\NormalTok{[}\DecValTok{0}\NormalTok{,pc1], y}\OperatorTok{=}\NormalTok{[}\DecValTok{0}\NormalTok{,pc2], name}\OperatorTok{=}\NormalTok{call, } +\NormalTok{ mode}\OperatorTok{=}\StringTok{\textquotesingle{}lines+markers\textquotesingle{}}\NormalTok{, textposition}\OperatorTok{=}\StringTok{\textquotesingle{}top right\textquotesingle{}}\NormalTok{,} +\NormalTok{ marker}\OperatorTok{=} \BuiltInTok{dict}\NormalTok{(size}\OperatorTok{=}\DecValTok{10}\NormalTok{,symbol}\OperatorTok{=} \StringTok{"arrow{-}bar{-}up"}\NormalTok{, angleref}\OperatorTok{=}\StringTok{"previous"}\NormalTok{))} +\NormalTok{fig} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +Each roll call from the 116th Congress - 1st Session: +https://clerk.house.gov/evs/2019/ROLL\_500.asp + +\begin{itemize} +\tightlist +\item + 555: Raising a question of the privileges of the House + (\href{https://www.congress.gov/bill/116th-congress/house-resolution/590}{H.Res.590}) +\item + 553: + {[}https://www.congress.gov/bill/116th-congress/senate-joint-resolution/54/actions{]} +\item + 527: On Agreeing to the Amendment + \href{https://www.congress.gov/bill/116th-congress/house-bill/1146}{H.R.1146 + - Arctic Cultural and Coastal Plain Protection Act} +\end{itemize} + +As shown in the demo, the primary goal of PCA is to transform +observations from high-dimensional data down to low dimensions through +linear transformations. + +\section{Example 2: Image +Classification}\label{example-2-image-classification} + +In machine learning, PCA is often used as a preprocessing step prior to +training a supervised model. + +Let's explore how PCA is useful for building an image classification +model based on the Fashion-MNIST dataset, a dataset containing images of +articles of clothing; these images are gray scale with a size of 28 by +28 pixels. The copyright for Fashion-MNIST is held by +\href{https://github.com/zalandoresearch/fashion-mnist}{Zalando SE}. +Fashion-MNIST is licensed under the +\href{https://github.com/zalandoresearch/fashion-mnist/blob/master/LICENSE}{MIT +license}. + +First, we'll load in the data. 
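
The data loading below is self-contained, but as a rough sketch of what
``PCA as a preprocessing step'' typically looks like in practice, here
is an illustrative \texttt{scikit-learn} pipeline. The variable names
are hypothetical, and this is not the approach used in the rest of this
demo.

\begin{verbatim}
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

# Hypothetical: compress 28 x 28 = 784 raw pixel features down to 50
# principal components, then train a classifier on that representation.
model = make_pipeline(PCA(n_components=50), LogisticRegression(max_iter=1000))
# model.fit(X_train_flat, y_train)      # X_train_flat: (n_samples, 784)
# model.predict(X_test_flat)
\end{verbatim}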
+ +\begin{Shaded} +\begin{Highlighting}[] +\ImportTok{import}\NormalTok{ requests} +\ImportTok{from}\NormalTok{ pathlib }\ImportTok{import}\NormalTok{ Path} +\ImportTok{import}\NormalTok{ time} +\ImportTok{import}\NormalTok{ gzip} +\ImportTok{import}\NormalTok{ os} +\ImportTok{import}\NormalTok{ numpy }\ImportTok{as}\NormalTok{ np} +\ImportTok{import}\NormalTok{ plotly.express }\ImportTok{as}\NormalTok{ px} + +\KeywordTok{def}\NormalTok{ fetch\_and\_cache(data\_url, }\BuiltInTok{file}\NormalTok{, data\_dir}\OperatorTok{=}\StringTok{"data"}\NormalTok{, force}\OperatorTok{=}\VariableTok{False}\NormalTok{):} + \CommentTok{"""} +\CommentTok{ Download and cache a url and return the file object.} + +\CommentTok{ data\_url: the web address to download} +\CommentTok{ file: the file in which to save the results.} +\CommentTok{ data\_dir: (default="data") the location to save the data} +\CommentTok{ force: if true the file is always re{-}downloaded} + +\CommentTok{ return: The pathlib.Path object representing the file.} +\CommentTok{ """} + +\NormalTok{ data\_dir }\OperatorTok{=}\NormalTok{ Path(data\_dir)} +\NormalTok{ data\_dir.mkdir(exist\_ok}\OperatorTok{=}\VariableTok{True}\NormalTok{)} +\NormalTok{ file\_path }\OperatorTok{=}\NormalTok{ data\_dir }\OperatorTok{/}\NormalTok{ Path(}\BuiltInTok{file}\NormalTok{)} + \CommentTok{\# If the file already exists and we want to force a download then} + \CommentTok{\# delete the file first so that the creation date is correct.} + \ControlFlowTok{if}\NormalTok{ force }\KeywordTok{and}\NormalTok{ file\_path.exists():} +\NormalTok{ file\_path.unlink()} + \ControlFlowTok{if}\NormalTok{ force }\KeywordTok{or} \KeywordTok{not}\NormalTok{ file\_path.exists():} + \BuiltInTok{print}\NormalTok{(}\StringTok{"Downloading..."}\NormalTok{, end}\OperatorTok{=}\StringTok{" "}\NormalTok{)} +\NormalTok{ resp }\OperatorTok{=}\NormalTok{ requests.get(data\_url)} + \ControlFlowTok{with}\NormalTok{ file\_path.}\BuiltInTok{open}\NormalTok{(}\StringTok{"wb"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} +\NormalTok{ f.write(resp.content)} + \BuiltInTok{print}\NormalTok{(}\StringTok{"Done!"}\NormalTok{)} +\NormalTok{ last\_modified\_time }\OperatorTok{=}\NormalTok{ time.ctime(file\_path.stat().st\_mtime)} + \ControlFlowTok{else}\NormalTok{:} +\NormalTok{ last\_modified\_time }\OperatorTok{=}\NormalTok{ time.ctime(file\_path.stat().st\_mtime)} + \BuiltInTok{print}\NormalTok{(}\StringTok{"Using cached version that was downloaded (UTC):"}\NormalTok{, last\_modified\_time)} + \ControlFlowTok{return}\NormalTok{ file\_path} + + +\KeywordTok{def}\NormalTok{ head(filename, lines}\OperatorTok{=}\DecValTok{5}\NormalTok{):} + \CommentTok{"""} +\CommentTok{ Returns the first few lines of a file.} + +\CommentTok{ filename: the name of the file to open} +\CommentTok{ lines: the number of lines to include} + +\CommentTok{ return: A list of the first few lines from the file.} +\CommentTok{ """} + \ImportTok{from}\NormalTok{ itertools }\ImportTok{import}\NormalTok{ islice} + + \ControlFlowTok{with} \BuiltInTok{open}\NormalTok{(filename, }\StringTok{"r"}\NormalTok{) }\ImportTok{as}\NormalTok{ f:} + \ControlFlowTok{return} \BuiltInTok{list}\NormalTok{(islice(f, lines))} + + +\KeywordTok{def}\NormalTok{ load\_data():} + \CommentTok{"""} +\CommentTok{ Loads the Fashion{-}MNIST dataset.} + +\CommentTok{ This is a dataset of 60,000 28x28 grayscale images of 10 fashion categories,} +\CommentTok{ along with a test set of 10,000 images. 
This dataset can be used as} +\CommentTok{ a drop{-}in replacement for MNIST.} + +\CommentTok{ The classes are:} + +\CommentTok{ | Label | Description |} +\CommentTok{ |:{-}{-}{-}{-}{-}:|{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}{-}|} +\CommentTok{ | 0 | T{-}shirt/top |} +\CommentTok{ | 1 | Trouser |} +\CommentTok{ | 2 | Pullover |} +\CommentTok{ | 3 | Dress |} +\CommentTok{ | 4 | Coat |} +\CommentTok{ | 5 | Sandal |} +\CommentTok{ | 6 | Shirt |} +\CommentTok{ | 7 | Sneaker |} +\CommentTok{ | 8 | Bag |} +\CommentTok{ | 9 | Ankle boot |} + +\CommentTok{ Returns:} +\CommentTok{ Tuple of NumPy arrays: \textasciigrave{}(x\_train, y\_train), (x\_test, y\_test)\textasciigrave{}.} + +\CommentTok{ **x\_train**: uint8 NumPy array of grayscale image data with shapes} +\CommentTok{ \textasciigrave{}(60000, 28, 28)\textasciigrave{}, containing the training data.} + +\CommentTok{ **y\_train**: uint8 NumPy array of labels (integers in range 0{-}9)} +\CommentTok{ with shape \textasciigrave{}(60000,)\textasciigrave{} for the training data.} + +\CommentTok{ **x\_test**: uint8 NumPy array of grayscale image data with shapes} +\CommentTok{ (10000, 28, 28), containing the test data.} + +\CommentTok{ **y\_test**: uint8 NumPy array of labels (integers in range 0{-}9)} +\CommentTok{ with shape \textasciigrave{}(10000,)\textasciigrave{} for the test data.} + +\CommentTok{ Example:} + +\CommentTok{ (x\_train, y\_train), (x\_test, y\_test) = fashion\_mnist.load\_data()} +\CommentTok{ assert x\_train.shape == (60000, 28, 28)} +\CommentTok{ assert x\_test.shape == (10000, 28, 28)} +\CommentTok{ assert y\_train.shape == (60000,)} +\CommentTok{ assert y\_test.shape == (10000,)} + +\CommentTok{ License:} +\CommentTok{ The copyright for Fashion{-}MNIST is held by Zalando SE.} +\CommentTok{ Fashion{-}MNIST is licensed under the [MIT license](} +\CommentTok{ https://github.com/zalandoresearch/fashion{-}mnist/blob/master/LICENSE).} + +\CommentTok{ """} +\NormalTok{ dirname }\OperatorTok{=}\NormalTok{ os.path.join(}\StringTok{"datasets"}\NormalTok{, }\StringTok{"fashion{-}mnist"}\NormalTok{)} +\NormalTok{ base }\OperatorTok{=} \StringTok{"https://storage.googleapis.com/tensorflow/tf{-}keras{-}datasets/"} +\NormalTok{ files }\OperatorTok{=}\NormalTok{ [} + \StringTok{"train{-}labels{-}idx1{-}ubyte.gz"}\NormalTok{,} + \StringTok{"train{-}images{-}idx3{-}ubyte.gz"}\NormalTok{,} + \StringTok{"t10k{-}labels{-}idx1{-}ubyte.gz"}\NormalTok{,} + \StringTok{"t10k{-}images{-}idx3{-}ubyte.gz"}\NormalTok{,} +\NormalTok{ ]} + +\NormalTok{ paths }\OperatorTok{=}\NormalTok{ []} + \ControlFlowTok{for}\NormalTok{ fname }\KeywordTok{in}\NormalTok{ files:} +\NormalTok{ paths.append(fetch\_and\_cache(base }\OperatorTok{+}\NormalTok{ fname, fname))} + \CommentTok{\# paths.append(get\_file(fname, origin=base + fname, cache\_subdir=dirname))} + + \ControlFlowTok{with}\NormalTok{ gzip.}\BuiltInTok{open}\NormalTok{(paths[}\DecValTok{0}\NormalTok{], }\StringTok{"rb"}\NormalTok{) }\ImportTok{as}\NormalTok{ lbpath:} +\NormalTok{ y\_train }\OperatorTok{=}\NormalTok{ np.frombuffer(lbpath.read(), np.uint8, offset}\OperatorTok{=}\DecValTok{8}\NormalTok{)} + + \ControlFlowTok{with}\NormalTok{ gzip.}\BuiltInTok{open}\NormalTok{(paths[}\DecValTok{1}\NormalTok{], }\StringTok{"rb"}\NormalTok{) }\ImportTok{as}\NormalTok{ imgpath:} +\NormalTok{ x\_train }\OperatorTok{=}\NormalTok{ np.frombuffer(imgpath.read(), np.uint8, offset}\OperatorTok{=}\DecValTok{16}\NormalTok{).reshape(} + \BuiltInTok{len}\NormalTok{(y\_train), }\DecValTok{28}\NormalTok{, }\DecValTok{28} 
+\NormalTok{ )} + + \ControlFlowTok{with}\NormalTok{ gzip.}\BuiltInTok{open}\NormalTok{(paths[}\DecValTok{2}\NormalTok{], }\StringTok{"rb"}\NormalTok{) }\ImportTok{as}\NormalTok{ lbpath:} +\NormalTok{ y\_test }\OperatorTok{=}\NormalTok{ np.frombuffer(lbpath.read(), np.uint8, offset}\OperatorTok{=}\DecValTok{8}\NormalTok{)} + + \ControlFlowTok{with}\NormalTok{ gzip.}\BuiltInTok{open}\NormalTok{(paths[}\DecValTok{3}\NormalTok{], }\StringTok{"rb"}\NormalTok{) }\ImportTok{as}\NormalTok{ imgpath:} +\NormalTok{ x\_test }\OperatorTok{=}\NormalTok{ np.frombuffer(imgpath.read(), np.uint8, offset}\OperatorTok{=}\DecValTok{16}\NormalTok{).reshape(} + \BuiltInTok{len}\NormalTok{(y\_test), }\DecValTok{28}\NormalTok{, }\DecValTok{28} +\NormalTok{ )} + + \ControlFlowTok{return}\NormalTok{ (x\_train, y\_train), (x\_test, y\_test)} +\end{Highlighting} +\end{Shaded} + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{class\_names }\OperatorTok{=}\NormalTok{ [} + \StringTok{"T{-}shirt/top"}\NormalTok{,} + \StringTok{"Trouser"}\NormalTok{,} + \StringTok{"Pullover"}\NormalTok{,} + \StringTok{"Dress"}\NormalTok{,} + \StringTok{"Coat"}\NormalTok{,} + \StringTok{"Sandal"}\NormalTok{,} + \StringTok{"Shirt"}\NormalTok{,} + \StringTok{"Sneaker"}\NormalTok{,} + \StringTok{"Bag"}\NormalTok{,} + \StringTok{"Ankle boot"}\NormalTok{,} +\NormalTok{]} +\NormalTok{class\_dict }\OperatorTok{=}\NormalTok{ \{i: class\_name }\ControlFlowTok{for}\NormalTok{ i, class\_name }\KeywordTok{in} \BuiltInTok{enumerate}\NormalTok{(class\_names)\}} + +\NormalTok{(train\_images, train\_labels), (test\_images, test\_labels) }\OperatorTok{=}\NormalTok{ load\_data()} +\BuiltInTok{print}\NormalTok{(}\StringTok{"Training images"}\NormalTok{, train\_images.shape)} +\BuiltInTok{print}\NormalTok{(}\StringTok{"Test images"}\NormalTok{, test\_images.shape)} + +\NormalTok{rng }\OperatorTok{=}\NormalTok{ np.random.default\_rng(}\DecValTok{42}\NormalTok{)} +\NormalTok{n }\OperatorTok{=} \DecValTok{5000} +\NormalTok{sample\_idx }\OperatorTok{=}\NormalTok{ rng.choice(np.arange(}\BuiltInTok{len}\NormalTok{(train\_images)), size}\OperatorTok{=}\NormalTok{n, replace}\OperatorTok{=}\VariableTok{False}\NormalTok{)} + +\CommentTok{\# Invert and normalize the images so they look better} +\NormalTok{img\_mat }\OperatorTok{=} \OperatorTok{{-}}\DecValTok{1} \OperatorTok{*}\NormalTok{ train\_images[sample\_idx].astype(np.int16)} +\NormalTok{img\_mat }\OperatorTok{=}\NormalTok{ (img\_mat }\OperatorTok{{-}}\NormalTok{ img\_mat.}\BuiltInTok{min}\NormalTok{()) }\OperatorTok{/}\NormalTok{ (img\_mat.}\BuiltInTok{max}\NormalTok{() }\OperatorTok{{-}}\NormalTok{ img\_mat.}\BuiltInTok{min}\NormalTok{())} + +\NormalTok{images }\OperatorTok{=}\NormalTok{ pd.DataFrame(} +\NormalTok{ \{} + \StringTok{"images"}\NormalTok{: img\_mat.tolist(),} + \StringTok{"labels"}\NormalTok{: train\_labels[sample\_idx],} + \StringTok{"class"}\NormalTok{: [class\_dict[x] }\ControlFlowTok{for}\NormalTok{ x }\KeywordTok{in}\NormalTok{ train\_labels[sample\_idx]],} +\NormalTok{ \}} +\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024 +Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024 +Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024 +Using cached version that was downloaded (UTC): Tue Aug 27 03:33:08 2024 +Training images (60000, 28, 28) +Test images (10000, 28, 28) +\end{verbatim} + +Let's see what some of the images contained in this dataset look like. 
+ +\begin{Shaded} +\begin{Highlighting}[] +\KeywordTok{def}\NormalTok{ show\_images(images, ncols}\OperatorTok{=}\DecValTok{5}\NormalTok{, max\_images}\OperatorTok{=}\DecValTok{30}\NormalTok{):} + \CommentTok{\# conver the subset of images into a n,28,28 matrix for facet visualization} +\NormalTok{ img\_mat }\OperatorTok{=}\NormalTok{ np.array(images.head(max\_images)[}\StringTok{"images"}\NormalTok{].to\_list())} +\NormalTok{ fig }\OperatorTok{=}\NormalTok{ px.imshow(} +\NormalTok{ img\_mat,} +\NormalTok{ color\_continuous\_scale}\OperatorTok{=}\StringTok{"gray"}\NormalTok{,} +\NormalTok{ facet\_col}\OperatorTok{=}\DecValTok{0}\NormalTok{,} +\NormalTok{ facet\_col\_wrap}\OperatorTok{=}\NormalTok{ncols,} +\NormalTok{ height}\OperatorTok{=}\DecValTok{220} \OperatorTok{*} \BuiltInTok{int}\NormalTok{(np.ceil(}\BuiltInTok{len}\NormalTok{(images) }\OperatorTok{/}\NormalTok{ ncols)),} +\NormalTok{ )} +\NormalTok{ fig.update\_layout(coloraxis\_showscale}\OperatorTok{=}\VariableTok{False}\NormalTok{)} + \CommentTok{\# Extract the facet number and convert it back to the class label.} +\NormalTok{ fig.for\_each\_annotation(} + \KeywordTok{lambda}\NormalTok{ a: a.update(text}\OperatorTok{=}\NormalTok{images.iloc[}\BuiltInTok{int}\NormalTok{(a.text.split(}\StringTok{"="}\NormalTok{)[}\OperatorTok{{-}}\DecValTok{1}\NormalTok{])][}\StringTok{"class"}\NormalTok{])} +\NormalTok{ )} + \ControlFlowTok{return}\NormalTok{ fig} + + +\NormalTok{fig }\OperatorTok{=}\NormalTok{ show\_images(images.groupby(}\StringTok{"class"}\NormalTok{, as\_index}\OperatorTok{=}\VariableTok{False}\NormalTok{).sample(}\DecValTok{2}\NormalTok{), ncols}\OperatorTok{=}\DecValTok{6}\NormalTok{)} +\NormalTok{fig.show()} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +Let's break this down further and look at it by class, or the category +of clothing: + +\begin{Shaded} +\begin{Highlighting}[] +\BuiltInTok{print}\NormalTok{(class\_dict)} + +\NormalTok{show\_images(images.groupby(}\StringTok{\textquotesingle{}class\textquotesingle{}}\NormalTok{,as\_index}\OperatorTok{=}\VariableTok{False}\NormalTok{).sample(}\DecValTok{2}\NormalTok{), ncols}\OperatorTok{=}\DecValTok{6}\NormalTok{)} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +{0: 'T-shirt/top', 1: 'Trouser', 2: 'Pullover', 3: 'Dress', 4: 'Coat', 5: 'Sandal', 6: 'Shirt', 7: 'Sneaker', 8: 'Bag', 9: 'Ankle boot'} +\end{verbatim} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +\subsection{Raw Data}\label{raw-data} + +As we can see, each 28x28 pixel image is labelled by the category of +clothing it belongs to. Us humans can very easily look at these images +and identify the type of clothing being displayed, even if the image is +a little blurry. However, this task is less intuitive for machine +learning models. To illustrate this, let's take a small sample of the +training data to see how the images above are represented in their raw +format: + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{images.head()} +\end{Highlighting} +\end{Shaded} + +\begin{longtable}[]{@{}llll@{}} +\toprule\noalign{} +& images & labels & class \\ +\midrule\noalign{} +\endhead +\bottomrule\noalign{} +\endlastfoot +0 & {[}{[}1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,... & 3 & Dress \\ +1 & {[}{[}1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,... & 4 & Coat \\ +2 & {[}{[}1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,... 
& 0 &
T-shirt/top \\
3 & {[}{[}1.0, 1.0, 1.0, 1.0, 1.0, 0.996078431372549, ... & 2 &
Pullover \\
4 & {[}{[}1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0,... & 1 &
Trouser \\
\end{longtable}

Each row represents one image. Every image belongs to a \texttt{"class"}
of clothing with its enumerated \texttt{"label"}. In place of a
typically displayed image, the raw data contains a 28x28 \emph{2D array
of pixel values}; each pixel value is a float between 0 and 1. If we
just focus on the images, we get a 3D matrix. You can think of this as a
matrix containing 2D images.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{X }\OperatorTok{=}\NormalTok{ np.array(images[}\StringTok{"images"}\NormalTok{].to\_list())}
\NormalTok{X.shape}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
(5000, 28, 28)
\end{verbatim}

However, we're not used to working with 3D matrices for our training
data \texttt{X}. Typical training data expects a \emph{vector} of
features for each datapoint, not a matrix per datapoint. We can reshape
our 3D matrix so that it fits the usual format by ``unrolling'' the
28x28 pixels into a single row vector containing 28*28 = 784
dimensions.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{X }\OperatorTok{=}\NormalTok{ X.reshape(X.shape[}\DecValTok{0}\NormalTok{], }\OperatorTok{{-}}\DecValTok{1}\NormalTok{)}
\NormalTok{X.shape}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
(5000, 784)
\end{verbatim}

What we have now is 5000 datapoints that each have 784 features. That's
a lot of features! Not only would training a model on this data take a
very long time, it's also very likely that many of these features are
redundant; that is, the columns of our matrix are close to linearly
dependent. PCA is a very good strategy to use in situations like this,
where there are many features but much of the information they carry is
redundant.

\subsection{\texorpdfstring{PCA with
\texttt{sklearn}}{PCA with sklearn}}\label{pca-with-sklearn}

To perform PCA, let's begin by centering our data.

\begin{Shaded}
\begin{Highlighting}[]
\NormalTok{X }\OperatorTok{=}\NormalTok{ X }\OperatorTok{{-}}\NormalTok{ X.mean(axis}\OperatorTok{=}\DecValTok{0}\NormalTok{)}
\end{Highlighting}
\end{Shaded}

We can run PCA using \texttt{sklearn}'s \texttt{PCA} class.

\begin{Shaded}
\begin{Highlighting}[]
\ImportTok{from}\NormalTok{ sklearn.decomposition }\ImportTok{import}\NormalTok{ PCA}

\NormalTok{n\_comps }\OperatorTok{=} \DecValTok{50}
\NormalTok{pca }\OperatorTok{=}\NormalTok{ PCA(n\_components}\OperatorTok{=}\NormalTok{n\_comps)}
\NormalTok{pca.fit(X)}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
PCA(n_components=50)
\end{verbatim}

\subsection{Examining PCA Results}\label{examining-pca-results}

Now that \texttt{sklearn} has found the principal components for us,
let's visualize a scree plot.

\begin{Shaded}
\begin{Highlighting}[]
\CommentTok{\# Make a line plot and show markers}
\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.line(y}\OperatorTok{=}\NormalTok{pca.explained\_variance\_ratio\_ }\OperatorTok{*} \DecValTok{100}\NormalTok{, markers}\OperatorTok{=}\VariableTok{True}\NormalTok{)}
\NormalTok{fig.show()}
\end{Highlighting}
\end{Shaded}

\begin{verbatim}
Unable to display output for mime type(s): text/html
\end{verbatim}

We can see that the line starts flattening out around 2 or 3, which
suggests that most of the variance in the data is captured by just the
first two or three dimensions.
To illustrate this, let's plot the first three +principal components and the datapoints' corresponding classes. Can you +identify any patterns? + +\begin{Shaded} +\begin{Highlighting}[] +\NormalTok{images[[}\StringTok{\textquotesingle{}z1\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}z2\textquotesingle{}}\NormalTok{, }\StringTok{\textquotesingle{}z3\textquotesingle{}}\NormalTok{]] }\OperatorTok{=}\NormalTok{ pca.transform(X)[:, :}\DecValTok{3}\NormalTok{]} +\NormalTok{fig }\OperatorTok{=}\NormalTok{ px.scatter\_3d(images, x}\OperatorTok{=}\StringTok{\textquotesingle{}z1\textquotesingle{}}\NormalTok{, y}\OperatorTok{=}\StringTok{\textquotesingle{}z2\textquotesingle{}}\NormalTok{, z}\OperatorTok{=}\StringTok{\textquotesingle{}z3\textquotesingle{}}\NormalTok{, color}\OperatorTok{=}\StringTok{\textquotesingle{}class\textquotesingle{}}\NormalTok{, hover\_data}\OperatorTok{=}\NormalTok{[}\StringTok{\textquotesingle{}labels\textquotesingle{}}\NormalTok{], } +\NormalTok{ width}\OperatorTok{=}\DecValTok{1000}\NormalTok{, height}\OperatorTok{=}\DecValTok{800}\NormalTok{)} +\CommentTok{\# set marker size to 5} +\NormalTok{fig.update\_traces(marker}\OperatorTok{=}\BuiltInTok{dict}\NormalTok{(size}\OperatorTok{=}\DecValTok{5}\NormalTok{))} +\end{Highlighting} +\end{Shaded} + +\begin{verbatim} +Unable to display output for mime type(s): text/html +\end{verbatim} + +\section{Why Perform PCA}\label{why-perform-pca} + +As we saw in the demos, we often perform PCA during the Exploratory Data +Analysis (EDA) stage of our data science lifecycle (if we already know +what to model, we probably don't need PCA!). It helps us with: + +\begin{itemize} +\tightlist +\item + Visually identifying clusters of similar observations in high + dimensions. +\item + Removing irrelevant dimensions if we suspect that the dataset is + inherently low rank. For example, if the columns are collinear: there + are many attributes but only a few mostly determine the rest through + linear associations. +\item + Finding a small basis for representing variations in complex things, + e.g., images, genes. +\item + Reducing the number of dimensions to make some computations cheaper. +\end{itemize} + +\subsection{Why PCA, then Model?}\label{why-pca-then-model} + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Reduces dimensionality, allowing us to speed up training and reduce + the number of features, etc. +\item + Avoids multicollinearity in the new features created (i.e.~the + principal components) +\end{enumerate} + +\section{(Bonus) Applications of PCA}\label{bonus-applications-of-pca} + +\subsection{PCA in Biology}\label{pca-in-biology} + +PCA is commonly used in biomedical contexts, which have many named +variables! It can be used to: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Cluster data + (\href{https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-019-2680-1}{Paper + 1}, + \href{https://www.science.org/doi/10.1126/scirobotics.abk2378}{Paper + 2}). +\item + Identify correlated variables + (\href{https://docs.google.com/presentation/d/1-aDu0ILCkPx3iCcJGB3YXci-L4g90Q6AarXU6wffLB8/edit\#slide=id.g62cb86badb_0_1128}{interpret} + rows of \(V^{T}\) as linear coefficients) + (\href{https://www.nature.com/articles/s41598-017-05714-1}{Paper 3}). 
+ Uses + \href{https://www.google.com/url?q=https://www.geo.fu-berlin.de/en/v/soga/Geodata-analysis/Principal-Component-Analysis/principal-components-basics/Interpretation-and-visualization/index.html\%23:~:text\%3DThe\%2520biplot\%2520is\%2520a\%2520very,in\%2520a\%2520single\%2520biplot\%2520display.\%26text\%3DThe\%2520plot\%2520shows\%2520the\%2520observations,principal\%2520components\%2520(synthetic\%2520variables).&sa=D&source=editors&ust=1682131633152964&usg=AOvVaw2H9SOeMP5kUS890Fkhfthx}{biplots}. +\end{enumerate} + +\section{(Bonus) PCA vs.~Regression}\label{bonus-pca-vs.-regression} + +\subsection{Regression: Minimizing Horizontal/Verticle +Error}\label{regression-minimizing-horizontalverticle-error} + +Suppose we know the child mortality rate of a given country. Linear +regression tries to predict the fertility rate from the mortality rate; +for example, if the mortality is 6, we might guess the fertility is near +4. The regression line tells us the ``best'' prediction of fertility +given all possible mortality values by minimizing the root mean squared +error. See the vertical red lines (note that only some are shown). + +We can also perform a regression in the reverse direction. That is, +given fertility, we try to predict mortality. In this case, we get a +different regression line that minimizes the root mean squared length of +the horizontal lines. + +\subsection{SVD: Minimizing Perpendicular +Error}\label{svd-minimizing-perpendicular-error} + +The rank-1 approximation is close but not the same as the mortality +regression line. Instead of minimizing \emph{horizontal} or +\emph{vertical} error, our rank-1 approximation minimizes the error +\emph{perpendicular} to the subspace onto which we're projecting. That +is, SVD finds the line such that if we project our data onto that line, +the error between the projection and our original data is minimized. The +similarity of the rank-1 approximation and the fertility was just a +coincidence. Looking at adiposity and bicep size from our body +measurements dataset, we see the 1D subspace onto which we are +projecting is between the two regression lines. + +\subsection{Beyond 1D and 2D}\label{beyond-1d-and-2d} + +Even in higher dimensions, the idea behind principal components is the +same! Suppose we have 30-dimensional data and decide to use the first 5 +principal components. Our procedure minimizes the error between the +original 30-dimensional data and the projection of that 30-dimensional +data onto the ``best'' 5-dimensional subspace. See +\href{https://eecs189.org/docs/notes/n10.pdf}{CS 189 Note 10} for more +details. + +\section{(Bonus) Automatic +Factorization}\label{bonus-automatic-factorization} + +One key fact to remember is that the decomposition is not arbitrary. The +\emph{rank} of a matrix limits how small our inner dimensions can be if +we want to perfectly recreate our matrix. The proof for this is out of +scope. + +Even if we know we have to factorize our matrix using an inner dimension +of \(R\), that still leaves a large space of solutions to traverse. What +if we have a procedure to automatically factorize a rank \(R\) matrix +into an \(R\)-dimensional representation with some transformation +matrix? + +\begin{itemize} +\tightlist +\item + Lower dimensional representation avoids redundant features. +\item + Imagine a 1000-dimensional dataset: If the rank is only 5, it's much + easier to do EDA after this mystery procedure. +\end{itemize} + +What if we wanted a 2D representation? 
It's valuable to compress all of the relevant data into as few
dimensions as possible so that we can plot it efficiently. Some 2D
representations approximate the original data better than others. How
well can we do?

\section{(Bonus) Proof of Component
Score}\label{bonus-proof-of-component-score}

The proof that defines the component score is out of scope for this
class, but it is included below for your convenience.

\textbf{Setup}: Consider the design matrix
\(X \in \mathbb{R}^{n \times d}\), where the \(j\)-th column
(corresponding to the \(j\)-th feature) is \(x_j \in \mathbb{R}^n\) and
the element in row \(i\), column \(j\) is \(x_{ij}\). Further, define
\(\tilde{X}\) as the \textbf{centered} design matrix. The \(j\)-th
column is \(\tilde{x}_j \in \mathbb{R}^n\) and the element in row \(i\),
column \(j\) is \(\tilde{x}_{ij} = x_{ij} - \bar{x_j}\), where
\(\bar{x_j}\) is the mean of the \(x_j\) column vector from the original
\(X\).

\textbf{Variance}: Construct the \textbf{covariance matrix}:
\(\frac{1}{n} \tilde{X}^T \tilde{X} \in \mathbb{R}^{d \times d}\). The
\(j\)-th element along the diagonal is the \textbf{variance} of the
\(j\)-th column of the original design matrix \(X\):

\[\left( \frac{1}{n} \tilde{X}^T \tilde{X} \right)_{jj} = \frac{1}{n} \tilde{x}_j ^T \tilde{x}_j = \frac{1}{n} \sum_{i=1}^n (\tilde{x}_{ij} )^2 = \frac{1}{n} \sum_{i=1}^n (x_{ij} - \bar{x_j})^2\]

\textbf{SVD}: Suppose the singular value decomposition of the
\emph{centered} design matrix \(\tilde{X}\) yields
\(\tilde{X} = U S V^T\), where \(U \in \mathbb{R}^{n \times d}\) and
\(V \in \mathbb{R}^{d \times d}\) are matrices with orthonormal columns,
and \(S \in \mathbb{R}^{d \times d}\) is a diagonal matrix containing
the singular values of \(\tilde{X}\).

\[
\begin{aligned}
\tilde{X}^T \tilde{X} &= (U S V^T )^T (U S V^T) \\
&= V S U^T U S V^T & (S^T = S) \\
&= V S^2 V^T & (U^T U = I) \\
\frac{1}{n} \tilde{X}^T \tilde{X} &= \frac{1}{n} V S^2 V^T = V \left( \frac{1}{n} S^2 \right) V^T \\
\frac{1}{n} \tilde{X}^T \tilde{X} V &= V \left( \frac{1}{n} S^2 \right) V^T V = V \left( \frac{1}{n} S^2 \right) & \text{(right multiply by } V \rightarrow V^T V = I \text{)} \\
V^T \frac{1}{n} \tilde{X}^T \tilde{X} V &= V^T V \left( \frac{1}{n} S^2 \right) = \frac{1}{n} S^2 & \text{(left multiply by } V^T \rightarrow V^T V = I \text{)} \\
\left( V^T \frac{1}{n} \tilde{X}^T \tilde{X} V \right)_{jj} &= \frac{1}{n} S_j^2 & \text{(define } S_j \text{ as the } j\text{-th singular value)} \\
\frac{1}{n} S_j^2 &= \frac{1}{n} v_j^T \tilde{X}^T \tilde{X} v_j = \frac{1}{n} \sum_{i=1}^n \left( \tilde{X} v_j \right)_i^2 & \text{(where } v_j \text{ is the } j\text{-th column of } V \text{)}
\end{aligned}
\]

The last line is the \(j\)-th \textbf{component score}: the variance of
the data projected onto the \(j\)-th principal component direction
\(v_j\). Taking the trace of both sides of
\(V^T \frac{1}{n} \tilde{X}^T \tilde{X} V = \frac{1}{n} S^2\) also shows
that the component scores sum to the total variance across all columns
of \(X\).
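
To make this concrete, here is a small numerical check (a sketch that is not part of the course demo; the toy matrix and variable names are illustrative): for a random design matrix, \(\frac{1}{n} S_j^2\) matches the variance of the \(j\)-th principal component \(\tilde{X} v_j\), and the component scores sum to the total variance of the columns.

\begin{verbatim}
import numpy as np

# Numerical check of the component score identity above (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))   # toy design matrix
n = X.shape[0]

X_tilde = X - X.mean(axis=0)                  # centered design matrix
U, S, Vt = np.linalg.svd(X_tilde, full_matrices=False)

component_scores = S**2 / n                   # (1/n) S_j^2 for each j
pc_variances = (X_tilde @ Vt.T).var(axis=0)   # variance of each principal component
total_variance = X_tilde.var(axis=0).sum()    # sum of the column variances

print(np.allclose(component_scores, pc_variances))         # True
print(np.isclose(component_scores.sum(), total_variance))  # True
\end{verbatim}

Both checks should print \texttt{True} (up to floating-point tolerance).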
+ +\bookmarksetup{startatroot} + +\chapter{Clustering}\label{clustering} + +\begin{tcolorbox}[enhanced jigsaw, colframe=quarto-callout-note-color-frame, left=2mm, breakable, opacitybacktitle=0.6, bottomrule=.15mm, opacityback=0, title=\textcolor{quarto-callout-note-color}{\faInfo}\hspace{0.5em}{Learning Outcomes}, colback=white, coltitle=black, rightrule=.15mm, colbacktitle=quarto-callout-note-color!10!white, bottomtitle=1mm, toprule=.15mm, toptitle=1mm, leftrule=.75mm, titlerule=0mm, arc=.35mm] + +\begin{itemize} +\tightlist +\item + Introduction to clustering +\item + Assessing the taxonomy of clustering approaches +\item + K-Means clustering +\item + Clustering with no explicit loss function: minimizing inertia +\item + Hierarchical Agglomerative Clustering +\item + Picking K: a hyperparameter +\end{itemize} + +\end{tcolorbox} + +Last time, we began our journey into unsupervised learning by discussing +Principal Component Analysis (PCA). + +In this lecture, we will explore another very popular unsupervised +learning concept: clustering. Clustering allows us to ``group'' similar +datapoints together without being given labels of what ``class'' or +where each point explicitly comes from. We will discuss two clustering +algorithms: K-Means clustering and hierarchical agglomerative +clustering, and we'll examine the assumptions, strengths, and drawbacks +of each one. + +\section{Review: Taxonomy of Machine +Learning}\label{review-taxonomy-of-machine-learning} + +\subsection{Supervised Learning}\label{supervised-learning} + +In supervised learning, our goal is to create a function that maps +inputs to outputs. Each model is learned from example input/output pairs +(training set), validated using input/output pairs, and eventually +tested on more input/output pairs. Each pair consists of: + +\begin{itemize} +\tightlist +\item + Input vector +\item + Output value (\textbf{label}) +\end{itemize} + +In regression, our output value is quantitative, and in classification, +our output value is categorical. + +\begin{figure}[H] + +{\centering \includegraphics{clustering/images/ml_taxonomy.png} + +} + +\caption{ML taxonomy} + +\end{figure}% + +\subsection{Unsupervised Learning}\label{unsupervised-learning} + +In unsupervised learning, our goal is to identify patterns in +\textbf{unlabeled} data. In this type of learning, we do not have +input/output pairs. Sometimes, we may have labels but choose to ignore +them (e.g.~PCA on labeled data). Instead, we are more interested in the +inherent structure of the data we have rather than trying to simply +predict a label using that structure of data. For example, if we are +interested in dimensionality reduction, we can use PCA to reduce our +data to a lower dimension. + +Now, let's consider a new problem: clustering. + +\subsection{Clustering Examples}\label{clustering-examples} + +\subsubsection{Example 1}\label{example-1-1} + +Consider this figure from Fall 2019 Midterm 2. The original dataset had +8 dimensions, but we have used PCA to reduce our data down to 2 +dimensions. + +Each point represents the 1st and 2nd principal component of how much +time patrons spent at 8 different zoo exhibits. Visually and +intuitively, we could potentially guess that this data belongs to 3 +groups: one for each cluster. The goal of clustering is now to assign +each point (in the 2 dimensional PCA representation) to a cluster. + +This is an unsupervised task, as: + +\begin{itemize} +\tightlist +\item + We don't have labels for each visitor. 
+\item + We want to infer patterns even without labels. +\end{itemize} + +\subsubsection{Example 2: Netflix}\label{example-2-netflix} + +Now suppose you're Netflix and are looking at information on customer +viewing habits. Clustering can come in handy here. We can assign each +person or show to a ``cluster.'' (Note: while we don't know for sure +that Netflix actually uses ML clustering to identify these categories, +they could, in principle.) + +Keep in mind that with clustering, we don't need to define clusters in +advance; it discovers groups automatically. On the other hand, with +classification, we have to decide labels in advance. This marks one of +the key differences between clustering and classification. + +\subsubsection{Example 3: Education}\label{example-3-education} + +Let's say we're working with student-generated materials and pass them +into the S-BERT module to extract sentence embeddings. Features from +clusters are extracted to: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\tightlist +\item + Detect anomalies in group activities +\item + Predict the group's median quiz grade +\end{enumerate} + +Here we can see the outline of the anomaly detection module. It consists +of: + +\begin{itemize} +\tightlist +\item + S-BERT feature extraction +\item + Topic extraction +\item + Feature extraction +\item + 16D \(\rightarrow\) 2D PCA dimensionality reduction and 2D + \(\rightarrow\) 16D reconstruction +\item + Anomaly detection based on reconstruction error +\end{itemize} + +Looking more closely at our clustering, we can better understand the +different components, which are represented by the centers. Below we +have two examples. + +Note that the details for this example are not in scope. + +\subsubsection{Example 4: Reverse Engineering +Biology}\label{example-4-reverse-engineering-biology} + +Now, consider the plot below: + +The rows of this plot are conditions (e.g., a row might be: ``poured +acid on the cells''), and the columns are genes. The green coloration +indicates that the gene was ``off'' whereas red indicates the gene was +``on''. For example, the \textasciitilde9 genes in the top left corner +of the plot were all turned off by the 6 experiments (rows) at the top. + +In a clustering lens, we might be interested in clustering similar +observations together based on the reactions (on/off) to certain +experiments. + +For example, here is a look at our data before and after clustering. + +Note: apologies if you can't differentiate red from green by eye! +Historical visualizations are not always the best. + +\section{Taxonomy of Clustering +Approaches}\label{taxonomy-of-clustering-approaches} + +There are many types of clustering algorithms, and they all have +strengths, inherent weaknesses, and different use cases. We will first +focus on a partitional approach: K-Means clustering. + +\section{K-Means Clustering}\label{k-means-clustering} + +The most popular clustering approach is K-Means. The algorithm itself +entails the following: + +\begin{enumerate} +\def\labelenumi{\arabic{enumi}.} +\item + Pick an arbitrary \(k\), and randomly place \(k\) ``centers'', each a + different color. +\item + Repeat until convergence: + + \begin{enumerate} + \def\labelenumii{\alph{enumii}.} + \tightlist + \item + Color points according to the closest center. + \item + Move the center for each color to the center of points with that + color. 
+ \end{enumerate} +\end{enumerate} + +Consider the following data with an arbitrary \(k = 2\) and randomly +placed ``centers'' denoted by the different colors (blue, orange): + +Now, we will follow the rest of the algorithm. First, let us color each +point according to the closest center: + +Next, we will move the center for each color to the center of points +with that color. Notice how the centers are generally well-centered +amongst the data that shares its color. + +Assume this process (re-color and re-set centers) repeats for a few more +iterations. We eventually reach this state. + +After this iteration, the center stays still and does not move at all. +Thus, we have converged, and the clustering is complete! + +\subsubsection{A Quick Note}\label{a-quick-note} + +K-Means is a completely different algorithm than K-Nearest Neighbors. +K-means is used for \emph{clustering}, where each point is assigned to +one of \(K\) clusters. On the other hand, K-Nearest Neighbors is used +for \emph{classification} (or, less often, regression), and the +predicted value is typically the most common class among the +\(K\)-nearest data points in the training set. The names may be similar, +but there isn't really anything in common. + +\section{Minimizing Inertia}\label{minimizing-inertia} + +Consider the following example where \(K = 4\): + +Due to the randomness of where the \(K\) centers initialize/start, you +will get a different output/clustering every time you run K-Means. +Consider three possible K-Means outputs; the algorithm has converged, +and the colors denote the final cluster they are clustered as. + +Which clustering output is the best? To evaluate different clustering +results, we need a loss function. + +The two common loss functions are: + +\begin{itemize} +\tightlist +\item + \textbf{Inertia}: Sum of squared distances from each data point to its + center. +\item + \textbf{Distortion}: Weighted sum of squared distances from each data + point to its center. +\end{itemize} + +In the example above: + +\begin{itemize} +\tightlist +\item + Calculated inertia: + \(0.47^2 + 0.19^2 + 0.34^2 + 0.25^2 + 0.58^2 + 0.36^2 + 0.44^2\) +\item + Calculated distortion: + \(\frac{0.47^2 + 0.19^2 + 0.34^2}{3} + \frac{0.25^2 + 0.58^2 + 0.36^2 + 0.44^2}{4}\) +\end{itemize} + +Switching back to the four-cluster example at the beginning of this +section, \texttt{random.seed(25)} had an inertia of \texttt{44.96}, +\texttt{random.seed(29)} had an inertia of \texttt{45.95}, and +\texttt{random.seed(40)} had an inertia of \texttt{54.35}. It seems that +the best clustering output was \texttt{random.seed(25)} with an inertia +of \texttt{44.96}! + +It turns out that the function K-Means is trying to minimize is inertia, +but often fails to find global optimum. Why does this happen? We can +think of K-means as a pair of optimizers that take turns. The first +optimizer holds \emph{center positions} constant and optimizes +\emph{data colors}. The second optimizer holds \emph{data colors} +constant and optimizes \emph{center positions}. Neither optimizer gets +full control! + +This is a hard problem: give an algorithm that optimizes inertia FOR A +GIVEN \(K\); \(K\) is picked in advance. Your algorithm should return +the EXACT best centers and colors, but you don't need to worry about +runtime. 
+ +\emph{Note: This is a bit of a CS61B/CS70/CS170 problem, so do not worry +about completely understanding the tricky predicament we are in too +much!} + +A potential algorithm: + +\begin{itemize} +\tightlist +\item + For all possible \(k^n\) colorings: + + \begin{itemize} + \tightlist + \item + Compute the \(k\) centers for that coloring. + \item + Compute the inertia for the \(k\) centers. + + \begin{itemize} + \tightlist + \item + If current inertia is better than best known, write down the + current centers and coloring and call that the new best known. + \end{itemize} + \end{itemize} +\end{itemize} + +No better algorithm has been found for solving the problem of minimizing +inertia exactly. + +\section{Hierarchical Agglomerative +Clustering}\label{hierarchical-agglomerative-clustering} + +Now, let us consider hierarchical agglomerative clustering. + +Consider the following results of two K-Means clustering outputs: + +Which clustering result do you like better? It seems K-Means likes the +one on the right better because it has lower inertia (the sum of squared +distances from each data point to its center), but this raises some +questions: + +\begin{itemize} +\tightlist +\item + Why is the inertia on the right lower? K-Means optimizes for distance, + not ``blobbiness''. +\item + Is clustering on the right ``wrong''? Good question! +\end{itemize} + +Now, let us introduce Hierarchical Agglomerative Clustering! We start +with every data point in a separate cluster, and we'll keep merging the +most similar pairs of data points/clusters until we have one big cluster +left. This is called a \textbf{bottom-up} or \textbf{agglomerative +method}. + +There are various ways to decide the order of combining clusters called +\textbf{Linkage Criterion}: + +\begin{itemize} +\tightlist +\item + \textbf{Single linkage} (similarity of the most similar): the distance + between two clusters as the \textbf{minimum} distance between a point + in the first cluster and a point in the second. +\item + \textbf{Complete linkage} (similarity of the least similar): the + distance between two clusters as the \textbf{maximum} distance between + a point in the first cluster and a point in the second. +\item + \textbf{Average linkage}: \textbf{average} similarity of pairs of + points in clusters. +\end{itemize} + +The linkage criterion decides how we measure the ``distance'' between +two clusters. Regardless of the criterion we choose, the aim is to +combine the two clusters that have the minimum ``distance'' between +them, with the distance computed as per that criterion. In the case of +complete linkage, for example, that means picking the two clusters that +minimize the maximum distance between a point in the first cluster and a +point in the second. + +When the algorithm starts, every data point is in its own cluster. In +the plot below, there are 12 data points, so the algorithm starts with +12 clusters. As the clustering begins, it assesses which clusters are +the closest together. + +The closest clusters are 10 and 11, so they are merged together. + +Next, points 0 and 4 are merged together because they are closest. + +At this point, we have 10 clusters: 8 with a single point (clusters 1, +2, 3, 4, 5, 6, 7, 8, and 9) and 2 with 2 points (clusters 0 and 10). + +Although clusters 0 and 3 are not the closest, let us consider if we +were trying to merge them. A tricky question arises: what is the +``distance'' between clusters 0 and 3? 
We can use the +\textbf{Complete-Link} approach that uses the \textbf{max} distance +among all pairs of points between groups to decide which group has +smaller ``distance''. + +Let us assume the algorithm runs a little longer, and we have reached +the following state. Clusters 0 and 7 are up next, but why? The +\textbf{max line between any member of 0 and 6} is longer than the +\textbf{max line between any member of 0 and 7}. + +Thus, 0 and 7 are merged into 0 as they are closer under the complete +linkage criterion. + +After more iterations, we finally converge to the plot on the left. +There are two clusters (0, 1), and the agglomerative algorithm has +converged. + +Notice that on the full dataset, our agglomerative clustering algorithm +achieves the more ``correct'' output. + +\subsection{Clustering, Dendrograms, and +Intuition}\label{clustering-dendrograms-and-intuition} + +Agglomerative clustering is one form of ``hierarchical clustering.'' It +is interpretable because we can keep track of when two clusters got +merged (each cluster is a tree), and we can visualize the merging +hierarchy, resulting in a ``dendrogram.'' Won't discuss this any further +for this course, but you might see these in the wild. Here are some +examples: + +Some professors use agglomerative clustering for grading bins; if there +is a big gap between two people, draw a grading threshold there. The +idea is that grade clustering should be more like the figure below on +the left, not the right. + +\section{Picking K}\label{picking-k} + +The algorithms we've discussed require us to pick a \(K\) before we +start. But how do we pick \(K\)? Often, the best \(K\) is subjective. +For example, consider the state plot below. + +How many clusters are there here? For K-Means, one approach to determine +this is to plot inertia versus many different \(K\) values. We'd pick +the \(K\) in the \textbf{elbow}, where we get diminishing returns +afterward. Note that big, complicated data often lacks an elbow, so this +method is not foolproof. Here, we would likely select \(K = 2\). + +\subsection{Silhouette Scores}\label{silhouette-scores} + +To evaluate how ``well-clustered'' a specific data point is, we can use +the \textbf{silhouette score}, also termed the \textbf{silhouette +width}. A high silhouette score indicates that a point is near the other +points in its cluster; a low score means that it's far from the other +points in its cluster. + +For a data point \(X\), score \(S\) is: \[S =\frac{B - A}{\max(A, B)}\] +where \(A\) is the average distance to \emph{other} points in the +cluster, and \(B\) is the average distance to points in the +\emph{closest} cluster. + +Consider what the highest possible value of \(S\) is and how that value +can occur. The highest possible value of \(S\) is 1, which happens if +every point in \(X\)'s cluster is right on top of \(X\); the average +distance to other points in \(X\)'s cluster is \(0\), so \(A = 0\). +Thus, \(S = \frac{B}{\max(0, B)} = \frac{B}{B} = 1\). Another case where +\(S = 1\) could happen is if \(B\) is \emph{much} greater than \(A\) (we +denote this as \(B >> A\)). + +Can \(S\) be negative? The answer is yes. If the average distance to X's +clustermates is larger than the distance to the closest cluster, then +this is possible. For example, the ``low score'' point on the right of +the image above has \(S = -0.13\). + +\subsection{Silhouette Plot}\label{silhouette-plot} + +We can plot the \textbf{silhouette scores} for all of our datapoints. 

The x-axis represents the silhouette coefficient value, or silhouette
score. The y-axis tells us which cluster label the points belong to, as
well as the number of points within a particular cluster. Points with
large silhouette widths are deeply embedded in their cluster; the red
dotted line shows the average. Below, we plot the silhouette scores for
our dataset with \(K=2\).

Similarly, we can plot the silhouette scores for the same dataset but
with \(K=3\):

The average silhouette score is lower with 3 clusters, so \(K=2\) is a
better choice. This aligns with our visual intuition as well.

\subsection{Picking K: Real World
Metrics}\label{picking-k-real-world-metrics}

Sometimes you can rely on real-world metrics to guide your choice of
\(K\). For example, to design t-shirt sizes, we can either:

\begin{itemize}
\tightlist
\item
  Cluster heights and weights of customers with \(K = 3\) to design
  Small, Medium, and Large shirts
\item
  Cluster heights and weights of customers with \(K = 5\) to design XS,
  S, M, L, and XL shirts
\end{itemize}

To choose \(K\), we can compare the projected costs and sales for the
two options and select the one that maximizes profit.

\section{Conclusion}\label{conclusion-1}

We've now discussed a new machine learning goal, clustering, and
explored two solutions:

\begin{itemize}
\tightlist
\item
  K-Means Clustering tries to optimize a loss function called inertia
  (there is no known algorithm that finds the optimal answer
  efficiently).
\item
  Hierarchical Agglomerative Clustering builds clusters bottom-up by
  merging clusters that are ``close'' to each other, where closeness
  depends on the choice of linkage.
\end{itemize}

Our version of these algorithms required a hyperparameter \(K\). There
are several ways to pick \(K\), including the elbow method, silhouette
scores, and real-world metrics.

There are many machine learning problems, each of which can be
addressed by many different solution techniques and evaluated with many
different metrics for success or loss. Many techniques also apply
across problem types; for example, linear models can be used for both
regression and classification.

We've only scratched the surface and haven't discussed many important
ideas, such as neural networks and deep learning. In the last lecture,
we'll provide some specific course recommendations on how to explore
these topics further.




\end{document}
diff --git a/index.toc b/index.toc
new file mode 100644
index 000000000..e69de29bb