Statistics and Statistical Programming (Fall 2020)/pset8

== Programming challenges (Part I) ==

The first set of programming challenges this week poses an open-ended set of questions about a simulated dataset from an observational study of high school graduates' academic achievement and subsequent income. Here is some information about the "study design" ('''note: this is not data from an actual study'''):

:: Data from twelve cohorts of public high school students were collected from across the Chicago suburbs. Each cohort consists of a random sample of 142 students from a single suburban school district. For each student, researchers gathered a standardized measure of the student's aggregate GPA as a proxy for their academic achievement. The researchers then matched the students' names against IRS records five years later and collected each student's reported pre-tax earnings for that year.

I have provided you with a version of the dataset from this hypothetical study in which each row corresponds to one student. For each student, the dataset contains the following variables (see the sketch after this list for one way to load and inspect them):

* id: A unique numeric identifier for each student in the study (randomly generated to preserve student anonymity).
* cohort: An anonymized label of the cohort (school district) the student was drawn from.
* gpa: Approximate GPA percentile of the student within the entire district. Note that this means all student GPAs within each district were aggregated and converted to an identical scale before percentiles were calculated.
* income: Pre-tax income (in thousands of US dollars) reported to the U.S. federal government (IRS) by the student five years after graduation.
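If it helps to get oriented, here is a minimal sketch of one way to load and inspect the file in R. The filename <code>hs_grads.csv</code> and the CSV format are assumptions; adjust them to match the file you actually received.

<syntaxhighlight lang="R">
# Load the dataset. The filename here is a placeholder -- use the file you actually downloaded.
grads <- read.csv("hs_grads.csv", stringsAsFactors = FALSE)

# Sanity checks: one row per student, with the four variables described above.
dim(grads)
head(grads)
summary(grads)

# cohort is an anonymized label, so it usually makes sense to treat it as a factor.
grads$cohort <- factor(grads$cohort)
table(grads$cohort)
</syntaxhighlight>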

For the rest of this programming challenge, you should use this dataset to answer the following research questions:

* How does high school academic achievement relate to earnings?
* How does this relationship vary by school district?

You may use any analytical procedures you deem appropriate given the structure of the dataset and study design. Some things you may want to keep in mind (one possible starting point is sketched after this list):

* ANOVAs, t-tests, and linear regression can help you test different kinds of hypotheses.
* Adjusting for multiple comparisons is important.
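To make those bullet points concrete, here is one possible (not definitive) analysis sketch in R. It assumes the <code>grads</code> data frame from the loading sketch above, with <code>cohort</code> already converted to a factor; the specific models and the Holm adjustment are illustrative choices rather than requirements.

<syntaxhighlight lang="R">
# RQ1: overall relationship between GPA percentile and income.
m_overall <- lm(income ~ gpa, data = grads)
summary(m_overall)

# Do mean incomes differ across districts at all? A one-way ANOVA is one way to check.
summary(aov(income ~ cohort, data = grads))

# RQ2: does the GPA-income relationship vary by district?
# An interaction model lets the slope differ across cohorts; compare it to the
# additive model with an F test.
m_add <- lm(income ~ gpa + cohort, data = grads)
m_int <- lm(income ~ gpa * cohort, data = grads)
anova(m_add, m_int)

# If you end up making many district-by-district comparisons, adjust the p-values,
# e.g., with Holm's method (pairwise.t.test does this for group means).
pairwise.t.test(grads$income, grads$cohort, p.adjust.method = "holm")
</syntaxhighlight>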

== Programming challenges (Part II) ==

The second set of programming challenges this week revisits the trick-or-treating experiment [[Statistics_and_Statistical_Programming_(Fall_2020)/pset5|we analyzed a few weeks ago]].

Load up the full dataset and fit the following model:

:: <math>\widehat{\mathrm{fruit}} = \beta_0 + \beta_1 \mathrm{obama} + \beta_2 \mathrm{age} + \beta_3 \mathrm{male} + \beta_4 \mathrm{year} + \varepsilon</math>

Note that you may want/need to convert some of these variables to appropriate types/classes. Interpret the results of the model.
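Here is a minimal sketch of one way to do this in R. The filename is a placeholder and the conversions depend on how the variables are actually stored, so treat them as assumptions to check rather than required steps.

<syntaxhighlight lang="R">
# Placeholder filename -- use the file you analyzed for the earlier problem set.
trick_or_treat <- read.csv("trick_or_treat.csv")

# Inspect the classes R guessed; some variables may need converting.
str(trick_or_treat)

# Illustrative conversions (adjust to what the file actually contains). Treating
# the costume and gender indicators as factors lets lm() handle them as dummies;
# the outcome (fruit) should be numeric (e.g., 0/1) for a linear model.
trick_or_treat$obama <- factor(trick_or_treat$obama)
trick_or_treat$male  <- factor(trick_or_treat$male)

# Fit the model from the equation above.
m_full <- lm(fruit ~ obama + age + male + year, data = trick_or_treat)
summary(m_full)
</syntaxhighlight>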

Run the model on three subsets of the dataset: just 2012, 2014, and 2015. Be prepared to talk through the results.
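A sketch of the per-year refits, continuing from the code above. Since <code>year</code> takes a single value within each subset, it is dropped from the right-hand side here; whether that (and anything else about these subset models) is the right call is part of what you should be prepared to discuss.

<syntaxhighlight lang="R">
# Refit the model within single years. year is constant inside each subset,
# so it is dropped from the formula to avoid a redundant (NA) coefficient.
for (yr in c(2012, 2014, 2015)) {
  m_yr <- lm(fruit ~ obama + age + male,
             data = subset(trick_or_treat, year == yr))
  cat("\n==== Year", yr, "====\n")
  print(summary(m_yr))
}
</syntaxhighlight>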