Statistics and Statistical Programming (Winter 2021)/Problem set 16

From CommunityData
== Programming Challenges ==
=== PC1: Replicate analysis from ''OpenIntro'' ===


For this part, please use the <code>mariokart</code> dataset included in the <code>openintro</code> library (and documented [https://www.openintro.org/data/index.php?data=mariokart here]) to do the following:


# Replicate the multiple regression model and results presented in Figure 9.15 on p. 366 of the ''OpenIntro'' textbook.
# Generate plots to diagnose any issues with this model.
# Interpret the results with a particular focus on the relationship between price and two of the predictors: <code>cond_new</code> and <code>stock_photo</code>. Be sure to explain what the results mean for those predictors in terms of the underlying variables (i.e., don't just talk about coefficients as such, but translate them back into the values/scales of the variables).
# Based on the results of this model, how would you advise a prospective vendor of a used copy of ''Mario Kart'' to maximize the auction price they might receive for the game on eBay?
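The steps above might be sketched as follows. This runs on simulated stand-in data so it works without the <code>openintro</code> package; the formula assumes the Figure 9.15 model regresses <code>total_pr</code> on <code>cond_new</code>, <code>stock_photo</code>, <code>duration</code>, and <code>wheels</code>, so verify that against the textbook (which, if I recall correctly, also excludes a couple of extreme auctions before fitting) before relying on it:

```r
## PC1 sketch. In your actual solution, start from the real data:
##   library(openintro); data(mariokart)
## Here I simulate a stand-in with the same column names so the sketch
## runs anywhere; the coefficients below are made up.
set.seed(16)
n <- 141
mariokart <- data.frame(
  cond_new    = rbinom(n, 1, 0.5),               # 1 = listed as new
  stock_photo = rbinom(n, 1, 0.7),               # 1 = stock photo used
  duration    = sample(1:10, n, replace = TRUE), # auction length (days)
  wheels      = sample(0:4, n, replace = TRUE)   # Wii wheels included
)
mariokart$total_pr <- 36 + 5 * mariokart$cond_new + mariokart$stock_photo +
  7 * mariokart$wheels + rnorm(n, sd = 5)

fit <- lm(total_pr ~ cond_new + stock_photo + duration + wheels,
          data = mariokart)
summary(fit)        # compare the coefficient table against Figure 9.15

par(mfrow = c(2, 2))
plot(fit)           # residuals vs. fitted, QQ, scale-location, leverage
```

The four panels from <code>plot(fit)</code> cover the usual linear-model diagnostics (linearity, normality of residuals, constant variance, influential points).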


=== PC2: Analyze and interpret a simulated study of education and income ===


The second part of this problem set poses an open-ended set of questions about a simulated dataset from an observational study of high school graduates' academic achievement and subsequent income. You can '''[https://communitydata.science/~ads/teaching/2020/stats/data/week_11/grads.rds download the data here]'''. I have provided some information about the "study design" below ('''reminder/note: this is not data from an actual study'''):
:: You have been hired as a statistical consultant on a project studying the role of income in shaping academic achievement. Data from twelve cohorts of public high school students was collected from across the Chicago suburbs. Each cohort incorporates a random sample of 142 students from a single suburban school district. For each student, researchers gathered a standardized measure of the students' aggregate GPA as a proxy for their academic achievement. The researchers then matched the students' names against IRS records five years later and collected each student's reported pre-tax earnings for that year.


I have provided you with a version of the dataset from this hypothetical study in which each row corresponds to one student. For each student, the dataset contains the following variables:
* <code>id</code>: A unique numeric identifier for each student in the study (randomly generated to preserve student anonymity).
* <code>cohort</code>: An anonymized label of the cohort (school district) the student was drawn from.
* <code>gpa</code>: Approximate GPA percentile of the student within the entire district. Note that this means all student GPAs within each district were aggregated and converted to an identical scale before percentiles were calculated.
* <code>income</code>: Pre-tax income (in thousands of US dollars) reported to the U.S. federal government (IRS) by the student five years after graduation.


For the rest of this programming challenge, you should use this dataset to answer the following research questions:

# How does high school academic achievement relate to earnings?
# (How) does this relationship vary by school district?


You may use any analytical procedures you deem appropriate given the study design and your current statistical knowledge. Some things you may want to keep in mind:
* Different tests like ANOVAs, t-tests, or linear regression might help you test different kinds of hypotheses.
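One regression-based way to attack both research questions at once is sketched below. It uses simulated stand-in data (so the sketch runs on its own); in your solution you would instead load the real file with <code>readRDS("grads.rds")</code>, and the column names assume the codebook above:

```r
## PC2 sketch with simulated stand-in data; the real data come from
## grads.rds via readRDS(). Only three districts are simulated here.
set.seed(11)
grads <- data.frame(
  cohort = factor(rep(paste0("district_", 1:3), each = 142)),
  gpa    = runif(3 * 142, min = 0, max = 100)
)
grads$income <- 20 + 0.3 * grads$gpa + rnorm(nrow(grads), sd = 8)

## RQ1: the overall achievement-earnings relationship
pooled <- lm(income ~ gpa, data = grads)
summary(pooled)

## RQ2: let both the intercept and the gpa slope vary by district
by_dist <- lm(income ~ gpa * cohort, data = grads)
anova(pooled, by_dist)  # does district-specific variation improve fit?
```

The interaction model is only one reasonable choice; per-district scatterplots or separate per-district fits would also speak to the second question.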


=== Trick-or-treating all over again ===


The final questions revisit the trick-or-treating experiment [[Statistics_and_Statistical_Programming_(Fall_2020)/pset5|we analyzed a few weeks ago]].
Load up the dataset. For this exercise we're going to fit a few versions of the following model.
You may want to revisit your earlier analysis and exploration of the data as you prepare to conduct the following analyses. You may also want to generate new exploratory analysis and summary statistics that incorporate the <code>age</code>, <code>male</code>, and <code>year</code> variables that we did not consider in our analysis last time around.


==== PC3: Fit a model to test for treatment effects ====


Now, let's construct a test for treatment effects. For a between-groups randomized-controlled trial (RCT) like this, that means we'll focus on the fitted parameter for the treatment assignment variable (<math>\beta_1\mathrm{obama}</math>), which will provide a direct estimate of the causal effect of exposure to the treatment (compared against the control) condition. That said, here are a few tips, notes, and requests:
* The outcome is dichotomous, so you can/should use logistic regression to model this data (we can discuss this choice in class). You may want to evaluate whether the conditions necessary to do so are met.
* You may want/need to convert some of these variables to appropriate types/classes in order to fit a logistic model. I also recommend at least turning <code>year</code> into a factor and creating a centered version of the <code>age</code> variable (we can discuss this in class too).
* Be sure to state the alternative and null hypotheses related to the experimental treatment under consideration.
* It's a good idea to include the following in the presentation and interpretation of logistic model results: (1) a tabular summary/report of your fitted model including any goodness of fit statistics you can extract from R; (2) a transformation of the coefficient estimating treatment effects into an "odds ratio"; (3) model-predicted probabilities for prototypical study participants. (Please note that examples for all of these are provided in Mako Hill's R tutorial on interpreting the results of logistic regression.)
* For the model-predicted probabilities, please estimate the treatment effects for the following hypothetical individuals:
** a 9-year old girl in 2015.
** a 7-year old boy in 2012.
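The workflow above can be sketched as follows. The variable names (<code>outcome</code>, <code>obama</code>, <code>age</code>, <code>male</code>, <code>year</code>) are assumptions based on the prose here, not the dataset's actual columns, and the data are simulated so the sketch runs on its own:

```r
## PC3 sketch. Substitute the real column names for the placeholders below.
set.seed(5)
n <- 300
d <- data.frame(
  obama = rbinom(n, 1, 0.5),                       # treatment indicator
  age   = sample(4:12, n, replace = TRUE),
  male  = rbinom(n, 1, 0.5),
  year  = sample(c(2012, 2014, 2015), n, replace = TRUE)
)
d$outcome <- rbinom(n, 1, plogis(-0.5 + 0.4 * d$obama))  # dichotomous outcome

d$year  <- factor(d$year)        # year as a factor, per the tip above
d$age.c <- d$age - mean(d$age)   # centered age

m <- glm(outcome ~ obama + age.c + male + year, data = d, family = binomial)
summary(m)                       # tabular summary, incl. deviance/AIC
exp(coef(m)["obama"])            # treatment effect as an odds ratio

## Model-predicted probabilities for the two prototypical participants
protos <- data.frame(
  obama = c(1, 1),
  age.c = c(9, 7) - mean(d$age),                   # 9- and 7-year-olds
  male  = c(0, 1),                                 # girl, boy
  year  = factor(c(2015, 2012), levels = levels(d$year))
)
predict(m, newdata = protos, type = "response")
```

Setting <code>obama = 0</code> in <code>protos</code> and differencing the predicted probabilities gives the treatment effect for each prototype on the probability scale.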


=== Conduct a post-hoc "sub-group" analysis ===
==== PC4 Conduct a post-hoc "sub-group" analysis ====


The paper mentions that the methods of random assignment and the experimental conditions were a little different for each year in which the study was run. Fit models (without the parameter for <math>\mathrm{year}</math>) on the corresponding subsets of the data (2012, 2014, 2015).
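Per-year fits might look like the sketch below, continuing the assumed (placeholder) variable names and simulated stand-in data from the PC3 sketch so it runs on its own:

```r
## PC4 sketch: refit the model within each year's subset.
set.seed(5)
n <- 300
d <- data.frame(
  obama = rbinom(n, 1, 0.5),
  age   = sample(4:12, n, replace = TRUE),
  male  = rbinom(n, 1, 0.5),
  year  = sample(c(2012, 2014, 2015), n, replace = TRUE)
)
d$outcome <- rbinom(n, 1, plogis(-0.5 + 0.4 * d$obama))
d$age.c   <- d$age - mean(d$age)

## One model per year; the year term is dropped because each subset
## contains only a single year.
fits <- lapply(split(d, d$year), function(dy)
  glm(outcome ~ obama + age.c + male, data = dy, family = binomial))
sapply(fits, function(f) exp(coef(f)["obama"]))  # per-year treatment odds ratios
```

Comparing the per-year odds ratios against the pooled estimate is one concrete way to see whether the year-to-year procedural differences matter.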


==== PC5: Interpret and discuss your results ====


Explain what you found! Be sure to find useful and meaningful ways to convey your findings in terms of the odds-ratios and model-predicted probabilities. Make sure to address any discrepancies you observe between your original (i.e., Problem Set 5) t-test estimates, the "full" logistic model results you estimated, and the sub-group analysis you conducted above.

Revision as of 04:10, 22 February 2021
