Statistics and Statistical Programming (Winter 2021)/Problem set 16

Programming Challenges

PC1: Replicate analysis from OpenIntro

For this part, please use the mariokart dataset included in the openintro library (and documented here) to do the following:

  1. Replicate the multiple regression model and results presented in Figure 9.15 on p. 366 of the 'OpenIntro' textbook.
  2. Generate plots to diagnose any issues with this model.
  3. Interpret the results with a particular focus on the relationship between price and two of the predictors: cond_new and stock_photo. Be sure to explain what the results mean for those predictors in terms of the underlying variables (i.e., don't just talk about coefficients as such, but translate them back into the values/scales of the variables).
  4. Based on the results of this model, how would you advise a prospective vendor of a used copy of Mario Kart to maximize the auction price they might receive for the game on eBay?

You can load the dataset like:

library(openintro)
data(mariokart)
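
To make these steps concrete, here is a minimal sketch of one way to approach PC1. It assumes the textbook's model uses total_pr as the outcome with cond, stock_photo, duration, and wheels as predictors, and that the book's removal of two extreme-price auctions applies; double-check these assumptions against Figure 9.15 and the dataset documentation before relying on them.

# Sketch only; confirm the outcome, predictors, and any filtering against Figure 9.15.
mk <- subset(mariokart, total_pr < 100)  # assumed filter for the two extreme-price auctions

fit <- lm(total_pr ~ cond + stock_photo + duration + wheels, data = mk)
summary(fit)        # coefficient table to compare against the figure

# Standard diagnostic plots: residuals vs. fitted, normal Q-Q,
# scale-location, and residuals vs. leverage
par(mfrow = c(2, 2))
plot(fit)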

PC2: Analyze and interpret a simulated study of education and income

The second part of this problem set poses an open-ended set of questions about a simulated dataset from an observational study of high school graduates' academic achievement and subsequent income. You can download the data here.

The file is an RDS file, which is one of R's two native data formats (RData is the other). To load an RDS file you do:

grads <- readRDS("FILENAME.rds")

load() is for RData files. An RData file stores objects along with their names, so when you run load("whatever.RData"), those variables simply appear in your environment. An RDS file contains just one object (such as an R data frame), so you need to load it with readRDS() and assign the output (i.e., with <-) to a variable, just like you would with read.csv().
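
As a quick illustration of the difference (the file names here are hypothetical):

grads <- readRDS("grads.rds")    # readRDS() returns a single object; you choose the name
load("grads.RData")              # load() restores objects under the names they were saved with
ls()                             # the restored names now appear in your environment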

I have provided some information about the "study design" below (reminder/note: this is not data from an actual study):

You have been hired as a statistical consultant on a project studying the role of income in shaping academic achievement. Data from twelve cohorts of public high school students were collected from across the Chicago suburbs. Each cohort comprises a random sample of 142 students from a single suburban school district. For each student, researchers gathered a standardized measure of the student's aggregate GPA as a proxy for academic achievement. The researchers then matched the students' names against IRS records five years later and collected each student's reported pre-tax earnings for that year.

I have provided you with a version of the dataset from this hypothetical study in which each row corresponds to one student. For each student, the dataset contains the following variables:

  • id: A unique numeric identifier for each student in the study (randomly generated to preserve student anonymity).
  • cohort: An anonymized label of the cohort (school district) the student was drawn from.
  • gpa: Approximate GPA percentile of the student within the entire district. Note that this means all student GPAs within each district were aggregated and converted to an identical scale before percentiles were calculated.
  • income: Pre-tax income (in thousands of US dollars) reported to the U.S. federal government (IRS) by the student five years after graduation.

For the rest of this programming challenge, you should use this dataset to answer the following research questions:

  1. How does high school academic achievement relate to earnings?
  2. (How) does this relationship vary by school district?

You may use any analytical procedures you deem appropriate given the study design and your current statistical knowledge. Some things you may want to keep in mind:

  • Different procedures such as ANOVA, t-tests, or linear regression might help you test different kinds of hypotheses (a brief sketch of one possible starting point follows this list).
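
For example, one of many reasonable starting points might look like the sketch below. The file name is a placeholder, and the specific modeling choices (a linear model with a gpa * cohort interaction, plus an exploratory plot) are illustrative rather than required.

grads <- readRDS("grads.rds")            # placeholder file name
grads$cohort <- factor(grads$cohort)     # treat the cohort label as categorical

# Q1: overall relationship between achievement and earnings
m1 <- lm(income ~ gpa, data = grads)
summary(m1)

# Q2: does the gpa-income relationship vary by cohort/district?
m2 <- lm(income ~ gpa * cohort, data = grads)
anova(m1, m2)                            # does allowing per-cohort slopes improve fit?
summary(m2)

# Exploratory plot: one fitted line per cohort
library(ggplot2)
ggplot(grads, aes(gpa, income, color = cohort)) +
  geom_point(alpha = 0.4) +
  geom_smooth(method = "lm", se = FALSE)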

Trick-or-treating all over again

The final questions revisit the trick-or-treating experiment we analyzed a few weeks ago (Statistics and Statistical Programming (Winter 2021)/Problem set 11).

Load up the dataset. For this exercise we're going to fit a few versions of a logistic regression model along the lines of:

$\mathrm{logit}\big(\Pr(\text{outcome}_i = 1)\big) = \beta_0 + \beta_1\,\text{treatment}_i + \beta_2\,\text{age}_i + \beta_3\,\text{male}_i + \beta_4\,\text{year}_i$

You may want to revisit your earlier analysis and exploration of the data as you prepare to conduct the following analyses. You may also want to generate new exploratory analysis and summary statistics that incorporate the age, male, and year variables that we did not consider in our analysis last time around.

PC3: Fit a model to test for treatment effects

Now, let's construct a test for treatment effects. For a between-groups randomized-controlled trial (RCT) like this, that means we'll focus on the fitted parameter for the treatment assignment variable ($\beta_1$ in the model above), which provides a direct estimate of the causal effect of exposure to the treatment condition (compared against the control). That said, here are a few tips, notes, and requests:

  • The outcome is dichotomous, so you can/should use logistic regression to model this data (we can discuss this choice in class). You may want to evaluate whether the conditions necessary to do so are met.
  • You may want/need to convert some of these variables to appropriate types/classes in order to fit a logistic model. I also recommend at least turning year into a factor and creating a "centered" version of the age variable. Centering a variable means subtracting some constant (often the variable's mean) from every value, so that the new "centered" variable is 0 at that baseline, negative below it, and positive above it. It can make interpreting regression results much easier. We can discuss this in class too.
  • Be sure to state the alternative and null hypotheses related to the experimental treatment under consideration.
  • It's a good idea to include the following in the presentation and interpretation of logistic model results: (1) a tabular summary/report of your fitted model including any goodness of fit statistics you can extract from R; (2) a transformation of the coefficient estimating treatment effects into an "odds ratio"; (3) model-predicted probabilities for prototypical study participants. (please note that examples for all of these are provided in this week's tutorial)
  • For the model-predicted probabilities, please estimate the treatment effects for the following hypothetical individuals (a minimal sketch covering these steps appears after this list):
    • a 9-year old girl in 2015.
    • a 7-year old boy in 2012.
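
Here is a minimal sketch of the steps above. The object and column names (trick, outcome, treatment, age, male, year) and the 0/1 codings are assumptions; substitute whatever the actual dataset uses.

# Assumed names throughout; replace with the dataset's actual names.
trick$year  <- factor(trick$year)                          # year as a factor
trick$age_c <- trick$age - mean(trick$age, na.rm = TRUE)   # centered age

# Logistic regression for the treatment effect
m <- glm(outcome ~ treatment + age_c + male + year,
         data = trick, family = binomial("logit"))
summary(m)

# Odds ratio for the treatment coefficient (with a Wald-based interval)
exp(coef(m)["treatment"])
exp(confint.default(m, "treatment"))

# Predicted probabilities for the two prototypical participants under
# control (treatment = 0) and treatment (treatment = 1); comparing each
# treatment row to its matching control row gives the estimated effect
# on the probability scale.
protos <- data.frame(
  treatment = rep(c(0, 1), each = 2),
  age_c     = rep(c(9, 7) - mean(trick$age, na.rm = TRUE), times = 2),
  male      = rep(c(0, 1), times = 2),                     # assumes 0 = girl, 1 = boy
  year      = factor(rep(c(2015, 2012), times = 2), levels = levels(trick$year))
)
protos$pred_prob <- predict(m, newdata = protos, type = "response")
protos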

PC4: Conduct a post-hoc "sub-group" analysis

The paper mentions that the methods of random assignment and the experimental conditions were a little different for each year in which the study was run. Fit models (without the parameter for year, since each subset contains only a single year) on the corresponding subsets of the data (2012, 2014, 2015).
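
For instance, reusing the assumed names from the PC3 sketch above, the sub-group fits might look like:

# One model per year, dropping year as a predictor because each subset
# contains only a single year. Same name assumptions as the sketch above.
years <- c("2012", "2014", "2015")
subgroup_fits <- lapply(years, function(y) {
  glm(outcome ~ treatment + age_c + male,
      data = trick[trick$year == y, ], family = binomial("logit"))
})
names(subgroup_fits) <- years

lapply(subgroup_fits, summary)

# Treatment odds ratio within each year
sapply(subgroup_fits, function(f) exp(coef(f)["treatment"]))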

PC5: Interpret and discuss your results

Explain what you found! Be sure to find useful and meaningful ways to convey your findings in terms of the odds ratios and model-predicted probabilities. Make sure to address any discrepancies you observe between your original (i.e., Problem Set 5) t-test estimates, the "full" logistic model results you estimated, and the sub-group analysis you conducted above.