Statistics and Statistical Programming (Fall 2020)/pset4

Programming Challenges (thinly disguised Statistical Questions)

This week the programming challenges will mostly work with the full (synthetic "Chicago bikeshare") dataset from which I drew the 20 group samples you analyzed in Problem Sets 1 and 2. With the possible exception of the simulation in PC6, nothing here should require anything totally new to you in R. Instead, a lot of the focus is on illustrating statistical concepts using relatively simple code. The emphasis is on material covered in OpenIntro §5 and programming material introduced in the Week 5 R tutorial (https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html).

PC1. Import the data

The dataset for this week is available in yet another plain text format: a "tab-delimited" (a.k.a., tab-separated or TSV) file. You can find it in the week_05 subdirectory of the data repository for the course (https://communitydata.science/~ads/teaching/2020/stats/data). Go ahead and inspect the data and load it into R (Hint: You can use either the tidyverse read_tsv() function or the Base R read.delim() function to do this).

You'll also want to make sure you have the data (and especially your friendly x variable) from Problem Set 2 handy once again.
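
For example, here is a minimal sketch of one way to do this. The file names below are hypothetical placeholders; substitute the actual file name from the week_05 subdirectory and whatever file holds your Problem Set 2 data.

  ## Load this week's tab-delimited data (file name is a placeholder)
  library(readr)
  w5 <- read_tsv("week_05_aggregate.tsv")
  ## w5 <- read.delim("week_05_aggregate.tsv")   # Base R alternative

  ## Inspect it
  head(w5)
  summary(w5)

  ## Reload your Problem Set 2 data so `x` is available again, e.g.:
  ## load("ps2_group_data.RData")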

PC2. Compare the means

Calculate the mean of the variable x in the aggregate (this week's) dataset. Go back to Problem Set 2 and revisit the mean you calculated for x.
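
A minimal sketch, assuming your aggregate data frame is named w5 and contains a column x (adjust the names to match your own objects):

  mean(w5$x)   # mean of x in this week's aggregate dataset
  mean(x)      # mean of x from your Problem Set 2 sample, for comparison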

PC2a. Interpret the comparison

Knowing that the data you analyzed in Problem Set 2 was a random 5% sample from the dataset distributed for the current Problem Set, explain the conceptual relationship of these two means to each other.

PC3. Confidence interval a mean

Again, using the variable x from your Problem Set 2 data, compute the 95% confidence interval for the mean of this vector "by hand" (i.e., in R) using the normal formula for the standard error of a mean, σ / √n, where σ is the standard deviation of the distribution and n is the number of observations. (Bonus: Do this by writing a function.)
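
One possible sketch that wraps the "by hand" calculation in a function (the function name is arbitrary; 1.96 is the normal critical value for a 95% interval):

  ci_mean_95 <- function(v) {
    m  <- mean(v)                    # sample mean
    se <- sd(v) / sqrt(length(v))    # standard error of the mean
    c(lower = m - 1.96 * se, mean = m, upper = m + 1.96 * se)
  }

  ci_mean_95(x)   # 95% confidence interval for your Problem Set 2 x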

PC3a. Compare and explain

Compare the mean of x from your Problem Set 2 data—and your confidence interval from PC3—to the mean of x in the current week's aggregate dataset. Is the mean for the aggregate dataset (this week's data) within the confidence interval for your Problem Set 2 data? Do you find this surprising? Why or why not? Explain the conceptual relationship of these values to each other.

PC4. Compare distributions

Let's go beyond the mean alone. Compare the distribution from your Problem Set 2 x vector to the aggregate version of x in this week's data. Draw histograms and compute other descriptive and summary statistics. What do you notice? Identify (and interpret) any differences.
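
A minimal sketch of one way to start the comparison (again assuming the aggregate data frame is named w5):

  summary(x);  sd(x)          # Problem Set 2 sample
  summary(w5$x);  sd(w5$x)    # aggregate dataset

  hist(x, main = "x: Problem Set 2 sample")
  hist(w5$x, main = "x: aggregate dataset")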

PC5. Standard deviation of conditional means

Calculate the mean of x within each of the 20 groups in this week's aggregate dataset, and then calculate the standard deviation of this distribution of group means.
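
Either Base R or the tidyverse works here. A sketch assuming the aggregate data frame is named w5 and the grouping column is named group (check the actual column name after you import the data):

  ## Base R
  group_means <- tapply(w5$x, w5$group, mean)
  sd(group_means)

  ## Tidyverse equivalent
  library(dplyr)
  w5 %>%
    group_by(group) %>%
    summarize(group_mean = mean(x)) %>%
    summarize(sd_of_group_means = sd(group_mean))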

PC5a. Compare and explain

Compare the standard deviation from PC5 to the standard error you calculated in PC3 above. Discuss and explain the relationship between these values.

PC6. A simulation

Let's conduct a simulation that demonstrates a fundamental principle of statistics. Please see the R tutorial materials from last week for useful examples that can help you do this. (A sketch illustrating one possible approach appears after the list below.)

  • (a) Create a vector of 10,000 randomly generated numbers that are uniformly distributed between 0 and 9.
  • (b) Calculate the mean of the vector you just created. Plot a histogram of the distribution.
  • (c) Create 100 random samples of 2 items each from your randomly generated data and take the mean of each sample. Create a new vector that contains those means. Describe/display the distribution of those means.
  • (d) Do (c) again, except with 10 items in each sample instead of 2. Then repeat it once more with 100 items per sample. Be ready to describe how the histogram changes as the sample size increases. (Bonus challenge: Write a function to complete this part.)
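
A minimal sketch covering parts (a) through (c); since (d) just repeats (c) with larger samples, the repeated step is wrapped in a small helper function (the name sample_means() is arbitrary):

  set.seed(42)                            # make the simulation reproducible

  pop <- runif(10000, min = 0, max = 9)   # (a) 10,000 uniform draws between 0 and 9
  mean(pop)                               # (b) mean of the full vector
  hist(pop)

  ## (c) and (d): take `reps` samples of size n and keep the mean of each
  sample_means <- function(pop, n, reps = 100) {
    replicate(reps, mean(sample(pop, n)))
  }

  means_2   <- sample_means(pop, 2)
  means_10  <- sample_means(pop, 10)
  means_100 <- sample_means(pop, 100)

  hist(means_2); hist(means_10); hist(means_100)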

PC6a. Compare and explain the simulation

Compare the results from PC6 with those in the example simulation from last week's R tutorial. What fundamental statistical principle is illustrated by these simulations? Why is this an important simulation for thinking about hypothesis testing?

Reading Questions

RQ1. Confidence intervals vs. p-values

Reinhart (§1) argues that confidence intervals are preferable to p-values. Be prepared to explain, support and/or refute Reinhart's argument in your own words.

RQ2. Emotional contagion (revisited)

Revisit the paper we read for Week 1 of the course:

Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks. Proceedings of the National Academy of Sciences 111(24):8788–90. [Open Access]

Come to class prepared to discuss your answers to the following questions:

RQ2a. Hypotheses

Write down, in your own words, the key pairs of null/alternative hypotheses tested in the paper (hint: the four pairs that correspond to the main effects represented in the figure).

RQ2b. Describe the effects

Describe, in your own words, the main effects estimated in the paper for these four key pairs of hypotheses.

RQ2c. Statistical vs. practical significance

The authors report Cohen's d along with their regression estimates of the main effects. Look up the formula for Cohen's d. Discuss the substantive or practical significance of the estimates given the magnitudes of the d values reported.
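
For reference, Cohen's d for two groups is usually computed as the difference between the group means divided by a pooled standard deviation. A minimal sketch (the vectors g1 and g2 are hypothetical stand-ins for the two groups being compared):

  cohens_d <- function(g1, g2) {
    n1 <- length(g1); n2 <- length(g2)
    pooled_sd <- sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))
    (mean(g1) - mean(g2)) / pooled_sd
  }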