Statistics and Statistical Programming (Fall 2020)/pset4

<small>[[Statistics_and_Statistical_Programming_(Fall_2020)#Week_6_.2810.2F20.2C_10.2F22.29|← Back to Week 6]]</small>


== Programming Challenges (thinly disguised Statistical Questions) ==


This week the programming challenges will focus on the full population ("Chicago bikeshare") dataset from which I drew the 20 group samples you analyzed in Problem Sets 1 and 2.
With the possible exception of the simulation in PC6 (which is "recommended"), nothing here should require anything totally new to you in R. Instead, a lot of the focus is on illustrating statistical concepts using relatively simple code. The emphasis is on material covered in ''OpenIntro'' §5 and, for PC 6, programming material introduced in the [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html Week 5 R tutorial].


=== PC1. Import the data ===


The dataset for this week is available in yet another plain text format: a "tab-delimited" (a.k.a., tab-separated or TSV) file. You can find it in the <code>week_06</code> subdirectory in the [https://communitydata.science/~ads/teaching/2020/stats/data data repository for the course]. Go ahead and inspect the data and load it into R (''Hint:'' You can use either the tidyverse <code>read_tsv()</code> function or the Base R <code>read.delim()</code> function to do this).
You'll also want to make sure you have the data (and especially your friendly <code>x</code> variable) from [[Statistics_and_Statistical_Programming_(Fall_2020)/pset2|Problem Set 2]] handy once again.
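Reading a tab-delimited file works just like the CSVs from earlier weeks, only with a different delimiter. Here is a minimal sketch demonstrated on a small in-memory table; the column names below are stand-ins for illustration, so point the same call at the actual file from the <code>week_06</code> data directory instead:

```r
## Demo: parse tab-separated text with Base R. For the problem set,
## pass the path to the real TSV file instead of the text= argument.
tsv_text <- "group\tx\nA\t1.5\nA\t2.0\nB\t3.1"
dat <- read.delim(text = tsv_text)  # sep = "\t" is read.delim()'s default

## tidyverse alternative: readr::read_tsv() accepts a path or raw text.

nrow(dat)   # 3 rows in this toy example
names(dat)  # "group" "x"
```

As always, <code>head()</code>, <code>str()</code>, and <code>summary()</code> are good first checks after loading.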
 
=== PC2. Compare the means ===
 
Calculate the mean of the variable <code>x</code> in the aggregate (this week's) dataset. Go back to [[Statistics_and_Statistical_Programming_(Fall_2020)/pset2|Problem Set 2]] and revisit the mean you calculated for <code>x</code>.
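The comparison itself is a pair of <code>mean()</code> calls. A sketch on simulated stand-in vectors (an assumption: your real <code>x</code> columns come from this week's data and your Problem Set 2 data, not from <code>runif()</code>):

```r
## Stand-ins for the aggregate x and the 5% sample from Problem Set 2.
set.seed(42)
pop_x    <- runif(2000)          # placeholder for this week's x
sample_x <- sample(pop_x, 100)   # placeholder for the PS2 sample

mean(pop_x)
mean(sample_x)  # close to, but generally not equal to, mean(pop_x)
```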
 
==== Interpret the comparison ====
 
Knowing that the data you analyzed in Problem Set 2 was a random 5% sample from the dataset distributed for the present Problem Set, explain the ''conceptual'' relationship of these two means to each other.


=== PC3. Confidence interval of a mean ===
Again, using the variable <code>x</code> from your Problem Set 2 data, compute the 95% confidence interval for the mean of this vector "by hand" (i.e., in R) using the normal formula for the [https://en.wikipedia.org/wiki/Standard_error#Standard_error_of_the_mean standard error of a mean]: <math>\frac{\sigma}{\sqrt{n}}</math>, where <math>\sigma</math> is the standard deviation of the sample and <math>n</math> is the number of observations. (''Bonus:'' Do this by writing a function.)
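One way the bonus might look: a small function that applies the normal formula. This is a sketch (the function name and the simulated input are made up for illustration); your version only needs to take your <code>x</code> vector:

```r
## 95% CI for a mean "by hand": mean(x) +/- z * sd(x)/sqrt(n)
ci_mean <- function(x, level = 0.95) {
  n  <- length(x)
  se <- sd(x) / sqrt(n)                # normal formula for the SE
  z  <- qnorm(1 - (1 - level) / 2)     # ~1.96 for a 95% interval
  c(lower = mean(x) - z * se, upper = mean(x) + z * se)
}

## Usage on simulated data (substitute your Problem Set 2 x):
set.seed(8)
ci_mean(rnorm(100, mean = 5))
```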


==== Compare and explain ====
Compare the mean of <code>x</code> from your Problem Set 2 data—and your confidence interval from PC3—to the mean of <code>x</code> in the dataset for the present Problem Set. Is the mean for the aggregate dataset (this week's data) within the confidence interval for your Problem Set 2 data? Do you find this surprising? Why or why not? Explain the conceptual relationship of these values to each other.


=== PC4. Compare distributions ===  
Let's go beyond the mean alone. Compare the distribution from your Problem Set 2 <code>x</code> vector to the aggregate version of <code>x</code> in this week's data. Draw histograms (or density plots) and compute other descriptive and summary statistics.  
<!---
:* (b) Using an appropriate built-in R function (see this week's R lecture materials for a relevant example).
:* (c) Bonus: The results from (a) and (b) should be the same or very close. After reading ''OpenIntro'' §5, can you explain why they might not be exactly the same?
--->
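Base R covers both the plots and the summaries here. A sketch, again on simulated stand-ins rather than the real vectors:

```r
## Stand-ins: substitute this week's aggregate x and your PS2 x.
set.seed(1)
pop_x    <- rnorm(2000)
sample_x <- sample(pop_x, 100)

## Descriptive comparisons
summary(pop_x)
summary(sample_x)
sd(pop_x)
sd(sample_x)

## Visual comparisons (density() + plot() also works)
hist(pop_x, breaks = 30, main = "Aggregate x")
hist(sample_x, breaks = 30, main = "Problem Set 2 x")
```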


==== Interpret the comparison ====  
What do you notice? Identify (and interpret) any differences.


=== PC5. Standard deviation of conditional means ===  
Calculate the mean of <code>x</code> for each of the groups in the dataset for this week (within each <code>group</code> in the aggregate dataset) and the standard deviation of this distribution of means.
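<code>tapply()</code> (or a grouped <code>summarize()</code> in the tidyverse) gets you the conditional means in one call. A sketch on a toy data frame; for the problem set, use the aggregate data's <code>group</code> and <code>x</code> columns:

```r
## Toy stand-in: 20 groups of 50 observations each.
set.seed(2)
toy <- data.frame(group = rep(letters[1:20], each = 50),
                  x     = rnorm(1000))

group_means <- tapply(toy$x, toy$group, mean)  # mean of x within each group
group_means
sd(group_means)  # spread of the distribution of group means
```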


==== Compare and explain ====
Compare the standard deviation of the means across all groups that you just calculated to the standard error you calculated in PC3 above. Discuss and explain the relationship between these values.


=== (Recommended) PC6. A simulation ===  
Let's conduct a simulation that demonstrates a fundamental principle of statistics. Please see the [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html R tutorial materials from last week] for useful examples that can help you do this.
:* (a) Create a vector of 10,000 randomly generated numbers that are uniformly distributed between 0 and 9.
:* (b) Calculate the mean of the vector you just created. Plot a histogram of the distribution.
:* (d) Do (c) again, but with 10 items in each sample instead of 2. Then do it once more with 100 items. Be ready to describe how the histogram changes as the sample size increases. (''Bonus challenge:'' Write a function to complete this part.)
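The steps above might be sketched as follows. Two assumptions are baked in: "uniformly distributed between 0 and 9" is read as continuous <code>runif(n, 0, 9)</code>, and part (c), which is not shown here, is taken to be the 2-item version of (d):

```r
set.seed(20)
pop <- runif(10000, min = 0, max = 9)  # (a): 10,000 uniform draws on [0, 9]
mean(pop)                              # (b): should be close to 4.5
hist(pop)

## Bonus-style helper: means of many repeated samples of a given size.
sim_means <- function(pop, size, reps = 10000) {
  replicate(reps, mean(sample(pop, size)))
}

hist(sim_means(pop, 2))    # assumed (c): samples of 2 items
hist(sim_means(pop, 10))   # (d): larger samples concentrate the means...
hist(sim_means(pop, 100))  # ...ever more tightly around 4.5
```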


==== Compare and explain the simulation ====
Compare the results from PC6 with those in the example simulation from [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html last week's R tutorial]. What fundamental statistical principle is illustrated by these simulations? Why is this an important simulation for thinking about hypothesis testing?
== Reading Questions ==


=== RQ1. Confidence intervals vs. p-values ===


Reinhart (§1) argues that confidence intervals are preferable to p-values. Be prepared to explain, support, and/or refute Reinhart's argument in your own words.


=== RQ2. Emotional contagion (revisited) ===


Revisit the paper we read a couple of weeks ago:
: Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks. ''Proceedings of the National Academy of Sciences'' 111(24):8788–90. [http://www.pnas.org/content/111/24/8788.full Open Access]


Come to class prepared to discuss your answers to the following questions.
 
==== RQ2a. Hypotheses ====
Write down, in your own words, the key pairs of null/alternative hypotheses tested in the paper (hint: the four pairs that correspond to the main effects represented in the figure).
==== RQ2b. Describe the effects ====
Describe, in your own words, the main effects estimated in the paper for these four key pairs of hypotheses.
==== RQ2c. Statistical vs. practical significance ====
The authors report ''[https://en.wikipedia.org/wiki/Effect_size#Cohen's_d Cohen's d]'' along with their regression estimates of the main effects. Look up the formula for ''Cohen's d.'' Discuss the ''substantive'' or ''practical'' significance of the estimates given the magnitudes of the ''d'' values reported.