Statistics and Statistical Programming (Fall 2020)/pset4

<small>[[Statistics_and_Statistical_Programming_(Fall_2020)#Week_6_.2810.2F20.2C_10.2F22.29|← Back to Week 6]]</small>


== Programming Challenges (thinly disguised Statistical Questions) ==


This week the programming challenges will focus on the full population ("Chicago bikeshare") dataset from which I drew the 20 group samples you analyzed in Problem Sets 1 and 2.
 
With the possible exception of the simulation in PC6 (which is "recommended"), nothing here should require anything totally new to you in R. Instead, a lot of the focus is on illustrating statistical concepts using relatively simple code. The emphasis is on material covered in ''OpenIntro'' §5 and, for PC6, programming material introduced in the [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html Week 5 R tutorial].


=== PC1. Import the data ===


The dataset for this week is available in yet another plain text format: a "tab-delimited" (a.k.a., tab-separated or TSV) file. You can find it in the <code>week_06</code> subdirectory in the [https://communitydata.science/~ads/teaching/2020/stats/data data repository for the course]. Go ahead and inspect the data and load it into R (''Hint:'' You can use either the tidyverse <code>read_tsv()</code> function or the Base R <code>read.delim()</code> function to do this).
 
You'll also want to make sure you have the data (and especially your friendly <code>x</code> variable) from [[Statistics_and_Statistical_Programming_(Fall_2020)/pset2|Problem Set 2]] handy once again.
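For reference, here is one minimal way to do the import. This is only a sketch: the file names and object names below are assumptions, so substitute whatever the data repository and your own Problem Set 2 files actually use.

<syntaxhighlight lang="r">
## Load this week's tab-delimited data (file name is hypothetical).
library(readr)
w6 <- read_tsv("week_06_data.tsv")

## Base R alternative:
## w6 <- read.delim("week_06_data.tsv")

## Reload your Problem Set 2 data as well (file name is hypothetical).
ps2 <- read.csv("ps2_group_data.csv")
</syntaxhighlight>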
 
=== PC2. Compare the means ===
 
Calculate the mean of the variable <code>x</code> in the aggregate (this week's) dataset. Go back to [[Statistics_and_Statistical_Programming_(Fall_2020)/pset2|Problem Set 2]] and revisit the mean you calculated for <code>x</code>.
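A sketch of the comparison, assuming the hypothetical object names from the PC1 sketch above (<code>w6</code> for this week's aggregate data, <code>ps2</code> for your Problem Set 2 data):

<syntaxhighlight lang="r">
## Mean of x in the aggregate dataset vs. your Problem Set 2 sample.
## Object names are assumptions carried over from the PC1 sketch.
mean(w6$x)
mean(ps2$x)
</syntaxhighlight>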
 
==== Interpret the comparison ====
 
Knowing that the data you analyzed in Problem Set 2 was a random 5% sample from the dataset distributed for the present Problem Set, explain the ''conceptual'' relationship of these two means to each other.


=== PC3. Confidence interval of a mean ===
Again, using the variable <code>x</code> from your Problem Set 2 data, compute the 95% confidence interval for the mean of this vector "by hand" (i.e., in R) using the normal formula for the [https://en.wikipedia.org/wiki/Standard_error#Standard_error_of_the_mean standard error of a mean]: <math>\frac{\sigma}{\sqrt{n}}</math>, where <math>\sigma</math> is the standard deviation of the sample and <math>n</math> is the number of observations. (''Bonus:'' Do this by writing a function.)
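If you attempt the bonus, your function might take a shape roughly like the sketch below. It is only one possibility, not the required answer; it assumes a numeric vector with no missing values and uses the conventional 1.96 multiplier for a normal 95% interval.

<syntaxhighlight lang="r">
## A "by hand" 95% confidence interval for a mean (sketch only).
## Assumes x contains no NA values.
ci.mean <- function(x, multiplier = 1.96) {
  se <- sd(x) / sqrt(length(x))     # standard error of the mean
  c(lower = mean(x) - multiplier * se,
    mean  = mean(x),
    upper = mean(x) + multiplier * se)
}

## Example usage with the hypothetical Problem Set 2 object:
## ci.mean(ps2$x)
</syntaxhighlight>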


==== Compare and explain ====
Compare the mean of <code>x</code> from your Problem Set 2 data—and your confidence interval from PC3—to the mean of <code>x</code> in the dataset for the present Problem Set. Is the mean for the aggregate dataset (this week's data) within the confidence interval for your Problem Set 2 data? Do you find this surprising? Why or why not? Explain the conceptual relationship of these values to each other.


=== PC4. Compare distributions ===
Let's go beyond the mean alone. Compare the distribution from your Problem Set 2 <code>x</code> vector to the aggregate version of <code>x</code> in this week's data. Draw histograms (or density plots) and compute other descriptive and summary statistics.  
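One possible starting point, again assuming the hypothetical <code>w6</code> and <code>ps2</code> objects from the PC1 sketch:

<syntaxhighlight lang="r">
## Summary statistics for both versions of x.
summary(ps2$x)
summary(w6$x)
sd(ps2$x)
sd(w6$x)

## Base R histograms; density plots (e.g., with ggplot2) work just as well.
hist(ps2$x, main = "Problem Set 2 sample")
hist(w6$x,  main = "This week's aggregate data")
</syntaxhighlight>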


==== Interpret the comparison ====
What do you notice? Identify (and interpret) any differences.


=== PC5. Standard deviation of conditional means ===
Calculate the mean of <code>x</code> for each of the groups in the dataset for this week (within each <code>group</code> in the aggregate dataset) and the standard deviation of this distribution of means.
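A sketch of one way to get there, assuming the aggregate data frame is the hypothetical <code>w6</code> and has columns named <code>x</code> and <code>group</code>:

<syntaxhighlight lang="r">
## Conditional (per-group) means of x, then the standard deviation of
## that distribution of means. Object and column names are assumptions.
group.means <- tapply(w6$x, w6$group, mean)
group.means
sd(group.means)

## tidyverse equivalent:
## library(dplyr)
## w6 %>% group_by(group) %>% summarize(mean.x = mean(x)) %>%
##   summarize(sd(mean.x))
</syntaxhighlight>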


==== Compare and explain ====
Compare the standard deviation of the means across all groups that you just calculated to the standard error you calculated in PC3 above. Discuss and explain the relationship between these values.


=== (Recommended) PC6. A simulation ===
Let's conduct a simulation that demonstrates a fundamental principle of statistics. Please see the [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html R tutorial materials from last week] for useful examples that can help you do this.
:* (a) Create a vector of 10,000 randomly generated numbers that are uniformly distributed between 0 and 9.
:* (b) Calculate the mean of the vector you just created. Plot a histogram of the distribution.
:* (c) Create 100 random samples of 2 items each from your randomly generated data and take the mean of each sample. Create a new vector that contains those means. Describe/display the distribution of those means.
:* (d) Do (c) except make the items 10 items in each sample instead of 2. Then do (c) again except with 100 items. Be ready to describe how the histogram changes as the sample size increases. (''Bonus challenge:'' Write a function to complete this part.)
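If you get stuck, the whole simulation can be sketched in a few lines. This is just one possible approach: the seed and the function name are arbitrary choices, and the sketch reads (a) as a continuous uniform distribution via <code>runif()</code>; <code>sample(0:9, 10000, replace = TRUE)</code> would fit a discrete reading of (a) equally well.

<syntaxhighlight lang="r">
## (a) 10,000 draws from a uniform distribution between 0 and 9.
set.seed(20201020)   # arbitrary seed, just for reproducibility
pop <- runif(10000, min = 0, max = 9)

## (b) Mean and histogram of the raw draws.
mean(pop)
hist(pop)

## (c)/(d) Helper that draws repeated samples and returns the sample means.
sample.means <- function(dat, n.samples = 100, sample.size = 2) {
  sapply(1:n.samples, function(i) mean(sample(dat, sample.size)))
}

hist(sample.means(pop, sample.size = 2))     # (c) samples of 2
hist(sample.means(pop, sample.size = 10))    # (d) samples of 10
hist(sample.means(pop, sample.size = 100))   # (d) samples of 100
</syntaxhighlight>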


==== Compare and explain the simulation ====
Compare the results from PC6 with those in the example simulation from [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html last week's R tutorial]. What fundamental statistical principle is illustrated by these simulations? Why is this an important simulation for thinking about hypothesis testing?
== Reading Questions ==


=== RQ1. Confidence intervals vs. p-values ===


Reinhart (§1) argues that confidence intervals are preferable to p-values. Be prepared to explain, support and/or refute Reinhart's argument in your own words.


=== RQ2. Emotional contagion (revisited) ===


Revisit the paper we read a couple of weeks ago:
: Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks. ''Proceedings of the National Academy of Sciences'' 111(24):8788–90. [http://www.pnas.org/content/111/24/8788.full Open Access]


Come to class prepared to discuss your answers to the following questions.
==== RQ2a. Hypotheses ====
Write down, in your own words, the key pairs of null/alternative hypotheses tested in the paper (hint: the four pairs that correspond to the main effects represented in the figure).
==== RQ2b. Describe the effects ====
Describe, in your own words, the main effects estimated in the paper for these four key pairs of hypotheses.
==== RQ2c. Statistical vs. practical significance ====
The authors report ''[https://en.wikipedia.org/wiki/Effect_size#Cohen's_d Cohen's d]'' along with their regression estimates of the main effects. Look up the formula for ''Cohen's d.'' Discuss the ''substantive'' or ''practical'' significance of the estimates given the magnitudes of the ''d'' values reported.
