Statistics and Statistical Programming (Fall 2020)/pset4

From CommunityData
<small>[[Statistics_and_Statistical_Programming_(Fall_2020)#Week_6_.2810.2F20.2C_10.2F22.29|← Back to Week 6]]</small>


== Programming Challenges (thinly disguised Statistical Questions) ==


This week we'll work with the full (simulated!) dataset from which I drew the 20 group samples you analyzed in Problem Sets 1 and 2. With the possible exception of the simulation in PC6, most of the "programming" here should not pose much difficulty. Instead, much of the focus is on explaining the conceptual relationships involved.


=== PC1. Import the data ===
The dataset is available in yet another plain text format: a "tab-delimited" (a.k.a., tab-separated or TSV) file. You can find it in the <code>week_05</code> subdirectory in the [https://communitydata.science/~ads/teaching/2020/stats/data data repository for the course]. Go ahead and inspect the data and load it into R (''Hint:'' You can use either the tidyverse <code>read_tsv()</code> function or the Base R <code>read.delim()</code> function to do this).
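If it's helpful, here is a minimal, self-contained sketch of both approaches. The toy file and its name are made up for illustration; point the functions at the actual file from the <code>week_05</code> subdirectory instead.

```r
## Toy example: write a tiny tab-separated file, then read it back.
## "toy.tsv" is a placeholder name, not the course data file.
writeLines(c("group\tx", "a\t1.5", "b\t2.5"), "toy.tsv")

d <- read.delim("toy.tsv")   # Base R: read.delim() defaults to tab separators
d
##   group   x
## 1     a 1.5
## 2     b 2.5

## Tidyverse equivalent (requires the readr package):
## d <- readr::read_tsv("toy.tsv")
```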


=== PC2. The means ===


Calculate the mean of the variable <code>x</code> in the full (this week's) dataset. Go back to your Week 3 problem set and revisit the mean you calculated for <code>x</code>.
 
==== PC2a. Compare and explain ====
 
Explain the ''conceptual'' relationship of these two means to each other.  


=== PC3. The standard error of the sample mean ===
Again, using the variable <code>x</code> from your Problem Set 2 data, compute the 95% confidence interval for the mean of this vector "by hand" (i.e., in R) using the normal formula for standard error. (''Bonus:'' Do this by writing a function.)
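One way to sketch the normal-formula interval (a hand-rolled example on toy data; your version may differ):

```r
## 95% confidence interval for a mean via the normal formula,
## written as a function. `x` stands in for your Problem Set 2 vector.
ci.normal <- function(x, level = 0.95) {
  se <- sd(x) / sqrt(length(x))      # standard error of the mean
  z  <- qnorm(1 - (1 - level) / 2)   # ~1.96 for a 95% interval
  c(lower = mean(x) - z * se, upper = mean(x) + z * se)
}

ci.normal(c(2, 4, 4, 4, 5, 5, 7, 9))   # toy data; roughly (3.52, 6.48)
```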


==== PC3a. Compare and explain ====
Compare the mean of <code>x</code> from your Problem Set 2 sample — and your confidence interval — to the population mean (the version of <code>x</code> in the current week's dataset). Is the full dataset (this week's) mean inside your sample (Problem Set 2) confidence interval? Do you find this surprising? Why or why not? Explain the conceptual relationship of these values to each other.
 
=== PC4. Compare sample and population distributions ===
Let's look beyond the mean. Compare the distribution from your Problem Set 2 sample of <code>x</code> to the true population of <code>x</code>. Draw histograms and compute other descriptive and summary statistics. What do you notice? Identify (and interpret) any differences.


=== PC5. Standard deviations vs. standard errors ===
Calculate the mean of <code>x</code> for each of the groups in the population (within each <code>group</code> in the population dataset) and the standard deviation of this distribution of conditional means.
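A compact way to sketch this in Base R. The toy data frame here stands in for the population data, which likewise has columns named <code>x</code> and <code>group</code>:

```r
## Toy stand-in for the population data frame; the real one also
## has an `x` column and a `group` column.
d <- data.frame(group = rep(c("a", "b", "c"), each = 3),
                x     = c(1, 2, 3, 4, 5, 6, 7, 8, 9))

group.means <- tapply(d$x, d$group, mean)   # conditional mean of x per group
group.means                                 # a: 2, b: 5, c: 8
sd(group.means)                             # sd of the distribution of group means: 3
```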


==== PC5a. Standard deviation vs. standard error ====
Compare this standard deviation to the standard error of the sample mean you calculated in PC3 above. Discuss and explain the relationship between these values.


=== PC6. A simulation ===
I want you to conduct a simulation that demonstrates a fundamental principle of statistics. Please see the [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html R tutorial materials from last week] for useful examples that can help you do this.
:* (a) Create a vector of 10,000 randomly generated numbers that are uniformly distributed between 0 and 9.
:* (b) Calculate the mean of the vector you just created. Plot a histogram of the distribution.
:* (c) Create 100 random samples of 2 items each from your randomly generated data and take the mean of each sample. Create a new vector that contains those means. Describe/display the distribution of those means.
:* (d) Do (c) except with 10 items in each sample instead of 2. Then do (c) again except with 100 items. Be ready to describe how the histogram changes as the sample size increases. (''Bonus challenge:'' Write a function to complete this part.)
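The steps above can be sketched as follows (one possible approach; the function at the end is one stab at the bonus challenge):

```r
set.seed(42)                            # for reproducibility
pop <- runif(10000, min = 0, max = 9)   # (a) 10,000 uniform draws on [0, 9]

mean(pop)                               # (b) should land near 4.5
hist(pop)                               # roughly flat

## (c) 100 samples of 2 items each; keep each sample's mean
sample.means <- replicate(100, mean(sample(pop, 2)))
hist(sample.means)

## (d) the same procedure for any sample size (bonus: as a function)
sample.mean.dist <- function(n, k = 100) replicate(k, mean(sample(pop, n)))
hist(sample.mean.dist(10))
hist(sample.mean.dist(100))             # narrower and more bell-shaped
```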


==== PC6a. Why the simulation? ====
Compare the results from PC6 with those in the example simulation from [https://communitydata.science/~ads/teaching/2020/stats/r_tutorials/w05-R_tutorial.html last week's R tutorial]. What fundamental statistical principle is illustrated by these simulations? Why is this an important simulation for thinking about hypothesis testing?
== Reading Questions ==


=== RQ1. Confidence intervals vs. p-values ===


Reinhart (§1) argues that confidence intervals are preferable to p-values. Be prepared to explain, support and/or refute Reinhart's argument in your own words.


=== RQ2. Emotional contagion revisited ===


Revisit the paper we read for Week 1 of the course:
Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. "Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks." ''Proceedings of the National Academy of Sciences'' 111(24):8788–90. [Open Access]


Come to class prepared to discuss your answers to the following questions:
==== RQ2a. Hypotheses ====
Write down, in your own words, the key pairs of null/alternative hypotheses tested in the paper (hint: the four pairs that correspond to the main effects represented in the figure).
==== RQ2b. Describe the effects ====
Describe, in your own words, the main effects estimated in the paper for these four key pairs of hypotheses.
==== RQ2c. Statistical vs. practical significance ====
The authors report ''[https://en.wikipedia.org/wiki/Effect_size#Cohen's_d Cohen's d]'' along with their regression estimates of the main effects. Look up the formula for ''Cohen's d.'' Discuss the ''substantive'' or ''practical'' significance of the estimates given the magnitudes of the ''d'' values reported.
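For reference, the most common two-sample form of ''d'' divides the difference in group means by the pooled standard deviation; a sketch on toy data (check it against the formula you look up):

```r
## Cohen's d for two groups: mean difference over the pooled sd.
cohens.d <- function(x1, x2) {
  n1 <- length(x1); n2 <- length(x2)
  s.pooled <- sqrt(((n1 - 1) * var(x1) + (n2 - 1) * var(x2)) / (n1 + n2 - 2))
  (mean(x1) - mean(x2)) / s.pooled
}

cohens.d(c(5, 6, 7), c(1, 2, 3))   # toy groups: d = 4, a very large effect
```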

Revision as of 05:20, 13 October 2020