Statistics and Statistical Programming (Winter 2017)/Problem Set: Week 4

Programming Challenges
We're going to be doing a bit more advanced data wrangling this week. And with real data!

We're going to be using two datasets from data.seattle.gov related to online engagement. Both datasets are drawn from Google Analytics, but we're going to merge them together so we can say what proportion of views we believe are coming from mobile users, and whether we believe this number is going up or down over time.

The datasets are messy and aren't set up well to help us answer the question. Luckily, we can use R to solve these problems!


 * PC1. Download these two datasets from data.seattle.gov and save them into your git repository: COS-Statistics-Top5000-Pages and COS-Statistics-Mobile Sessions. Assume, for the purposes of this assignment, that the first dataset (views of the top 5000 pages) contains all views made to the http://www.seattle.gov website.
 * PC2. Load both datasets into R as separate data frames. Explore the data to get a sense of its structure. What are the columns, rows, missing data, etc.? Write code to take (and then check/look at) several random subsamples of the data.
 * PC3. Using the top 5000 dataset, create a new data frame where one column is each month (as described in the data) and a second column is the total number of views made to all pages in the dataset over that month.
 * PC4. Using the mobile dataset, create a new data frame where one column is each month described in the data and the second is a measure (estimate?) of the total number of views made by mobiles (all platforms) over each month. This will involve at least two steps, because total views are not reported directly: you'll first need to use the data there to create a measure of total views for each row in the dataset.
 * PC5. Merge your two datasets together into a new dataset with columns for each month, total views (across the top 5000 pages), and total mobile views. Are there any missing data? Can you tell why?
 * PC6. Create a new column in your merged dataset that describes your best estimate of the proportion (or percentage, if you really must!) of views that comes from mobile. Be able to talk about the assumptions you've made here. Make sure that the date column in this final dataset is a date or datetime object in R.
 * PC7. Graph this over time and be ready to describe: (a) your best estimate of the proportion of views from mobile devices to the Seattle City website over time, and (b) whether it's going up or down.
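To fix ideas, here is a minimal sketch of the aggregate/merge/proportion pipeline in base R. The toy data frames, column names (`Date`, `Views`, `Sessions`, `PVPerSession`), and date format are all assumptions standing in for the real CSVs; check the actual column names with `names()` after `read.csv()` and adapt accordingly.

```r
# Toy stand-ins for the two data.seattle.gov exports (column names assumed).
top5000 <- data.frame(
  Page  = c("a", "a", "b", "b", "c"),
  Date  = c("01/2014", "02/2014", "01/2014", "02/2014", "01/2014"),
  Views = c(100, 120, 50, 60, 30),
  stringsAsFactors = FALSE
)
mobile <- data.frame(
  Device       = c("iPhone", "Android", "iPhone"),
  Date         = c("01/2014", "01/2014", "02/2014"),
  Sessions     = c(10, 20, 15),
  PVPerSession = c(2, 3, 2),
  stringsAsFactors = FALSE
)

# PC3-style step: total views per month across all pages.
total.views <- aggregate(Views ~ Date, data = top5000, FUN = sum)

# PC4-style step: estimate views per row (here, sessions x pages per
# session -- an assumption about the real columns), then total by month.
mobile$Views <- mobile$Sessions * mobile$PVPerSession
mobile.views <- aggregate(Views ~ Date, data = mobile, FUN = sum)

# PC5-style step: merge on month; all = TRUE keeps months that appear
# in only one dataset, so missing data show up as NA.
merged <- merge(total.views, mobile.views, by = "Date",
                suffixes = c(".total", ".mobile"), all = TRUE)

# PC6-style step: proportion from mobile, plus a real Date object
# (assuming a "month/year" string format).
merged$prop.mobile <- merged$Views.mobile / merged$Views.total
merged$month <- as.Date(paste0("01/", merged$Date), format = "%d/%m/%Y")

# PC7-style step: plot the proportion over time.
plot(merged$month, merged$prop.mobile, type = "l",
     xlab = "Month", ylab = "Proportion of views from mobile")
```

The `all = TRUE` in `merge()` is the piece worth noticing for PC5: a left- or inner-join would silently drop months present in only one dataset, hiding exactly the missingness the question asks about.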

Exercises from OpenIntro §4

 * Q0. Any questions or clarifications from the OpenIntro text or lecture notes?
 * Q1. Exercise 4.8 on Twitter users and news
 * Q2. Exercise 4.10 which is a continuation of 4.8
 * Q3. Exercise 4.19 on online communication
 * Q4. Exercise 4.32, which asks you to explain why certain statements about statistical inference are true or false

Empirical Paper

 * Q5: First, walk us through the result visualized in Figure 1. Explain and interpret the result for us. Now go back to the blockquote on page 329 and, by referencing the figure, explain why Gelman and Stern think this is a good example of their point about the difference between statistically significant and non-significant results.
 * Q6: Move on to the study about EMF. Walk us through Figure 2. First explain the basic result and then explain why Gelman and Stern think that Figure 2b is better than 2a.
 * Q7: In the paper's abstract, Gelman and Stern describe their critique as different from three other oft-cited problems: that statistical significance is not the same as practical importance, that dichotomization into significant/non-significant encourages dismissing observed differences, and that significance thresholds are arbitrary. Summarize why these are important issues in your own words (and ideally, with examples) and explain how Gelman and Stern's key critique is different.