Statistics and Statistical Programming (Winter 2017)/Problem Set: Week 3

From CommunityData

This is general advice going forward, but it makes sense to include it here: my advice is to start working through the programming challenges first. The programming challenges will only include material that we covered in the readings for the previous week.

If you're having trouble loading your dataset (PC2), find me in the next day or so, as you will only be able to do the other challenges once you've done that one.

Programming Challenges

PC0. Check the list of GitHub repositories page here. A few of you (Maggie, Luyue, and Janny) named yours something like "week_02." Although there's no problem with this, it might cause confusion going forward when you add homework for future weeks (like this week) that are no longer week 2. So if you're Maggie, Janny, or Luyue, I recommend that you create and push a new repository/directory with a more generic name which you can use for all your future assignments. For everybody else, please copy your files and work for this (and all future) problem sets into the same repository you used last time.
PC1. In the class assignments GitHub repository (uwcom521-assignments), I've uploaded a new dataset for each person in the class in the subdirectory week_03. Sync my repository, find your file, and copy it into your homework directory, which is managed by Git. Commit your dataset file to your personal homework git repository.
PC2. Open the dataset in a spreadsheet (Google Docs, Excel, etc.) to take a look at it. It's often a good idea to open it in Notepad as well so you can look at the structure of the "raw data." If you want to generate statistics or visualize things, that's a normal thing to do at this point. Manually inspecting the raw data is common and useful. I won't ask about this in class, but I do recommend it.
PC3. Load the CSV file into R. Also make sure that you loaded the week 2 dataset file.
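As a sketch, loading a CSV with `read.csv()` looks like the following. The filename here is a stand-in, not the real name of your week_03 file; to keep the example self-contained, it first writes a tiny CSV to disk.

```r
# Write a tiny example CSV so this sketch is self-contained; with your real
# data you would skip this step and point read.csv() at your week_03 file.
writeLines(c("x,y", "1,2", "3,4"), "example.csv")

# read.csv() returns a data frame
w3 <- read.csv("example.csv")
nrow(w3)
```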
PC4. Get to know your data! Do whatever is necessary to summarize the new dataset. How many columns and rows are there? What are the appropriate summary statistics to report for each variable? What are the ranges, minimums, maximums, means, medians, and standard deviations of the variables? Draw histograms for all of the variables to get a sense of what the data look like. Save code to do all of these things.
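A sketch of common one-liners for this kind of exploration, shown here on R's built-in mtcars data as a stand-in for your own dataset:

```r
d <- mtcars          # stand-in for your week_03 data frame

dim(d)               # number of rows and columns
summary(d)           # min, max, quartiles, mean for every column
sapply(d, sd)        # standard deviation of each column
hist(d$mpg)          # histogram of one variable; repeat for each column
```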
PC5. Compare the week2.dataset vector with the first column (x) of the data frame. I mentioned in the video lecture that they are similar. Do you agree? How similar? Write R code to demonstrate or support your answer convincingly.
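In general, R offers a few levels of strictness for comparing two numeric vectors. The vectors below are made up for illustration; your week2.dataset and x come from the assignment files.

```r
a <- c(1, 2, 3)
b <- a + 1e-12           # nearly, but not exactly, identical to a

identical(a, b)          # exact equality (here: FALSE)
isTRUE(all.equal(a, b))  # equality within a small numeric tolerance (here: TRUE)
summary(a - b)           # inspect the distribution of the differences
```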
PC6. Visualize the data using ggplot2 and the geom_point() function. Graphing x on the x-axis and y on the y-axis seems pretty reasonable! If only it were always so easy! Graph i, j, and k on other dimensions (e.g., color, shape, and size seem reasonable). Did you run into any trouble? How would you work around this?
PC7. A very common step when you import data and prepare for analysis is cleaning and coding the data. Some of that is needed here. As is very common, i and j are really dichotomous "true/false" variables, but they are coded as 0 and 1 in this dataset. Recode these columns as logical. The variable k is really a categorical variable. Recode it as a factor and change the numbers into their textual "meaning" to make interpretation easier. Here's the relevant piece of the codebook (i.e., mapping): 0=none, 1=some, 2=lots, 3=all. The goal is to end up with a factor where those text strings are the levels of the factor. I haven't shown you how to do exactly this, but you can solve it with things I have shown you. Or you can try to find a recipe online.
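One generic recipe for this kind of recoding, shown on a toy data frame: the values below are made up, while the column names and the codebook mapping come from the assignment.

```r
# Toy stand-in for the real dataset
d <- data.frame(i = c(0, 1, 1), j = c(1, 0, 1), k = c(0, 2, 3))

# Dichotomous 0/1 columns become logical TRUE/FALSE
d$i <- as.logical(d$i)
d$j <- as.logical(d$j)

# The categorical column becomes a factor with the codebook's labels as levels
d$k <- factor(d$k, levels = c(0, 1, 2, 3),
              labels = c("none", "some", "lots", "all"))
levels(d$k)
```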
PC8. Take column i and set it equal to NA when it is FALSE (originally 0). Then set all the values that are NA back to FALSE. Sorry for the busy work! ;)
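The round trip above can be sketched on a made-up logical vector:

```r
i <- c(TRUE, FALSE, TRUE, FALSE)   # toy stand-in for column i

i[i == FALSE] <- NA    # FALSE values become NA
i[is.na(i)] <- FALSE   # ...and NA values become FALSE again

i                      # back where we started
```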
PC9. Now that you have recoded your data in PC7, generate new summaries for those three variables. Also, go back and regenerate the visualizations. How have these changed? How are they different from the summaries you presented above?
PC10. As always, save your work for all of the questions above as an R script. Commit that R script to your homework git repository and sync/push it to GitHub. Verify that it is online on the GitHub website at the URL linked to from the Statistics and Statistical Programming (Winter 2017)/List of student git repositories page.

Statistical Questions

Exercises from OpenIntro §3

Q0. Any questions or clarifications from the OpenIntro text or lecture notes?
Q1. Exercise 3.4 on triathlon times
Q2. Exercise 3.6 which is basically a continuation of 3.4
Q3. Exercise 3.18 on evaluating normal approximation
Q4. Exercise 3.32 on arachnophobia (spiders are a frequent concern in statistical programming)

Empirical Paper

There will be no empirical paper this week. Understanding probability distributions is fundamental to statistics, but few people really end there, so it's hard to find a paper that is just about this.