Statistics and Statistical Programming (Winter 2021)/Problem set 5

From CommunityData

OpenIntro Exercises

  • Complete exercises from OpenIntro §3: 3.12, 3.15, 3.22, 3.28, 3.34, 3.38

Programming Challenges

For this problem set, the programming challenges focus on some of the more advanced fundamentals of R, including some of the new types of data import, transformation, tidying, and visualization introduced in the most recent R tutorial.

The programming challenges below ask you to perform a series of fairly typical data import, exploration, tidying, and descriptive analysis steps. Once again, you'll work with some "fake" data that Mako created to ensure consistency and illustrate some useful points. The most recent R tutorials and problem set worked solutions contain example code that should help you do everything asked of you here. From this point forward, we will start to assume that you have become familiar with the basic fundamentals (e.g., creating your R Markdown script or notebook) and that you have some idea of where to turn for help and more information when you need it. That said, you should always seek whatever help you need at any time, whether online in Discord, from your peers, or from me.

Note: if you have trouble accessing or importing your dataset, please reach out for help ASAP, as you will only be able to do the other challenges once you've done that one!

PC0. Get started

Create and set up the metadata for a new R Markdown script or notebook for this week's problem set (as usual). Make sure to confirm that R has the working directory location that you want.
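A minimal sketch of the working-directory check (the path in the comment is just a placeholder, not a real course path):

```r
# Confirm the working directory R will use for this notebook
getwd()

# If it's not what you want, set it; the path below is a placeholder only:
# setwd("~/stats-and-stats-programming/ps5")
```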

PC1. Import data from a .csv file

Revisit your problem set code from Statistics and Statistical Programming (Winter 2021)/Problem set 4 and recall what group number you were in (it should be an integer between 1 and 20). Hopefully it's recorded in your notebook! If not, generate a new one and make sure it's recorded this time!

Navigate to the datasets/problem_set_5 subdirectory in the class Dropbox folder and import the .csv file with your number (e.g., group_<output>.csv). Note that it is a .csv file and you'll need to use an appropriate procedure/commands to import it!

Recommended sub-challenge: Inspect the dataset directly before you import. You might download the .csv file and use spreadsheet software (e.g., Google docs, LibreOffice, Excel, etc.) to do this. I often prefer to look at the first few lines of a new dataset in a "raw" format via the command line or a text editor (e.g., Notepad) so that I can inspect the structure. This can help you figure out how best to import the data into R and clue you into any immediate data cleanup/tidying steps you'll need to take after import (e.g., do the columns have headers? are numbers/text formatted differently?). I won't ask about this in class, but I do recommend it for reasons I describe in the tutorial.
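A sketch of both steps (raw inspection, then import). To keep it runnable anywhere, it first writes a tiny stand-in file; for the assignment, point the same commands at your group_<output>.csv file instead. The column names here are just guesses based on the variables mentioned later in this problem set:

```r
# Write a small stand-in .csv so this example is self-contained
demo.path <- tempfile(fileext = ".csv")
writeLines(c("x,y,i,j,k",
             "1.2,0.5,0,1,2",
             "3.4,1.1,1,0,0"), demo.path)

# Recommended first step: peek at the raw lines to check for headers,
# delimiters, and formatting oddities before importing
readLines(demo.path, n = 3)

# Then import; header = TRUE is the read.csv() default
d <- read.csv(demo.path)
head(d)
```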

PC2. Explore and describe the data

Take appropriate steps to gain a basic understanding of this dataset.

  • How many columns and rows are there? What classes/types are the variables/columns?
  • What appropriate summary statistics can you provide for each variable (e.g., what are the range, center, and spread of the continuous variables?).
  • Generate univariate tables and visualizations (e.g., boxplots or histograms) to get a sense of what they look like.

If there are additional steps you'd like to take, feel free to do so.
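The bullet points above can be sketched with base R commands like these (shown on a toy data frame; substitute the data you imported in PC1):

```r
# Toy stand-in for the imported dataset
d <- data.frame(x = c(1.2, 3.4, 2.2, 0.7), k = c(0, 2, 1, 2))

dim(d)            # number of rows and columns
sapply(d, class)  # class/type of each column
summary(d)        # range, center, and spread of each variable
table(d$k)        # univariate table for a discrete variable
hist(d$x)         # univariate visualization of a continuous variable
```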

PC3. Use and write user-defined functions

Use the example function, my.mean() distributed in the most recent R tutorial materials to calculate the mean of the variable (column) named x in your dataset. Now, write your own function to calculate the median of x. Be ready to walk us through how your function works!
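One possible shape for such a function (my.median is my own name for it, and the sort-based approach below is just one of several reasonable implementations):

```r
# A user-defined median, parallel in spirit to the my.mean() example
my.median <- function (v) {
  v <- sort(v)       # order the values (note: sort() silently drops NAs)
  n <- length(v)
  if (n %% 2 == 1) {
    v[(n + 1) / 2]                   # odd length: the middle value
  } else {
    (v[n / 2] + v[n / 2 + 1]) / 2    # even length: mean of the two middle values
  }
}

my.median(c(5, 1, 3))      # odd-length example
my.median(c(4, 1, 3, 2))   # even-length example
```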

PC4. Compare two vectors

Load your vector from Statistics and Statistical Programming (Winter 2021)/Problem set 4 again (you might want to give it a new name) and perform the same cleanup steps you did in PC2.5 and PC2.6 last week (recode negative values as missing and log-transform the data). Now, compare that cleaned vector from last week with the first column (x) of the dataset you just imported from a .csv file for this assignment. They should be similar, but are they exactly the same? Use R code to show your answer.
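Some comparison tools you might reach for, sketched with made-up stand-ins for last week's cleaned vector and this week's column x:

```r
# Hypothetical stand-ins for the two vectors being compared
old.x <- c(0.1, 0.2, NA, 0.4)
new.x <- c(0.1, 0.2, NA, 0.40001)

identical(old.x, new.x)   # exact match (including NA placement)?
all.equal(old.x, new.x)   # equal up to a small numerical tolerance?

# Elementwise comparison; note that comparisons involving NA stay NA
table(old.x == new.x, useNA = "ifany")
```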

PC5. Cleanup/tidy your data

Once again, some cleanup and recoding is needed for this week's data. It turns out that the variables i and j are really dichotomous "true/false" variables that have been coded as 0 ("false") and 1 ("true") in this dataset. Recode these columns as logical (i.e., TRUE or FALSE values). The variable k is really a categorical variable. Recode k as a factor and replace the numbers with the following values or levels: 0="none", 1="some", 2="lots", 3="all". Note: your data file may contain only the values 1, 2, and 3. The goal is to end up with a factor (so, for example, is.factor() applied to the column should return TRUE) where those text strings are the levels of the factor.
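A sketch of both recoding steps on a toy data frame (substitute your imported data for d):

```r
# Toy stand-in for this week's dataset
d <- data.frame(i = c(0, 1, 1), j = c(1, 0, 1), k = c(1, 2, 3))

# Dichotomous 0/1 columns become logical FALSE/TRUE
d$i <- as.logical(d$i)
d$j <- as.logical(d$j)

# Recode k as a factor; listing all four levels keeps "none" as a valid
# level even if the file only contains the values 1, 2, and 3
d$k <- factor(d$k, levels = c(0, 1, 2, 3),
              labels = c("none", "some", "lots", "all"))

is.factor(d$k)   # should be TRUE
levels(d$k)
```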

PC6. Calculate conditional summary statistics

It's common to consider the conditional distributions of a continuous variable within the levels of a second, categorical variable. Please describe the distribution of x within each of the four levels of k. For each level of k calculate the mean and standard deviation of x.
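One compact way to do this in base R is tapply(), which applies a function to a variable within each level of a factor (toy data below; substitute your recoded dataset):

```r
# Toy stand-in: x within levels of the factor k
d <- data.frame(x = c(1, 2, 3, 4, 5, 6),
                k = factor(c("none", "none", "some", "some", "lots", "lots"),
                           levels = c("none", "some", "lots", "all")))

# Conditional mean and standard deviation of x within each level of k;
# an empty level (here "all") comes back as NA
tapply(d$x, d$k, mean, na.rm = TRUE)
tapply(d$x, d$k, sd, na.rm = TRUE)
```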

PC7. Create a bivariate table

Now that you have some categorical variables to work with, let's go ahead and create a bivariate table so that you can examine the distributions of some of these values. Use the table() command to create a cross-tabulation of the recoded versions of the k variable and the j variable.
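A minimal sketch of the cross-tabulation, using small made-up vectors in place of your recoded columns:

```r
# Stand-ins for the recoded k (factor) and j (logical) columns
k <- factor(c("none", "some", "some", "lots"),
            levels = c("none", "some", "lots", "all"))
j <- c(TRUE, FALSE, TRUE, TRUE)

# Bivariate contingency table: levels of k in rows, values of j in columns
table(k, j)
```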

PC8. Create a bivariate visualization

Visualize two variables in this week's dataset using ggplot2 and the geom_point() function to produce a scatterplot of x on the x-axis and y on the y-axis. Optional bonus: Incorporate any of the other variables on other dimensions (e.g., color, shape, and/or size are all good options). If you run into any issues plotting these dimensions, revisit the examples in the tutorial and the ggplot2 documentation and consider that ggplot2 can be very picky about the classes of objects.
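A sketch of the scatterplot with one bonus dimension mapped to color (the toy data frame below stands in for your dataset):

```r
library(ggplot2)

# Toy stand-in for this week's dataset
d <- data.frame(x = runif(20),
                y = rnorm(20),
                k = factor(sample(c("none", "some", "lots"), 20, replace = TRUE)))

# x vs. y scatterplot; mapping k to color is the optional bonus dimension.
# Note that mapping a numeric 0/1 column to color gives a gradient rather
# than discrete colors, which is one way ggplot2 is picky about classes.
p <- ggplot(d, aes(x = x, y = y, color = k)) +
  geom_point()
p
```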

Statistical Questions

SQ1. Interpret bivariate analyses

Return to the dataset you imported and worked with in the programming challenges above. Imagine that it comes from a year-long study of bicyclists using a combination of survey and ride-tracking data from the Divvy bikeshare members in the Chicagoland area conducted a few years ago (let's say 2018, just to pick a year). Each row in the data corresponds to a single Divvy cyclist/member and the variables correspond to the following measures:

  • x: Average daily distance cycled (in miles) measured via bicycle dock check-in/check-out data.
  • j: An indicator (True/False) of whether any rides were recorded between January and March.
  • l: An indicator (True/False) of whether the cyclist also uses vehicle rideshare provided by Lyft (the company that owns Divvy).
  • k: A measure of how frequently the cyclist rode in bad weather, with bad weather defined using a standard measure provided by the U.S. NOAA (National Oceanic and Atmospheric Administration) and the categories (none, some, lots, all) defined in terms of empirical quartiles within the dataset.
  • y: A continuous measure of income calculated in tens of thousands of dollars and scaled so that "0" = average income for a Divvy member (i.e., a value of "5" = $50,000 more per year than an average Divvy member).
  1. Return to the conditional means you created in PC6 above. Given the information you now have about the study, how would you interpret them? Does there seem to be any sort of relationship between the two variables?
  2. Return to the bivariate contingency table you created in PC7 above. Given the information you now have about the study, how would you interpret it? Does there seem to be any sort of relationship between the two variables?
  3. Return to the scatterplot you created in PC8 above. Given the information you now have about the study, how would you interpret it? Does there seem to be any sort of relationship between the two variables?

SQ2. Birthdays revisited (Optional bonus!)

Optional bonus statistical question

We talked about birthdays in the context of one of the textbook exercises for OpenIntro Chapter 3. Here's an opportunity to apply your knowledge and extend that exercise. Note that you can absolutely use R to help calculate the solutions to both parts of this problem. That said, it's a super famous problem and answers/examples are all over the internet, so if you want to challenge yourself, don't look at them while you're working on it! The only hint I'll give you is that you may find binomial coefficients useful and the choose() function can calculate them for you in R.

  1. The first time I taught this course, there were 25 people in it (including the members of the teaching team). Imagine that I offered you a choice between two bets: Bet #1 is determined by the flip of a fair coin. You can choose heads or tails and you win the bet if your choice turns out to be correct. Bet #2 is determined by whether any two members of that previous version of the class shared a birthday. If a birthday was shared, I win the bet; if no birthdays were shared, you win the bet. Assuming you want the best chance of winning, which bet should you choose?
  2. Now calculate the probability that any two members of our 7-person class share a birthday and compare this probability with the results of SQ2.1 above.
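If you want to check your work after trying the problem yourself, one standard way to compute this probability (using the classic simplification of 365 equally likely birthdays, via a product rather than the choose()-based route hinted at above) is:

```r
# Probability that at least two of n people share a birthday, assuming
# 365 equally likely birthdays and independence between people
p.shared <- function (n) 1 - prod((365 - 0:(n - 1)) / 365)

p.shared(25)   # the 25-person class
p.shared(7)    # the 7-person class
```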