Community Data Science Course (Spring 2023)/Week 5 coding challenges

From CommunityData

There's actually nothing to download this time, so you can simply start with a fresh Jupyter notebook! Be sure to give it a nice, descriptive name, as always.

Although there's nothing to download, you will likely want to consult the following resources when working through the first half of these challenges:

#1 Wikipedia Page View API

  1. Identify a famous person who has been famous for at least a few years and that you have some personal interest in. Use the Wikimedia API to collect page view data from the English Wikipedia article on that person. Now use that data to generate a time-series visualization and include a link to it in your notebook.
  2. Identify two other language editions of Wikipedia that have articles on that person. Collect page view data for the article in those other languages and create a single visualization that shows how the dynamics are similar and/or different. (Note: My approach involved creating a TSV file with multiple columns.)
  3. Collect page view data on Marvel Comics and DC Comics in Wikipedia. (If you'd rather replace these examples with some other comparison of popular rivals, that's fine.)
    1. Which has more total page views in 2022?
    2. Can you draw a visualization of this?
    3. Were there years since 2015 when the less-viewed page was viewed more? How many, and which ones?
    4. Were there any months when this was true? How many, and which ones?
    5. How about any days? How many?
  4. I've made available this file, which contains a list of several hundred titles of Wikipedia articles about Harry Potter [Forthcoming].[*] I think it's all of them! Download this file, read it in, and request monthly page view data for all of them.
    1. Once you've done this, sum up the page views across all of the pages and write out a TSV file with these totals.
    2. Make a time series graph of these numbers and include a link in your notebook.
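To get started on the challenges above, here is a minimal sketch of one request to the Wikimedia Pageviews REST API. The article title, date range, and contact address in the example are placeholders you should replace with your own:

```python
import requests

# Wikimedia Pageviews REST API URL template. "all-access" and "user"
# filter to human page views across all platforms; see the API
# documentation for the other options.
ENDPOINT = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/"
            "per-article/{project}/all-access/user/{article}/"
            "monthly/{start}/{end}")

def get_monthly_views(article, project="en.wikipedia.org",
                      start="20220101", end="20221231"):
    """Return {timestamp: views} for one article over the date range."""
    url = ENDPOINT.format(project=project, article=article,
                          start=start, end=end)
    # Wikimedia asks clients to identify themselves with a User-Agent.
    headers = {"User-Agent": "week5-challenges (you@example.com)"}
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    # Each item carries a "timestamp" (YYYYMMDDHH) and a "views" count.
    return {item["timestamp"]: item["views"]
            for item in response.json()["items"]}
```

Swapping in a different `project` (e.g., `fr.wikipedia.org`) is all it takes to compare language editions, and looping `get_monthly_views` over a list of titles covers the Harry Potter challenge.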

#2 Starting on your projects

If you are planning on collecting Reddit data, please look into using the Pushshift API instead of the default Reddit API. The Pushshift API is not as up-to-date, but it is targeted toward data scientists rather than app-makers and is much better suited to our needs in this class.
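For those going the Pushshift route, here is a minimal sketch of one call to its submission-search endpoint. The subreddit name and parameters are placeholders; check the Pushshift documentation for the full set of endpoints and parameters:

```python
import requests

# Pushshift endpoint for searching Reddit submissions; there is a
# parallel /reddit/search/comment/ endpoint for comments.
PUSHSHIFT_URL = "https://api.pushshift.io/reddit/search/submission/"

def build_params(subreddit, size=25, after=None):
    """Assemble query parameters; 'after' restricts results to newer posts."""
    params = {"subreddit": subreddit, "size": size}
    if after is not None:
        params["after"] = after
    return params

def search_submissions(subreddit, **kwargs):
    """Fetch one batch of submissions from a single subreddit."""
    response = requests.get(PUSHSHIFT_URL,
                            params=build_params(subreddit, **kwargs))
    response.raise_for_status()
    return response.json()["data"]
```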

Many of these challenges will not involve code. Feel free to write your answers in Markdown cells in your notebook.

  1. Identify an API you will (or might!) want to use for your project.
  2. Find documentation for that API and include links to it in your notebook.
  3. What are the endpoints you plan to use? What are the parameters you will need to use?
  4. Is there a Python module that helps you make contact with the API? (See if you can find example code showing how to use it.)
  5. If so, download it, install it, and import it into your notebook.
  6. Does the API require authentication? Does it need to be approved? If so, sign up for a developer account and get your keys.
  7. Does the API list rate limits?
  8. Make a single API call, either directly using requests or using the Python module you found. It doesn't matter what the call is for; the goal is to confirm that you can make technical contact.
  9. IMPORTANT: If you have included any API keys in your notebook, make a copy of your notebook, delete the cell where you include the keys, and then upload the copy of the notebook. We'll show you some tricks for hiding this information going forward.
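The steps above can be sketched as a single "technical contact" call that keeps your key out of the notebook from the start. The environment variable name (`MY_API_KEY`) and the Bearer-token header are assumptions for illustration; your API's documentation will specify its own authentication scheme:

```python
import os
import requests

# Read the key from an environment variable (or a local file you never
# upload) rather than pasting it into a notebook cell.
API_KEY = os.environ.get("MY_API_KEY", "")

def auth_headers(key):
    """Build an Authorization header; many (but not all) APIs use Bearer tokens."""
    return {"Authorization": f"Bearer {key}"} if key else {}

def make_contact(url):
    """Make one GET request -- any parsed response proves technical contact."""
    response = requests.get(url, headers=auth_headers(API_KEY))
    # raise_for_status() turns HTTP error codes into Python exceptions.
    response.raise_for_status()
    return response.json()
```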

Notes

[*] You will probably not be shocked to hear that I collected this data from an API! I've included a Jupyter Notebook showing how I used that API online here. [Forthcoming]