Community Data Science Course (Spring 2023)/Week 5 coding challenges

There's actually nothing to download this time, so you simply start with a fresh Jupyter notebook! Be sure to give it a nice, descriptive name, as always.

Although there's nothing to download, you will likely want to look at the following resources when working through the first half of these challenges:

#1 Wikipedia Page View API

  1. Identify a famous person who has been famous for at least a few years and whom you have some personal interest in. Use the Wikimedia API to collect page view data from the English Wikipedia article on that person. Now use that data to generate a time-series visualization and include a link to it in your notebook. (A minimal sketch of this kind of API call appears after this list.)
  2. Identify two other language editions of Wikipedia that have articles on that person. Collect page view data on the article in those other languages and create a single visualization that shows how the dynamics are similar and/or different. (Note: My approach involved creating a TSV file with multiple columns.)
  3. Collect page view data on the articles about Marvel Comics and DC Comics in English Wikipedia. (If you'd rather replace these examples with some other comparison of popular rivals, that's just as good!)
    1. Which has more total page views in 2022?
    2. Can you draw a visualization in a spreadsheet that shows this? (Again, provide a link.)
    3. Were there years since 2015 when the less-viewed page was viewed more? How many and which ones?
    4. Were there any months when this was true? How many and which ones?
    5. How about any days? How many?
  4. I've made this file available (https://github.com/kayleachampion/spr23_CDSW/blob/main/curriculum/week5/list_of_washington_alternative_rocks_bands_wikipedia-2023-04-25.jsonl), which includes a list of more than 100 Wikipedia articles about alternative rock bands from Washington state that I built from this category in Wikipedia (https://en.wikipedia.org/wiki/Category:Alternative_rock_groups_from_Washington_(state)).[*] It's a .jsonl file. Download the file (click "raw" and then save the file onto your drive). Now read it in, and request monthly page view data for all of them. If you need some help with loading it in, I've included some sample code at the bottom of this page; a rough loading-and-summing sketch also appears after this list.
    1. Once you've done this, sum up all of the page views from all of the pages and print out a TSV file with these total numbers.
    2. You know the routine by now! Now, make a time series graph of these numbers and include a link in your notebook.
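Items 1 through 3 above all rest on the same basic request. Here is a minimal sketch of one such call, assuming the per-article endpoint of the Wikimedia Pageviews REST API; the article title, date range, and User-Agent string are placeholders you should replace with your own:

    import requests

    # The per-article Pageviews endpoint takes the project, access method,
    # agent type, article title, granularity, and start/end dates as path
    # components.
    URL_TEMPLATE = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        "{project}/all-access/user/{article}/monthly/{start}/{end}"
    )

    url = URL_TEMPLATE.format(
        project="en.wikipedia",  # e.g., "fr.wikipedia" for another language edition
        article="Kurt_Cobain",   # placeholder; spaces in titles become underscores
        start="20220101",
        end="20221231",
    )

    # Wikimedia asks API clients to identify themselves with a User-Agent header.
    headers = {"User-Agent": "CDSW student notebook (your-email@example.com)"}

    data = requests.get(url, headers=headers).json()

    # Each item carries a timestamp like "2022010100" and a "views" count,
    # so this prints TSV-ready month/views pairs you can paste into a sheet.
    for item in data["items"]:
        print(item["timestamp"][:6], item["views"], sep="\t")

For item 2, the same call works against other language editions by swapping the project code; for item 3's monthly and daily comparisons, change the granularity from monthly to daily and group the timestamps by year, month, or day.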
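For item 4, here is a rough sketch of loading the .jsonl file and summing monthly views across all of the bands. The date range, the output filename, and the assumption that each JSON object stores the article title under a "title" key are my own choices; check the file itself (and the sample code at the bottom of the page) for the real field names:

    import json
    import requests
    from urllib.parse import quote

    URL_TEMPLATE = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        "en.wikipedia/all-access/user/{article}/monthly/20150701/20230401"
    )
    headers = {"User-Agent": "CDSW student notebook (your-email@example.com)"}

    # Each line of a .jsonl file is a standalone JSON document.
    articles = []
    with open("list_of_washington_alternative_rocks_bands_wikipedia-2023-04-25.jsonl") as f:
        for line in f:
            articles.append(json.loads(line))

    totals = {}  # month ("YYYYMM") -> views summed across every band
    for article in articles:
        # Assumption: the article title lives under a "title" key.
        title = quote(article["title"].replace(" ", "_"), safe="")
        response = requests.get(URL_TEMPLATE.format(article=title), headers=headers)
        if response.status_code != 200:
            continue  # skip articles the API has no pageview data for
        for item in response.json()["items"]:
            month = item["timestamp"][:6]
            totals[month] = totals.get(month, 0) + item["views"]

    # Write the monthly totals out as a TSV you can chart in a spreadsheet.
    with open("washington_band_views.tsv", "w") as f:
        f.write("month\tviews\n")
        for month in sorted(totals):
            f.write(f"{month}\t{totals[month]}\n")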

#2 Starting on your projects

Note: If you are planning on collecting data from Reddit, please look into using the Pushshift API instead of the default Reddit API. The Pushshift API is not as up-to-date, but it is targeted toward data scientists, not app-makers, and is much better suited to our needs in the class.

In this section, you will take your first steps towards working with your project API. Many of these questions will not involve code, so just mark down your answers in "markdown" cells in your notebook. Feel free to document any findings you think might be useful as you continue to work on your project; you might thank yourself later!

  1. Identify an API you will (or might!) want to use for your project.
  2. Find documentation for that API and include links in your notebook.
  3. What are the API endpoints you plan to use? What are the parameters you will need to use at that endpoint?
  4. Is there an existing Python module that helps you talk to the API? (See if you can find example code on how to use it.)
    1. If so, download it, install it, and import it into your notebook.
  5. Does the API require authentication? Does it need to be approved?
    1. If so, sign up for a developer account and get your keys. (Do this early because it often takes time for these accounts to be approved.)
  6. Does the API list rate limits? Does it make any requests about how you should use it?
  7. Make a single API call, either directly using requests or using the Python module you have found. It doesn't matter for what. The goal is just to get something back. (A generic sketch appears after this list.)
  8. IMPORTANT: If you have included any API keys in your notebook, make a copy of your notebook and delete the cell where you include the keys before you upload the copy. We'll show you some tricks for hiding this information going forward.
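For steps 7 and 8, here is a generic sketch of a single authenticated call with requests. The endpoint, parameters, and authorization header are hypothetical stand-ins, and the key is read from an environment variable so it never appears in a notebook cell; substitute the real details from your API's documentation:

    import os
    import requests

    # Set the variable outside the notebook (for example, in your shell with
    #   export MY_API_KEY="..."
    # ) so the key is never pasted into a cell you might upload.
    api_key = os.environ["MY_API_KEY"]

    response = requests.get(
        "https://api.example.com/v1/things",             # hypothetical endpoint
        params={"q": "test", "limit": 1},                # hypothetical parameters
        headers={"Authorization": f"Bearer {api_key}"},  # auth scheme varies by API
    )

    # All that matters at this stage is that you got something back.
    print(response.status_code)
    print(response.json())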

Notes

[*] You will probably not be shocked to hear that I collected this data from an API! I've included a Jupyter Notebook with the code to grab that data from the PetScan API online here. [Forthcoming]