Human Centered Data Science (Fall 2018)/Assignments
=== A1: Data curation ===
The goal of this assignment is to construct, analyze, and publish a dataset of monthly traffic on English Wikipedia from January 1, 2008 through September 30, 2018. All analysis should be performed in a single Jupyter notebook and all data, documentation, and code should be published in a single GitHub repository.
==== Step 1: Data acquisition ====
In order to measure Wikipedia traffic from 2008 through 2018, you will need to collect data from two different API endpoints, the Legacy Pagecounts API and the Pageviews API.
# The '''Legacy Pagecounts API''' ([https://wikitech.wikimedia.org/wiki/Analytics/AQS/Legacy_Pagecounts documentation], [https://wikimedia.org/api/rest_v1/#!/Pagecounts_data_(legacy)/get_metrics_legacy_pagecounts_aggregate_project_access_site_granularity_start_end endpoint]) provides access to desktop and mobile traffic data from January 2008 through July 2016.
# The '''Pageviews API''' ([https://wikitech.wikimedia.org/wiki/Analytics/AQS/Pageviews documentation], [https://wikimedia.org/api/rest_v1/#!/Pageviews_data/get_metrics_pageviews_aggregate_project_access_agent_granularity_start_end endpoint]) provides access to desktop, mobile web, and mobile app traffic data from July 2015 through September 2018.
For each API, you will need to collect data ''for all months where data is available'' and then save the raw results into 5 separate JSON source data files (one file per API query type) before continuing to step 2.

Your JSON-formatted source data files must contain the complete and un-edited output of your API queries. The naming convention for the source data files is:
 apiname_accesstype_firstmonth-lastmonth.json
For example, your filename for monthly page views on desktop should be:
 pagecounts_desktop-site_200801-201607.json
'''Important notes:'''
# The Pageviews API allows you to filter by agent type (e.g. <tt>agent=user</tt>) to exclude spider and crawler traffic, while the Legacy Pagecounts API provides no such filter, so expect a discontinuity where the two series meet.
# The two APIs overlap for roughly a year (July 2015 through July 2016), so you will have two traffic measurements for those months. Collect the full date range from each API anyway.
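As a starting point, here is a minimal sketch of one such API call using the <tt>requests</tt> library. The parameter values, the identifying headers, and the output filename are illustrative assumptions; consult the linked API documentation for the exact parameters each of your five queries requires.

<syntaxhighlight lang="python">
import json
import requests

# Pageviews API endpoint, per the documentation linked above.
ENDPOINT = ('https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/'
            '{project}/{access}/{agent}/{granularity}/{start}/{end}')

params = {
    'project': 'en.wikipedia.org',
    'access': 'desktop',       # also: 'mobile-web', 'mobile-app'
    'agent': 'user',           # excludes spider/crawler traffic
    'granularity': 'monthly',
    'start': '2015070100',     # YYYYMMDDHH
    'end': '2018100100',       # first day of the month after your last month of data
}

# Identify yourself to the API (placeholder values -- use your own).
headers = {'User-Agent': 'https://github.com/your_username', 'From': 'your_email@uw.edu'}

response = requests.get(ENDPOINT.format(**params), headers=headers)
data = response.json()

# Save the complete, un-edited API output as one of your JSON source data files.
with open('pageviews_desktop_201507-201809.json', 'w') as f:
    json.dump(data, f)
</syntaxhighlight>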
==== Step 2: Data processing ====
You will need to perform a series of processing steps on these data files in order to prepare them for analysis, combining the five JSON source files into a single CSV-formatted data file.

The final data file should be named:
 en-wikipedia_traffic_200801-201809.csv
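As one possible approach, the sketch below combines the five JSON source files with <tt>pandas</tt>. The filenames and output column names are assumptions based on the naming convention above; adapt them to your own files and chosen schema. (Note that the Legacy Pagecounts API reports a <tt>count</tt> field while the Pageviews API reports <tt>views</tt>.)

<syntaxhighlight lang="python">
import json
import pandas as pd

def load_series(path, count_field, name):
    """Load one JSON source file into a (year, month, <name>) DataFrame."""
    with open(path) as f:
        items = json.load(f)['items']
    df = pd.DataFrame(items)
    # Both APIs return a YYYYMMDDHH timestamp string; split it into year/month.
    df['year'] = df['timestamp'].str[:4]
    df['month'] = df['timestamp'].str[4:6]
    return df[['year', 'month', count_field]].rename(columns={count_field: name})

# Assumed filenames; the legacy API reports 'count', the Pageviews API 'views'.
desktop_pc = load_series('pagecounts_desktop-site_200801-201607.json', 'count', 'pagecount_desktop_views')
mobile_pc = load_series('pagecounts_mobile-site_200801-201607.json', 'count', 'pagecount_mobile_views')
desktop_pv = load_series('pageviews_desktop_201507-201809.json', 'views', 'pageview_desktop_views')
mobile_web = load_series('pageviews_mobile-web_201507-201809.json', 'views', 'mobile_web')
mobile_app = load_series('pageviews_mobile-app_201507-201809.json', 'views', 'mobile_app')

# Sum mobile web and mobile app into a single Pageviews mobile series.
mobile_pv = mobile_web.merge(mobile_app, on=['year', 'month'])
mobile_pv['pageview_mobile_views'] = mobile_pv['mobile_web'] + mobile_pv['mobile_app']
mobile_pv = mobile_pv[['year', 'month', 'pageview_mobile_views']]

# Outer-join all series on (year, month) so months missing from one API keep their rows.
combined = desktop_pc
for df in (mobile_pc, desktop_pv, mobile_pv):
    combined = combined.merge(df, on=['year', 'month'], how='outer')

combined = combined.sort_values(['year', 'month']).fillna(0)
combined.to_csv('en-wikipedia_traffic_200801-201809.csv', index=False)
</syntaxhighlight>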
==== Step 3: Analysis ====
[[File:PlotPageviewsEN_overlap.png|200px|thumb|A sample visualization of pageview traffic data.]]
For this assignment, the "analysis" will be fairly straightforward: you will visualize the dataset you have created as a time series graph.

Your visualization will track three traffic metrics: mobile traffic, desktop traffic, and all traffic (mobile + desktop).

Your visualization should look similar to the example graph above, which is based on the same data you'll be using! The only big difference should be that your mobile traffic data will only go back to October 2014, since the API does not provide monthly traffic data going back to 2010.

In order to complete the analysis correctly and receive full credit, your graph will need to be the right scale to view the data; all units, axes, and values should be clearly labeled; and the graph should include a key and a title. You must also generate a .png or .jpeg formatted image of your final graph.

You may choose to graph the data in Python, in your notebook, as sketched below. If you decide to use Google Sheets or some other open, public data visualization platform to build your graph, link to it in the README, and make sure the sharing settings allow anyone who clicks on the link to view the graph and download the data!
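A minimal plotting sketch with <tt>matplotlib</tt>, assuming the final CSV schema from the Step 2 sketch (the filename and column names are assumptions; substitute your own):

<syntaxhighlight lang="python">
import pandas as pd
import matplotlib.pyplot as plt

# Load the final data file produced in Step 2 (assumed filename and columns).
df = pd.read_csv('en-wikipedia_traffic_200801-201809.csv')
df['date'] = pd.to_datetime(df['year'].astype(str) + '-' + df['month'].astype(int).astype(str).str.zfill(2))
# Tip: you may want to replace filled zeros with NaN so missing months do not plot as 0.

fig, ax = plt.subplots(figsize=(12, 6))
for desktop_col, mobile_col, style, source in [
    ('pagecount_desktop_views', 'pagecount_mobile_views', '--', 'legacy pagecounts'),
    ('pageview_desktop_views', 'pageview_mobile_views', '-', 'pageviews'),
]:
    ax.plot(df['date'], df[desktop_col], style, label=f'Desktop ({source})')
    ax.plot(df['date'], df[mobile_col], style, label=f'Mobile ({source})')
    ax.plot(df['date'], df[desktop_col] + df[mobile_col], style, label=f'All ({source})')

ax.set_title('Page views on English Wikipedia, January 2008 - September 2018')
ax.set_xlabel('Month')
ax.set_ylabel('Page views per month')
ax.legend()
fig.savefig('en-wikipedia_traffic_200801-201809.png')
</syntaxhighlight>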
==== Step 4: Documentation ====

At minimum, your README file should:
* Describe the goal of the project.
* List the license of the source data and a link to the [https://foundation.wikimedia.org/wiki/Terms_of_Use Wikimedia Foundation terms of use].
* Link to all relevant API documentation.
* Describe the values of all fields in your final data file.
==== Submission instructions ====
#Complete your notebook and datasets in Jupyter Hub.
#Download the data-512-a1 directory from Jupyter Hub.
#Create the data-512-a1 repository on GitHub w/ your code and data.
#Complete and add your README and LICENSE file.
#Submit the link to your GitHub repo to: https://canvas.uw.edu/courses/1174178/assignments/3876066
==== Required deliverables ====
A repository on GitHub named <tt>data-512-a1</tt> that contains at minimum:
:# 5 source data files in JSON format that follow the specified naming convention.
:# 1 final data file in CSV format that follows the specified naming convention.
:# 1 Jupyter notebook named <tt>hcds-a1-data-curation</tt> that contains all code as well as information necessary to understand each programming step.
:# 1 README file in .txt or .md format that contains information to reproduce the analysis, including data descriptions, attributions and provenance information, and descriptions of all relevant resources and documentation (inside and outside the repo) and hyperlinks to those resources.
:# 1 LICENSE file that contains an [https://opensource.org/licenses/MIT MIT LICENSE] for your code.

==== Helpful tips ====
* Ask questions on Slack if you're unsure about anything
* When documenting/describing your project, think: "If I found this GitHub repo, and wanted to fully reproduce the analysis, what information would I want? What information would I need?"
=== A2: Bias in data ===
The goal of this assignment is to explore the concept of bias through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. For this assignment, you will combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article.
You are expected to perform an analysis of how the ''coverage'' of politicians on Wikipedia and the ''quality'' of articles about politicians varies between countries. Your analysis will consist of a series of tables that show:
# the 10 highest-ranked and 10 lowest-ranked countries in terms of the number of politician articles as a proportion of country population, and
# the 10 highest-ranked and 10 lowest-ranked countries in terms of the proportion of politician articles that are of high quality.
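Once you have combined the datasets (see the steps below), these per-country proportions are straightforward to compute. A minimal sketch with <tt>pandas</tt>, in which the merged filename, the column names, and the use of FA/GA predictions as "high quality" are illustrative assumptions:

<syntaxhighlight lang="python">
import pandas as pd

# Assumes you have already merged the article, quality, and population data
# into one table with (assumed) columns:
#   country, page, last_edit, article_quality, population
merged = pd.read_csv('merged_politician_data.csv')

grouped = merged.groupby('country')
per_country = pd.DataFrame({
    'articles': grouped['page'].count(),
    'population': grouped['population'].first(),
    'high_quality': grouped['article_quality'].apply(lambda q: q.isin(['FA', 'GA']).sum()),
})

# Coverage: politician articles per person; quality: share of FA/GA-predicted articles.
per_country['articles_per_population_pct'] = 100 * per_country['articles'] / per_country['population']
per_country['high_quality_pct'] = 100 * per_country['high_quality'] / per_country['articles']

# Example table: the 10 highest-ranked countries by coverage.
print(per_country.sort_values('articles_per_population_pct', ascending=False).head(10))
</syntaxhighlight>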
You are also expected to write a short reflection on the project that describes how this assignment helps you understand the causes and consequences of bias on Wikipedia.
==== Getting the article and population data ====
The first step is getting the data, which lives in several different places. The Wikipedia dataset can be found [https://figshare.com/articles/Untitled_Item/5513449 on Figshare]. Read through the documentation for this repository, then download and unzip it.

The population data is on the [http://www.prb.org/DataFinder/Topic/Rankings.aspx?ind=14 Population Reference Bureau website]. Download this data as a CSV file (hint: look for the 'Microsoft Excel' icon in the upper right).
==== Getting article quality predictions ====
Now you need to get the predicted quality scores for each article in the Wikipedia dataset. For this, you will use ORES ("Objective Revision Evaluation Service"), a machine learning system that estimates the quality of an article and assigns it one of 6 quality categories, from best to worst: FA (featured article), GA (good article), B, C, Start, and Stub.

For context, these quality classes are a sub-set of quality assessment categories developed by Wikipedia editors. If you're curious, you can read more about what these assessment classes mean on [https://en.wikipedia.org/wiki/Wikipedia:WikiProject_assessment#Grades English Wikipedia]. We will talk about what these categories mean, and how the ORES model predicts which category an article goes into, next week in class. For this assignment, you only need to know that these categories exist, and that ORES will assign one of these 6 categories to any article you send it.

The ORES API is configured fairly similarly to the pageviews API we used last assignment; documentation can be found [https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model here]. It expects a revision ID, which is the third column in the Wikipedia dataset, and a model, which is "wp10". The sample iPython notebook for this assignment provides an example of a correctly-structured API query that you can use to understand how to gather your data, and also to examine the query output.

In order to get article predictions for each article in the Wikipedia dataset, you will need to read <tt>page_data.csv</tt> into Python (or R), and then read through the dataset line by line, using the value of the <tt>last_edit</tt> column in the API query. If you're working in Python, the [https://docs.python.org/3/library/csv.html CSV module] will help with this.
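A minimal sketch of this loop, using <tt>requests</tt> against the ORES endpoint described in the documentation linked above (the <tt>page</tt> and <tt>country</tt> column names and the header values are assumptions; check them against the actual <tt>page_data.csv</tt>):

<syntaxhighlight lang="python">
import csv
import requests

# ORES scoring endpoint, per the documentation linked above.
ORES_ENDPOINT = 'https://ores.wikimedia.org/v3/scores/{context}/{revid}/{model}'
headers = {'User-Agent': 'https://github.com/your_username', 'From': 'your_email@uw.edu'}

def get_quality_prediction(rev_id):
    """Return the wp10 quality class ORES predicts for one revision, or None."""
    url = ORES_ENDPOINT.format(context='enwiki', revid=rev_id, model='wp10')
    response = requests.get(url, headers=headers).json()
    scores = response['enwiki']['scores'][str(rev_id)]['wp10']
    if 'error' in scores:      # some revision IDs cannot be scored
        return None
    return scores['score']['prediction']

rows = []
with open('page_data.csv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f):
        rows.append((row['page'], row['country'], get_quality_prediction(row['last_edit'])))
</syntaxhighlight>

Note that ORES also accepts batches of revision IDs in a single request (see the <tt>revids</tt> parameter in the documentation), which is considerably faster than scoring one revision at a time.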
==== Writeup ====
Write a few paragraphs, either in the README or in the notebook, reflecting on what you have learned, what you found, what (if anything) surprised you about your findings, and/or what theories you have about why any biases might exist (if you find they exist). You can also include any questions this assignment raised for you about bias, Wikipedia, or machine learning.

==== Submission instructions ====
#Create the data-512-a2 repository on GitHub w/ your code and data.
#Complete and add your README and LICENSE file.
#Submit the link to your GitHub repo to: https://canvas.uw.edu/courses/1174178/assignments/3876068

==== Required deliverables ====
A repository on GitHub named <tt>data-512-a2</tt> that contains at minimum:
:# 1 final data file in CSV format that follows the formatting conventions.
:# 1 Jupyter notebook named <tt>hcds-a2-bias</tt> that contains all code and the information necessary to understand each programming step, as well as your writeup (if you have not included it in the README) and the tables.
:# 1 README file in .txt or .md format that contains information to reproduce the analysis, including data descriptions, attributions and provenance information, descriptions of all relevant resources and documentation (inside and outside the repo) with hyperlinks to those resources, and your writeup (if you have not included it in the notebook).
:# 1 LICENSE file that contains an [https://opensource.org/licenses/MIT MIT LICENSE] for your code.
==== Helpful tips ====
* Experiment with queries in the sandbox of the technical documentation for the API to familiarize yourself with the schema and the data
* Explore the data a bit before starting to be sure you understand how it is structured and what it contains
* Ask questions on Slack if you're unsure about anything
* When documenting/describing your project, think: "If I found this GitHub repo, and wanted to fully reproduce the analysis, what information would I want? What information would I need?"
=== A3: Crowdwork ethnography ===
For this assignment, you will go undercover as a member of the Amazon Mechanical Turk community. You will preview or perform Mechanical Turk tasks (called "HITs"), lurk in Turk worker discussion forums, and write an ethnographic account of your experience as a crowdworker, and how this experience changes your understanding of the phenomenon of crowdwork.

The full assignment description is available in PDF form [[:File:HCDS_A4_Crowdwork_ethnography.pdf|here]].
=== A4: Final project plan === | === A4: Final project plan === |