HCDS (Fall 2017)/Assignments

<noinclude>
<div style="font-family:Rockwell,'Courier Bold',Courier,Georgia,'Times New Roman',Times,serif; min-width:10em;">
<div style="float:left; width:100%; margin-right:2%;">
{{Link/Graphic/Main/2
|highlight color= 27666b
|color=460c40
|link=
|image=
|text-align=left
|top font-size= 1.1em
|top color=FFF
|line color=FFF
|top text=This page is a work in progress.
|bottom font-size= 1em
|bottom color= FFF
|bottom text=
|line= none
}}</div></div>
</noinclude>


__FORCETOC__
;Scheduled assignments
* '''A1 - 5 points''' (due Week 4): Data curation (programming/analysis)
* '''A2 - 10 points''' (due Week 6): Sources of bias in data (programming/analysis)
* '''A3 - 10 points''' (due Week 7): Final project plan (written)
* '''A4 - 10 points''' (due Week 9): Crowdwork self-ethnography (written)


=== A2: Bias in data ===
 
The goal of this assignment is to explore the concept of 'bias' through data on Wikipedia articles - specifically, articles on political figures from a variety of countries. For this assignment, you will combine a dataset of Wikipedia articles with a dataset of country populations, and use a machine learning service called ORES to estimate the quality of each article.

You are expected to perform an analysis of how the ''coverage'' of politicians on Wikipedia and the ''quality'' of articles about politicians varies between countries. Your analysis will consist of a series of tables that show:
# the countries with the greatest and least coverage of politicians on Wikipedia compared to their population.
# the countries with the highest and lowest proportion of high quality articles about politicians.

You are also expected to write a short reflection on the project that describes how this assignment helps you understand the causes and consequences of bias on Wikipedia.


==== Getting the article and population data ====


==== Getting article quality predictions ====
Now you need to get the predicted quality scores for each article in the Wikipedia dataset. For this step, we're using a Wikimedia API endpoint for a machine learning system called [https://www.mediawiki.org/wiki/ORES ORES] ("Objective Revision Evaluation Service"). ORES estimates the quality of an article (at a particular point in time), and assigns a series of probabilities that the article is in one of 6 quality categories. The options are, from best to worst:

# FA - Featured article
# GA - Good article
# B - B-class article
# C - C-class article
# Start - Start-class article
# Stub - Stub-class article
When you query the API, you will notice that ORES returns a <tt>prediction</tt> value that contains the name of one category, as well as <tt>probability</tt> values for each of the 6 quality categories. For this assignment, you only need to capture and use the value for <tt>prediction</tt>. We'll talk more about what the other values mean in class next week.
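
If you want to script this step, the sketch below shows one way to query ORES in batches. It is a minimal example, not the required approach: the endpoint URL and <tt>wp10</tt> model name reflect the public ORES v3 API, while the function name and User-Agent string are placeholders you should adapt, and you should verify the details against the current ORES documentation.

<syntaxhighlight lang="python">
import requests

# Minimal sketch of batch-querying ORES for article quality predictions.
ORES_ENDPOINT = "https://ores.wikimedia.org/v3/scores/enwiki/"

def get_quality_predictions(rev_ids):
    """Return {rev_id: predicted quality class} for a batch of revision IDs."""
    params = {
        "models": "wp10",
        "revids": "|".join(str(r) for r in rev_ids),
    }
    headers = {"User-Agent": "HCDS A2 example (your contact info here)"}  # placeholder
    response = requests.get(ORES_ENDPOINT, params=params, headers=headers)
    scores = response.json()["enwiki"]["scores"]
    predictions = {}
    for rev_id, models in scores.items():
        score = models["wp10"].get("score")
        if score is not None:  # unscorable revisions come back with an "error" key instead
            predictions[rev_id] = score["prediction"]
    return predictions
</syntaxhighlight>

Batching several <tt>revids</tt> per request (separated by <tt>|</tt>) is much faster and friendlier to the service than making one request per article.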


==== Combining the datasets ====

Some processing of the data will be necessary! In particular, after retrieving and including the ORES data for each article, you'll need to merge the Wikipedia data and population data together. Both have fields containing country names for just that purpose. After merging the data, you'll invariably run into entries which ''cannot'' be merged: either the population dataset does not have an entry for the equivalent Wikipedia country, or vice versa. You will need to remove the rows that do not have matching data.
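
A minimal sketch of this merge-and-filter step using pandas is below; the file names and the shared <tt>country</tt> column are assumptions, so substitute whatever your own files and columns are actually called.

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical file names; adjust to your datasets.
articles = pd.read_csv("page_data.csv")      # one row per politician article
population = pd.read_csv("population.csv")   # one row per country

# An inner join keeps only rows whose country appears in BOTH datasets,
# which also drops the entries that cannot be matched.
merged = articles.merge(population, on="country", how="inner")
</syntaxhighlight>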


Consolidate the remaining data into a single CSV file which looks something like this:

{| class="wikitable"
|-
! Column
|-
|country
|-
|article_name
|-
|revision_id
|-
|article_quality
|-
|population
|}
Note: <tt>revision_id</tt> here is the same thing as <tt>last_edit</tt>, which you used to get scores from the ORES API.
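
Assuming your merged DataFrame already carries an <tt>article_quality</tt> column from ORES, a short renaming step produces the final file; the source column names (<tt>page</tt>, <tt>last_edit</tt>) and the output file name below are hypothetical.

<syntaxhighlight lang="python">
# "page" and "last_edit" are hypothetical source column names; rename as needed.
final = merged.rename(columns={"page": "article_name", "last_edit": "revision_id"})
final = final[["country", "article_name", "revision_id", "article_quality", "population"]]
final.to_csv("article_quality_by_country.csv", index=False)  # example file name
</syntaxhighlight>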


==== Analysis ====
Your analysis will consist of calculating the proportion (as a percentage) of articles-per-population and high-quality articles for each country. By "high quality" articles, in this case we mean the number of articles about politicians in a given country that ORES predicted would be in either the "FA" (featured article) or "GA" (good article) classes. A sketch of both calculations follows the examples below.
Examples:
* if a country has a population of 10,000 people, and you found 10 articles about politicians from that country, then the percentage of articles-per-population would be 0.1%.
* if a country has 10 articles about politicians, and 2 of them are FA or GA class articles, then the percentage of high-quality articles would be 20%.
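
One way to compute both percentages per country with pandas, assuming the <tt>final</tt> DataFrame from the combining step above:

<syntaxhighlight lang="python">
# Per-country aggregates: article count, population, and count of FA/GA articles.
by_country = final.groupby("country").agg(
    articles=("article_name", "count"),
    population=("population", "first"),
    high_quality=("article_quality", lambda q: q.isin(["FA", "GA"]).sum()),
)
by_country["articles_per_population_pct"] = (
    100 * by_country["articles"] / by_country["population"]
)
by_country["high_quality_pct"] = (
    100 * by_country["high_quality"] / by_country["articles"]
)
</syntaxhighlight>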


==== Tables ====
The tables should be pretty straightforward. Produce four tables that show:
#10 highest-ranked countries in terms of number of politician articles as a proportion of country population
#10 lowest-ranked countries in terms of number of politician articles as a proportion of country population
#10 highest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country
#10 lowest-ranked countries in terms of number of GA and FA-quality articles as a proportion of all articles about politicians from that country


Embed them in the iPython notebook.
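
For example, the first table can be produced directly from the per-country frame sketched above:

<syntaxhighlight lang="python">
# Table 1: top 10 countries by politician articles per population.
top_coverage = by_country.sort_values(
    "articles_per_population_pct", ascending=False
).head(10)
top_coverage  # displaying the DataFrame renders the table in the notebook
</syntaxhighlight>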


==== Writeup ====
Write a few paragraphs, either in the README or in the notebook, reflecting on what you have learned, what you found, what (if anything) surprised you about your findings, and/or what theories you have about why any biases might exist (if you find they exist). You can also include any questions this assignment raised for you about bias, Wikipedia, or machine learning.


==== Submission instructions ====
#Create the data-512-a2 repository on GitHub w/ your code and data.
#Complete and add your README and LICENSE file.
#Submit the link to your GitHub repo to: https://canvas.uw.edu/courses/1174178/assignments/3876068


==== Required deliverables ====
A directory in your GitHub repository called <tt>data-512-a2</tt> that contains the following files:
:# 1 final data file in CSV format that follows the formatting conventions.
:# 1 Jupyter notebook named <tt>hcds-a2-bias</tt> that contains all code as well as information necessary to understand each programming step, as well as your writeup (if you have not included it in the README) and the tables.
:# 1 README file in .txt or .md format that contains information to reproduce the analysis, including data descriptions, attributions and provenance information, and descriptions of all relevant resources and documentation (inside and outside the repo) and hyperlinks to those resources, and your writeup (if you have not included it in the notebook).
:# 1 LICENSE file that contains an [https://opensource.org/licenses/MIT MIT LICENSE] for your code.


==== Helpful tips ====


=== A3: Final project plan ===
''For examples of datasets you may want to use for your final project, see [[HCDS_(Fall_2017)/Datasets]].''

For this assignment, you will write up a study plan for your final class project. The plan will cover a variety of details about your final project, including what data you will use, what you will do with the data (e.g. statistical analysis, train a model), what results you expect or intend, and most importantly, why your project is interesting or important (and to whom, besides yourself).


=== A4: Crowdwork self-ethnography ===
For this assignment, you will go undercover as a member of the Amazon Mechanical Turk community. You will preview or perform Mechanical Turk tasks (called "HITs"), lurk in Turk worker discussion forums, and write an ethnographic account of your experience as a crowdworker, and how this experience changes your understanding of the phenomenon of crowdwork.

The full assignment description is available in PDF form [[:File:HCDS_A4_Crowdwork_ethnography.pdf|here]].


=== A5: Final project presentation ===