Human Centered Data Science (Fall 2019)/Schedule

* Borkan, D., Dixon, L., Sorensen, J., Thain, N., & Vasserman, L. (2019). Nuanced Metrics for Measuring Unintended Bias with Real Data for Text Classification. Companion Proceedings of The 2019 World Wide Web Conference (WWW '19), 491–500. https://doi.org/10.1145/3308560.3317593
* Zhang, J., Chang, J., Danescu-Niculescu-Mizil, C., Dixon, L., Hua, Y., Taraborelli, D., & Thain, N. (2018). Conversations Gone Awry: Detecting Early Signs of Conversational Failure. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), 1350–1361. https://doi.org/10.18653/v1/p18-1125
* Miriam Redi, Besnik Fetahu, Jonathan T. Morgan, and Dario Taraborelli. 2019. ''[https://arxiv.org/pdf/1902.11116.pdf Citation Needed: A Taxonomy and Algorithmic Assessment of Wikipedia’s Verifiability].'' The Web Conference.
*[https://www.perspectiveapi.com/#/ Google's Perspective API] (see the example request sketch below)
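A minimal, unofficial sketch of calling the Perspective API is included below for convenience. It assumes you have requested an API key from Google (the <code>API_KEY</code> value here is a placeholder) and that the <code>v1alpha1</code> <code>comments:analyze</code> endpoint and payload shape still match the public documentation; treat it as an illustration rather than a supported client.

<syntaxhighlight lang="python">
# Sketch: request a TOXICITY score for one comment from the Perspective API.
# Assumes the `requests` package is installed and that API_KEY holds a real key;
# attribute names and the response shape follow the public v1alpha1 docs and may change.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder -- request a key from Google first
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

payload = {
    "comment": {"text": "Thanks for fixing the citation, that was helpful."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload)
response.raise_for_status()
result = response.json()

# summaryScore.value is a probability-like score in [0, 1]; higher means the model
# considers the comment more likely to be perceived as toxic.
print(result["attributeScores"]["TOXICITY"]["summaryScore"]["value"])
</syntaxhighlight>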


* Shahriari, K., & Shahriari, M. (2017). ''[https://ethicsinaction.ieee.org/ IEEE standard review - Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems].'' Institute of Electrical and Electronics Engineers
* ACM U.S. Public Policy Council. ''[https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf Statement on Algorithmic Transparency and Accountability].'' January 2017.
* ''[https://futureoflife.org/ai-principles/ Asilomar AI Principles].'' Future of Life Institute, 2017.
* Diakopoulos, N., Friedler, S., Arenas, M., Barocas, S., Hay, M., Howe, B., … Zevenbergen, B. (2018). ''[http://www.fatml.org/resources/principles-for-accountable-algorithms Principles for Accountable Algorithms and a Social Impact Statement for Algorithms].'' FAT/ML, 2018.
* Jess Holbrook. ''[https://medium.com/google-design/human-centered-machine-learning-a770d10562cd Human Centered Machine Learning].'' Google Design Blog. 2017.
*Fabien Girardin. ''[https://medium.com/@girardin/experience-design-in-the-machine-learning-era-e16c87f4f2e2 Experience design in the machine learning era].'' Medium, 2016.
* Xavier Amatriain and Justin Basilico. ''[https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429 Netflix Recommendations: Beyond the 5 stars].'' Netflix Tech Blog, 2012.
* Bart P. Knijnenburg, Martijn C. Willemsen, Zeno Gantner, Hakan Soncu, and Chris Newell. 2012. ''[https://pure.tue.nl/ws/files/3484177/724656348730405.pdf Explaining the user experience of recommender systems].'' User Modeling and User-Adapted Interaction 22, 4-5 (October 2012), 441-504. https://doi.org/10.1007/s11257-011-9118-4
* Patrick Austin, ''[https://gizmodo.com/facebook-google-and-microsoft-use-design-to-trick-you-1827168534 Facebook, Google, and Microsoft Use Design to Trick You Into Handing Over Your Data, New Report Warns].'' Gizmodo, 6/18/2018
* Cremonesi, P., Elahi, M., & Garzotto, F. (2017). ''[https://core.ac.uk/download/pdf/74313597.pdf User interface patterns in recommendation-empowered content intensive multimedia applications].'' Multimedia Tools and Applications, 76(4), 5275-5309.
* Morgan, J. 2016. ''[https://meta.wikimedia.org/wiki/Research:Evaluating_RelatedArticles_recommendations Evaluating Related Articles recommendations]''. Wikimedia Research.
* Morgan, J. 2017. ''[https://meta.wikimedia.org/wiki/Research:Comparing_most_read_and_trending_edits_for_Top_Articles_feature Comparing most read and trending edits for the top articles feature]''. Wikimedia Research.
*Michael D. Ekstrand, F. Maxwell Harper, Martijn C. Willemsen, and Joseph A. Konstan. 2014. ''[https://md.ekstrandom.net/research/pubs/listcmp/listcmp.pdf User perception of differences in recommender algorithms].'' In Proceedings of the 8th ACM Conference on Recommender systems (RecSys '14).
* Michael D. Ekstrand and Martijn C. Willemsen. 2016. ''[https://md.ekstrandom.net/research/pubs/behaviorism/BehaviorismIsNotEnough.pdf Behaviorism is Not Enough: Better Recommendations through Listening to Users].'' In Proceedings of the 10th ACM Conference on Recommender Systems (RecSys '16).
<br/>
<hr/>


;Agenda
* Filling out course evaluation
<!--
* Week 8 in-class activity report out
* Reading reflections discussion
* End of quarter logistics
* Feedback on Final Project Plans
* Final project presentations and reports
* UI patterns & UX considerations for ML/data-driven applications
* Guest lecture: Rich Caruana, Microsoft Research
* Final project presentation: what to expect
* In-class activity (InterpretML): Harsha Nori, Microsoft
* In-class activity: final project peer review
 
-->


;Homework assigned
* Read and reflect: Alkhatib, A., & Bernstein, M. (2019). ''[https://hci.stanford.edu/publications/2019/streetlevelalgorithms/streetlevelalgorithms-chi2019.pdf Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions]''. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300760
* Read and reflect: Passi, S., & Jackson, S. J. (2018). ''[https://dl.acm.org/citation.cfm?doid=3290265.3274405 Trust in Data Science: Collaboration, Translation, and Accountability in Corporate Data Science Projects].'' Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–28. https://doi.org/10.1145/3274405 ([https://sjackson.infosci.cornell.edu/Passi&Jackson_TrustinDataScience(CSCW2018).pdf ACCESS PDF HERE])
* [[Human_Centered_Data_Science_(Fall_2019)/Assignments#A7:_Final_project_report|A7: Final project report]]


;Resources
<!--
* Rich Caruana, Harsha Nori, Samuel Jenkins, Paul Koch, Ester de Nicolas. 2019. ''InterpretML software toolkit'' ([https://github.com/interpretml/interpret github repo], [https://www.microsoft.com/en-us/research/blog/creating-ai-glass-boxes-open-sourcing-a-library-to-enable-intelligibility-in-machine-learning/ blog post])
* Daniela Aiello, Lisa Bates, et al. [https://shelterforce.org/2018/08/22/eviction-lab-misses-the-mark/ Eviction Lab Misses the Mark], ShelterForce, August 2018. 
* Partnership on AI, 2019 ''[https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/ Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System].''  
-->
*Ethical OS ''[https://ethicalos.org/wp-content/uploads/2018/08/Ethical-OS-Toolkit-2.pdf Toolkit]'' and ''[https://ethicalos.org/wp-content/uploads/2018/08/EthicalOS_Check-List_080618.pdf Risk Mitigation Checklist]''. EthicalOS.org.
* Morgan, J. T., 2019. ''[https://figshare.com/articles/Ethical_Human_Centered_AI/8044553 Ethical and Human-centered AI at Wikimedia]''. Wikimedia Research 2030.




This page is a work in progress.


Week 1: September 26

Introduction to Human Centered Data Science
What is data science? What is human centered? What is human centered data science?
Assignments due
Agenda
  • Syllabus review
  • Pre-course survey results
  • What do we mean by data science?
  • What do we mean by human centered?
  • How does human centered design relate to data science?
  • In-class activity
  • Intro to assignment 1: Data Curation
Homework assigned
  • Read and reflect on both:
Resources




Week 2: October 3

Reproducibility and Accountability
data curation, preservation, documentation, and archiving; best practices for open scientific research
Assignments due
  • Week 1 reading reflection
  • A1: Data curation
Agenda
  • Reading reflection discussion
  • Assignment 1 review & reflection
  • A primer on copyright, licensing, and hosting for code and data
  • Introduction to replicability, reproducibility, and open research
  • In-class activity
  • Intro to assignment 2: Bias in data
Homework assigned
Resources




Week 3: October 10

Interrogating datasets
causes and consequences of bias in data; best practices for selecting, describing, and implementing training data
Assignments due
  • Week 2 reading reflection
Agenda
  • Reading reflection review
  • Sources and consequences of bias in data collection, processing, and re-use
  • In-class activity
Homework assigned
  • Read both, reflect on one:
Resources




Week 4: October 17

Introduction to qualitative and mixed-methods research
Big data vs thick data; integrating qualitative research methods into data science practice; crowdsourcing
Assignments due
  • Reading reflection
  • A2: Bias in data
Agenda
  • Reading reflection review
  • Overview of qualitative research
  • Introduction to ethnography
  • In-class activity: explaining art to aliens
  • Mixed methods research and data science
  • An introduction to crowdwork
  • Overview of assignment 3: Crowdwork ethnography
Homework assigned
Resources





Week 5: October 24

Research ethics for big data
privacy, informed consent and user treatment
Assignments due
  • Reading reflection
Agenda
  • Reading reflection review
  • Guest lecture
  • A2 retrospective
  • Final project deliverables and timeline
  • A brief history of research ethics in the United States


Homework assigned
  • Read and reflect: Gray, M. L., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books. (PDF available on Canvas)
Resources




Week 6: October 31

Data science and society
power, data, and society; ethics of crowdwork
Assignments due
  • Reading reflection
  • A3: Crowdwork ethnography
Agenda
  • Reading reflections
  • Assignment 3 review
  • Guest lecture: Stefania Druga
  • In-class activity
  • Introduction to assignment 4: Final project proposal
Homework assigned
  • Read both, reflect on one:
Resources




Week 7: November 7

Human centered machine learning
algorithmic fairness, transparency, and accountability; methods and contexts for algorithmic audits
Assignments due
  • Reading reflection
  • A4: Project proposal
Agenda
  • Reading reflection review
  • Algorithmic transparency, interpretability, and accountability
  • Auditing algorithms
  • In-class activity
  • Introduction to assignment 5: Final project plan
Homework assigned
Resources




Week 8: November 14

User experience and data science
algorithmic interpretability; human-centered methods for designing and evaluating algorithmic systems
Assignments due
  • Reading reflection
  • A5: Final project plan
Agenda
  • coming soon
Homework assigned
Resources




Week 9: November 21

Data science in context
Doing human centered data science in product organizations; communicating and collaborating across roles and disciplines; HCDS industry trends and trajectories
Assignments due
  • Reading reflection
Agenda
  • Filling out course evaluation
  • Week 8 in-class activity report out
  • End of quarter logistics
  • Final project presentations and reports
  • Guest lecture: Rich Caruana, Microsoft Research
  • In-class activity (InterpretML): Harsha Nori, Microsoft (see the sketch below)
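For anyone who wants to try InterpretML before the in-class activity, here is a small illustrative sketch of its glassbox workflow: train an Explainable Boosting Machine and view its global explanation. The synthetic dataset and parameter choices are assumptions for demonstration only, not course materials; see the project's GitHub repository for the supported examples.

<syntaxhighlight lang="python">
# Illustrative sketch of the InterpretML "glassbox" workflow.
# Assumes the `interpret` and `scikit-learn` packages are installed;
# the synthetic data below is a stand-in, not a course dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Toy binary-classification data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an Explainable Boosting Machine, an interpretable glassbox model.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global explanation: how each feature contributes to predictions overall.
# show() renders an interactive dashboard when run in a Jupyter notebook.
show(ebm.explain_global())
</syntaxhighlight>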


Homework assigned
Resources




Week 10: November 28 (No Class Session)

Assignments due
  • Reading reflection
Homework assigned
Resources




Week 11: December 5

Final presentations
presentation of student projects; course wrap-up
Assignments due
  • Reading reflection
  • A6: Final presentation
Readings assigned
  • NONE
Homework assigned
  • NONE
Resources
  • NONE




Week 12: Finals Week (No Class Session)

  • NO CLASS
  • A7: FINAL PROJECT REPORT DUE BY 5:00PM on Tuesday, December 10 via Canvas
  • LATE PROJECT SUBMISSIONS NOT ACCEPTED.