Designing Internet Research (Spring 2022)


 * Designing Internet Research
 * COM528 - Department of Communication, University of Washington
 * Instructor: Benjamin Mako Hill (University of Washington)
 * Course Websites:
 * We will use Canvas for announcements and turning in assignments
 * We will use a class Discord server for real time communication and for posting and responding to daily reflections on the readings
 * Everything else will be linked on this page.
 * Course Catalog Description:


 * Focuses on designing Internet research, assessing the adaptation of proven methods to Internet tools and environments, and developing new methods in view of particular capacities and characteristics of Internet applications. Legal and ethical aspects of Internet research receive ongoing consideration.

Overview and Learning Objectives
What new lines of inquiry and approaches to social research are made possible and necessary by the Internet? In what ways have established research methods been affected by the Internet? How does the Internet challenge established methods of social research? How are researchers responding to these challenges?

These are some of the key questions we will explore in this course. The course will focus on assessing the incorporation of Internet tools in established and emergent methods of social research, the adaptation of social research methods to study online phenomena, and the development of new methods and tools that correspond with the particular capacities and characteristics of the Internet. The readings will include both descriptions of Internet-related research methods, with an eye to introducing skills, and examples of studies that use them. The legal and ethical aspects of Internet research will receive ongoing consideration throughout the course. The purpose of this course is to help prepare students to design high-quality research projects that use the Internet to study online communicative, social, cultural, and political phenomena.

I will consider the course a complete success if every student is able to do all of these things at the end of the quarter:


 * Discuss and compare distinct types of Internet research including: web archiving; textual analysis; ethnography; interviews; network analyses of social and hyperlink networks; analysis of digital trace data; traditional, natural, and field experiments; design research; survey research; and narrative and visual analyses.
 * Describe particular challenges and threats to research validity associated with each method.
 * For at least one method, be able to provide a detailed description of a research project and feel comfortable embarking on a formative study using this methodology.
 * Given a manuscript (e.g., in the context of a request for peer review), be able to evaluate an Internet-based study in terms of its methodological choices.

Note About This Syllabus
You should expect this syllabus to be a dynamic document. Although the core expectations for this class are fixed, the details of readings and assignments will shift based on how the class goes, guest speakers that I arrange, my own readings in this area, etc. As a result, there are three important things to keep in mind:


 * Although details on this syllabus will change, I will try to ensure that I never change readings fewer than six days before they are due. I will send an announcement no later than each Wednesday evening that fixes the schedule for the following week. This means that if I don't fill in a reading marked "" six days before it's due, it is dropped. If we don't change something marked "" before the deadline, then it is assigned. This also means that if you plan to read more than six days ahead, contact the teaching team first.
 * Because this syllabus is a wiki, you will be able to track every change I make by clicking the history button on this page. I will summarize these changes in a weekly announcement on Canvas that will be emailed to everybody in the class. Closely monitor your email or the announcements section of the course website on Canvas to make sure you don't miss these!
 * I will ask the class for voluntary anonymous feedback frequently — especially toward the beginning of the quarter. Please let me know what is working and what can be improved. In the past, I have made many adjustments to courses that I teach while the quarter progressed based on this feedback.

Books
This class has no textbook and I am not requiring you to buy any books for this class. That said, several required readings and many suggested readings will come from several excellent books which you might want to consider adding to your library:


 * Burgess, Jean, Alice Marwick, and Thomas Poell, eds. 2018. The SAGE Handbook of Social Media. London, UK: SAGE. [Available through UW libraries]
 * Foucault Welles, Brooke, and Sandra González-Bailón, eds. 2018. The Oxford Handbook of Networked Communication. London, UK: Oxford University Press. [Available through UW libraries]
 * Hargittai, Eszter, and Christian Sandvig, eds. 2015. Digital Research Confidential: The Secrets of Studying Behavior Online. MIT Press. [Available through UW libraries]
 * Hesse-Biber, Sharlene Nagy, ed. 2011. The Handbook of Emergent Technologies in Social Research. Oxford, UK: Oxford University Press.
 * Hewson, Claire, Carl Vogel, and Dianna Laurent. 2016. Internet Research Methods. London, UK: SAGE. https://doi.org/10.4135/9781473920804. [Available through UW libraries]
 * Snee, Helene, Christine Hine, Yvette Morey, Steven Roberts, and Hayley Watson, eds. 2016. Digital Methods for Social Science: An Interdisciplinary Guide to Research Innovation. New York, NY: Palgrave-Macmillan. [Available through UW libraries]

Some Reflections on Technical Skills
This course will focus on teaching conceptual skills related to Internet research. These include the "softer" skills of understanding, designing, and critiquing research plans. These are harder to teach, evaluate, and learn than "harder" technical skills like programming, web scraping, and so on. But they are ultimately what will make a research project interesting, useful, or valid.

That said, I also believe that any skilled Internet researcher must be comfortable writing code to collect a dataset from the web or, at the very least, should have enough experience doing so that they know what is involved and what is possible and impossible. This is essential even if your only goal is to manage somebody else writing code and gathering data or to work productively with a collaborator who is doing so.

Because students are going to come to the class with different technical skillsets, I will not be devoting time in this class to developing technical skills. That said, I strongly believe that a well-rounded Internet researcher will have these skills as well. Although being successful in this class will not require technical skills, being a successful Internet researcher will.

For example, I think most Internet researchers should have at least:


 * 1) Basic skills in a general purpose high-level programming language used for Internet-based data collection and analysis. I strongly recommend the Python programming language although other programming languages like Ruby and Perl are also good choices. Generally speaking, statistical programming languages like R, Stata, and Matlab are not well suited for this. However, if you happen to know a statistical programming language, learning a language like Python will be much easier!
 * 2) Familiarity with the technologies of web APIs. In particular, students should understand what APIs are, how they work, and should be able to read, interpret, and process data in JSON.
 * 3) Knowledge of how to process and move data from a website or API into a format that they will be able to use for analysis. The final format will depend on the nature of the project but this might be a statistical programming environment like R, Stata, SAS, SPSS, etc., or a qualitative data analysis tool like ATLAS.ti, NVivo, Dedoose, Taguette, or similar.

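As a concrete illustration of items 2 and 3, here is a minimal Python sketch that parses a JSON payload of the kind a web API might return and writes it out as CSV, a format that R, Stata, or a qualitative coding tool can import. The field names and values are invented for the example:

```python
import csv
import io
import json

# A hypothetical JSON payload of the kind a web API might return.
raw = """
[
  {"user": "mako", "posts": 42, "joined": "2008-01-15"},
  {"user": "jean", "posts": 17, "joined": "2015-06-02"}
]
"""

# Parse the JSON text into Python lists and dictionaries.
records = json.loads(raw)

# Write the records out as CSV; in a real project you would
# write to a file on disk rather than an in-memory buffer.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["user", "posts", "joined"])
writer.writeheader()
writer.writerows(records)

print(buffer.getvalue())
```

In a real project the `raw` string would come from an HTTP request to an API endpoint, but the parse-then-export pattern is the same.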
If you are already comfortable doing these things, great. If you are not, I'd love to work with you to help you make a plan for building these skills. To be clear: It's not part of the class and it's not part of how you will be evaluated. But it's something that I want to help you all to have.

Here are some options for building these technical skills:


 * I can help point you to online resources like MOOCs, online tutorials, and so on that are useful for building these skills. The details will probably vary based on what you already know.
 * I have plans to teach a class (likely in the Spring quarter of 2023) that will be a sort of companion class to this one and which will introduce Python and the skills above. I'd love to have any of you join me!
 * I also regularly organize free workshops called the Community Data Science Workshops that teach exactly these skills. Although I've historically tried to time these workshops with this class, the ongoing pandemic has meant that it isn't in the cards this quarter. I do hope to teach them again at some point in the near future, though, and I am happy to put you all on the announcement list.

Assignments
The assignments in this class are designed to give you an opportunity to try your hand at using the conceptual material taught in the class. There will be no exams or quizzes. Unless otherwise noted, all assignments are due at the end of the day (i.e., 11:59pm) on the day they are due.

Reflections

 * Deliverables: (1) Post a message in the #reading-reflections channel on the course Discord server; (2) Respond to at least one of your classmates before class.
 * Due Date: (1) the day before class at 6pm (on any day with reading); (2) the day of class by 10:30am (on a day with reading)
 * Maximum length: 500 words

For every day that we have readings (i.e., every day except for the consulting weeks and the final presentation week), I'm asking everybody to reflect on the readings by the day before class and to share their reflections with everybody else.

Reflections should be no more than 500 words (equivalent to about half a single-spaced page). So that everyone will have a chance to read the reflections before class, they should be posted to the #reading-reflections channel on the course Discord server by 6pm the day before class (i.e., on Sundays and Tuesdays) so that we can all read, think, and respond. Please also pose one or two open-ended discussion questions that may serve as jumping-off points for our in-class conversation. Don't bother with summarizing (we've all done the reading, after all) and focus on engaging with ideas.

In terms of content, reflections offer you an opportunity to engage the readings by identifying common or conflicting premises, thinking through potential implications, offering political or cultural examples, posing well-supported objections, or outlining critical extensions. In my experience, the most thought-provoking reflections go beyond pointing out things that one wonders about or finds interesting and explain why they are interesting. Turn in your reflection by posting a new message under the appropriate day in the #reading-reflections channel.

I'd also like everybody to read over everybody else's reflections and respond to at least one person. Evening things out so that not everybody responds to the same person would be nice, but use your judgment.

Research Project
As a demonstration of your learning in this course, you will design a plan for an Internet research project and will, if possible, also collect (at least) an initial sample of a dataset that you will use to complete the project.

The paper you produce can take one of the following three forms:


 * 1) A draft of a manuscript for submission to a conference or journal.
 * 2) A proposal for funding (e.g., for submission to the NSF for a graduate student fellowship).
 * 3) A draft of the methods chapter of your dissertation.

Whichever of the three paths you take, I expect you to use this opportunity to produce a document that will further your academic career outside of the class. If none of these approaches work for you, I'm willing to discuss other possible deliverables.

Project Identification

 * Due Date: April 15
 * Maximum paper length: 800 words (~3 pages)
 * Deliverables: Turn in to the appropriate Canvas dropbox

Early on, I want you to identify your final project. Your proposal should be short and can be either paragraphs or bullets. It should include the following things:


 * The genre of the project and a short description of how it fits into your career trajectory.
 * A one paragraph abstract of the proposed study and research question, theory, community, and/or groups you plan to study.
 * A short description of the type of data you plan to collect as part of your final project.

Final Project

 * Paper Due Date: June 10
 * Maximum final paper length: 8000 words (~27 pages)
 * All Deliverables: Turn in to the appropriate Canvas dropboxes

Because the emphasis in this class is on methods and because I'm not an expert in each of your areas or fields, I'm happy to assume that your paper, proposal, or thesis has already established the relevance and significance of your study and has a comprehensive literature review, a well-grounded conceptual approach, and a compelling reason why this research is important. Instead of providing all of these details, feel free to start with a brief summary of the purpose and importance of this research and an introduction of your research questions or hypotheses. If you provide more detail, that's fine, but I won't give you detailed feedback on those parts.

Whatever you choose to turn in for your final project should include:


 * a statement of the purpose, central focus, relevance and significance of your project;
 * a description of the specific Internet application(s) and/or environment(s) and/or objects to be studied and employed in the research;
 * key research questions or hypotheses;
 * operationalization of key concepts;
 * a description and rationale of the specific method(s), (if more than one method will be used, explain how the methods will produce complementary findings);
 * a description of the step-by-step plan for data collection;
 * description and rationale of the level(s), unit(s) and process of analysis (if more than one kind of data are generated, explain how each kind will be analyzed individually and/or comparatively);
 * an explanation of how these analyses will enable you to answer the research questions;
 * a sample instrument (as appropriate);
 * a sample dataset and description of a formative analysis you have completed;
 * a description of actual or anticipated results and any potential problems with their interpretation;
 * a plan for publishing/disseminating the findings from this research;
 * a summary of technical, ethical, human subjects and legal issues that may be encountered in this research, and how you will address them;
 * a schedule (using specific dates) and proposed budget, if applicable.

I also expect each student to begin data collection for your project (i.e., using the technical skills you learn in the class) and to describe your progress in this regard in your paper. If collecting data for a proposed project is impractical (e.g., because of IRB applications, funding, etc.), let's talk. I would love for you to engage in the collection of a public dataset as part of a pilot or formative study. If this is not feasible or useful, we can discuss other options.

I have a preference for you to write this paper individually but I'm open to the idea that you may want to work with others in the class.

Outline / Draft

 * Due Date: May 20
 * Presentation Date: June 1
 * All Deliverables: Turn in to the appropriate Canvas dropbox

I want you all to turn in an outline or draft 2-3 weeks before the final project so that we can discuss it in our final set of one-on-one consulting meetings. Although the specific format will vary based on the nature of your project and your progress on it, it should demonstrate major progress on your final deliverables for the class and provide an answer—in outline form—to every applicable item on the list in the section above.

If you're looking for an outline format that is useful for writing papers, I typically use what my group calls Matsuzaki outlines (which are described in detail on our wiki). The Matsuzaki outline is particularly well suited to quantitative social scientific work and probably less so for other kinds. That said, folks have used it successfully for a range of projects.

If you're looking for information on how to organize a quantitative academic paper in the social sciences, check out my page on the structure of a quantitative empirical research paper.

Participation
The course relies heavily on participation and discussion. It is important to realize that we will not summarize the readings in class and I will not cover them in lecture. I expect you all to have read them, and we will jump in and start discussing. The "Participation Rubric" section of my detailed page on assessment gives the rubric I will use in evaluating participation.

Assessment
I have put together a very detailed page that describes the way I approach assessment and grading—both in general and in this course. Please read it carefully. I will assign grades for each of the following items on the UW 4.0 grade scale according to the weights below:


 * Participation: 30%
 * Reflections: 15%
 * Project identification: 5%
 * Final paper outline: 5%
 * Final Presentation: 10%
 * Final Paper: 35%

Monday March 28: Introduction
Resources:


 * Class video recording [Available through Canvas] — It's mostly just me walking through the syllabus and doesn't include the introductions and such

Required Readings:


 * Agre, Phil. 2004. “Internet Research: For and Against.” In Internet Research Annual: Selected Papers from the Association of Internet Researchers Conferences 2000-2002, edited by Mia Consalvo, Nancy Baym, Jeremy Hunsinger, Klaus Bruhn Jensen, John Logie, Monica Murero, and Leslie Regan Shade. Vol. 1. New York: Peter Lang. http://polaris.gseis.ucla.edu/pagre/research.html.
 * Lazer, David, Alex Pentland, Lada Adamic, Sinan Aral, Albert-Laszlo Barabasi, Devon Brewer, Nicholas Christakis, et al. 2009. “Computational Social Science.” Science 323 (5915): 721–23. https://doi.org/10.1126/science.1167742.
 * Sandvig, Christian, and Eszter Hargittai. 2015. “How to Think about Digital Research.” In Digital Research Confidential: The Secrets of Studying Behavior Online, edited by Eszter Hargittai and Christian Sandvig, 1–28. Cambridge, MA: MIT Press.

Optional Reading:


 * December, John. 1996. “Units of Analysis for Internet Communication.” Journal of Computer-Mediated Communication 1 (4): 0–0. https://doi.org/10.1111/j.1083-6101.1996.tb00173.x.

Wednesday March 30: Ethics
Required Readings:


 * franzke, aline shakti, Anja Bechmann, Michael Zimmer, and Charles M. Ess. 2020. “Internet Research: Ethical Guidelines 3.0.” Association of Internet Researchers. https://aoir.org/reports/ethics3.pdf.

To frame a conversation around research ethics, let's read this piece:


 * Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences 111 (24): 8788–90. https://doi.org/10.1073/pnas.1320040111.

And these pieces that are all vaguely in response to it:


 * Carr, Nicholas. 2014. “The Manipulators: Facebook’s Social Engineering Project.” The Los Angeles Review of Books, September 14, 2014. http://lareviewofbooks.org/essay/manipulators-facebooks-social-engineering-project/.
 * [Skim page to get a sense of the backlash] Grimmelmann, James. 2014. “The Facebook Emotional Manipulation Study: Sources.” The Laboratorium (blog). June 30, 2014. http://laboratorium.net/archive/2014/06/30/the_facebook_emotional_manipulation_study_source.
 * Bernstein, Michael. 2014. “The Destructive Silence of Social Computing Researchers.” Medium (blog). July 7, 2014. https://medium.com/@msbernst/the-destructive-silence-of-social-computing-researchers-9155cdff659.
 * Lampe, Clifford. 2014. “Facebook Is Good for Science.” The Chronicle of Higher Education Blogs: The Conversation (blog). July 8, 2014. http://chronicle.com/blogs/conversation/2014/07/08/facebook-is-good-for-science/.
 * Hancock, Jeffrey T. 2018. “The Ethics of Digital Research.” The Oxford Handbook of Networked Communication, April. https://doi.org/10.1093/oxfordhb/9780190460518.013.25.

Optional Readings:


 * Department of Health, Education, and Welfare, and National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. 2014. “The Belmont Report. Ethical Principles and Guidelines for the Protection of Human Subjects of Research.” http://www.hhs.gov/ohrp/policy/belmont.html.
 * Frankel, Mark S., and Sanyin Siang. 1999. “Ethical and Legal Aspects of Human Subject Research on the Internet.” Workshop Report. Washington, DC: American Association for the Advancement of Science.

Monday April 4: Internet Data Collection
Required Readings:


 * Mislove, Alan, and Christo Wilson. 2018. “A Practitioner’s Guide to Ethical Web Data Collection.” In The Oxford Handbook of Networked Communication, edited by Brooke Foucault Welles and Sandra González-Bailón. London, UK: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190460518.001.0001.
 * Brügger, Niels. 2018. “Web History and Social Media.” In The SAGE Handbook of Social Media, edited by Jean Burgess, Alice Marwick, and Thomas Poell, 196–212. London, UK: SAGE Publications Ltd. https://doi.org/10.4135/9781473984066.
 * Shumate, Michelle, and Matthew S. Weber. 2015. “The Art of Web Crawling for Social Science Research.” In Digital Research Confidential: The Secrets of Studying Behavior Online, edited by Eszter Hargittai and Christian Sandvig, 234–59. Cambridge, MA: The MIT Press.
 * Freelon, Deen. 2018. “Computational Research in the Post-API Age.” Political Communication 35 (4): 665–68. https://doi.org/10.1080/10584609.2018.1477506.
 * [Example] Graeff, Erhardt, Matt Stempeck, and Ethan Zuckerman. 2014. “The Battle for ‘Trayvon Martin’: Mapping a Media Controversy Online and Off-Line.” First Monday 19 (2). http://firstmonday.org/ojs/index.php/fm/article/view/4947.

Optional Readings:


 * Ankerson, Megan Sapnar. 2015. “Read/Write the Digital Archive: Strategies for Historical Web Research.” In Digital Research Confidential: The Secrets of Studying Behavior Online, edited by Eszter Hargittai and Christian Sandvig, 29–54. Cambridge, MA: MIT Press.
 * Spaniol, Marc, Dimitar Denev, Arturas Mazeika, Gerhard Weikum, and Pierre Senellart. 2009. “Data Quality in Web Archiving.” In Proceedings of the 3rd Workshop on Information Credibility on the Web, 19–26. WICOW ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1526993.1526999.
 * Schneider, Steven M., and Kirsten A. Foot. 2004. “The Web as an Object of Study.” New Media & Society 6 (1): 114–22. https://doi.org/10.1177/1461444804039912.
 * Weber, Matthew S. 2014. “Observing the Web by Understanding the Past: Archival Internet Research.” In Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion, 1031–1036. WWW Companion ’14. Republic and Canton of Geneva, Switzerland: International World Wide Web Conferences Steering Committee. https://doi.org/10.1145/2567948.2579213.

Optional readings related to the ethics of data collection online:


 * Amy Bruckman's two 2016 blog posts about researchers violating Terms of Service (TOS) while doing academic research: Do Researchers Need to Abide by Terms of Service (TOS)? An Answer. and More on TOS: Maybe Documenting Intent Is Not So Smart
 * Digital Millennium Copyright Act and these explanatory/commentary essays & sites:
 * The Electronic Frontier Foundation's page on the DMCA.
 * Templeton, Brad's A Brief Intro to Copyright & 10 Big Myths about Copyright Explained
 * Sections on Copyright, Privacy, and Social Media in the “Internet Case Digest” of the Perkins Coie LLP “Case Digest” site.
 * Narayanan, A., and V. Shmatikov. 2008. “Robust De-Anonymization of Large Sparse Datasets.” In IEEE Symposium on Security and Privacy, 2008. SP 2008, 111–25. https://doi.org/10.1109/SP.2008.33.

Two useful resources for data collection:


 * Archive Team is an online community that archives websites. They are a fantastic resource and maintain many pieces of detailed technical documentation on the practice of web archiving. For example, here are detailed explanations of mirroring a website with GNU wget, which is the piece of free software I usually use to archive websites.
 * OpenHumans is an online community where people share personal data with each other and with researchers.
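
To give a sense of what mirroring tools like wget do under the hood, here is a toy Python sketch, using only the standard library, of the link-extraction step at the heart of any web crawler: find every link on a page and resolve it against the page's own URL. The HTML and URLs below are invented for the example; this is an illustration of the idea, not how wget itself is implemented.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links the way a crawler must.
                    self.links.append(urljoin(self.base_url, value))

# An invented page; a real crawler would fetch this over HTTP,
# then repeat the process on each newly discovered link.
page = '<p><a href="/about.html">About</a> <a href="http://example.org/">Off-site</a></p>'
parser = LinkExtractor("http://example.com/index.html")
parser.feed(page)
print(parser.links)
```

A real archiving run adds the pieces wget handles for you: fetching pages politely, tracking which URLs have already been visited, and deciding which links (e.g., off-site ones) to skip.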

Wednesday April 6: Digital & Trace Ethnography
Required Readings:

More traditional ethnographic research in online settings:


 * Hine, Christine. 2017. “Ethnographies of Online Communities and Social Media: Modes, Varieties, Affordances.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2 edition, 401–15. London, UK: SAGE.
 * [Selections] Jemielniak, Dariusz. 2014. Common Knowledge?: An Ethnography of Wikipedia. Stanford, California: Stanford University Press. ["Introduction" and "Appendix A: Methodology"]

Material on "Trace" and "network" ethnographies:


 * Geiger, R. Stuart, and David Ribes. 2011. “Trace Ethnography: Following Coordination Through Documentary Practices.” In Proceedings of the 2011 44th Hawaii International Conference on System Sciences, 1–10. HICSS ’11. Washington, DC, USA: IEEE Computer Society. https://doi.org/10.1109/HICSS.2011.455.
 * Geiger, R. Stuart, and Aaron Halfaker. 2017. “Operationalizing Conflict and Cooperation between Automated Software Agents in Wikipedia: A Replication and Expansion of ‘Even Good Bots Fight.’” Proceedings of the ACM on Human-Computer Interaction 1 (CSCW): 49:1–49:33. https://doi.org/10.1145/3134684.
 * Howard, Philip N. 2002. “Network Ethnography and the Hypermedia Organization: New Media, New Organizations, New Methods.” New Media & Society 4 (4): 550–74. https://doi.org/10.1177/146144402321466813.

Optional Readings:


 * Hine, Christine. 2000. Virtual Ethnography. London, UK: SAGE Publications. [Available from the Instructor]
 * This is the canonical book-length account and the main citation in this space.


 * Coleman, E. Gabriella. 2010. “Ethnographic Approaches to Digital Media.” Annual Review of Anthropology 39 (1): 487–505. https://doi.org/10.1146/annurev.anthro.012809.104945.
 * Response by danah boyd to Hine's "Question One: How Can Qualitative Internet Researchers Define the Boundaries of Their Projects?" from Internet Inquiry: Conversations About Method, Annette Markham and Nancy Baym (Eds.), SAGE, 2009, pp. 1-32.
 * Note: You may also be interested in reading the essay by Hine that boyd is responding to.


 * Hjorth, Larissa, Heather Horst, Anne Galloway, and Genevieve Bell, eds. 2016. The Routledge Companion to Digital Ethnography. New York, NY: Routledge. [Available from the instructor]
 * Sinanan, Jolynna, and Tom McDonald. 2018. “Ethnography.” In The SAGE Handbook of Social Media, 179–95. London, UK: SAGE Publications Ltd. https://doi.org/10.4135/9781473984066.
 * Maxwell, Joseph A. 2002. “Understanding and Validity in Qualitative Research.” In The Qualitative Researcher’s Companion, edited by A. M. Huberman and Matthew B. Miles, 37–64. London, UK: SAGE.
 * Champion, Kaylea, Nora McDonald, Stephanie Bankes, Joseph Zhang, Rachel Greenstadt, Andrea Forte, and Benjamin Mako Hill. 2019. “A Forensic Qualitative Analysis of Contributions to Wikipedia from Anonymity Seeking Users.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 53:1–53:26. https://doi.org/10.1145/3359155.

These are all other interesting and/or frequently cited examples of Internet-based ethnographies:


 * Geiger, R. Stuart, and David Ribes. 2010. “The Work of Sustaining Order in Wikipedia: The Banning of a Vandal.” In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, 117–126. CSCW ’10. New York, NY, USA: ACM. https://doi.org/10.1145/1718918.1718941. — A trace ethnography and sort of the companion paper/substantive paper for the methods piece included in the required readings above.
 * Brotsky, Sarah R., and David Giles. 2007. “Inside the ‘Pro-Ana’ Community: A Covert Online Participant Observation.” Eating Disorders 15 (2): 93–109. https://doi.org/10.1080/10640260701190600.
 * Note: To conduct the study reported in this paper, the authors created and used a fake profile in order to observe the psychological support offered to participants.


 * Williams, Matthew. 2007. “Avatar Watching: Participant Observation in Graphical Online Environments.” Qualitative Research 7 (1): 5–24. https://doi.org/10.1177/1468794107071408.
 * Note: A fantastic general introduction, but with takeaways that are more specifically targeted toward people studying virtual-reality-type environments with virtual physicality.

Charlie's optional readings (virtual world ethnographies):


 * Bainbridge, William Sims. 2010. The Warcraft Civilization: Social Science in a Virtual World. Cambridge, MA: MIT Press. [https://mitpress.mit.edu/books/warcraft-civilization]
 * Nardi, Bonnie A. 2009. My Life as a Night Elf Priest: An Anthropological Account of World of Warcraft. Ann Arbor, Michigan: University of Michigan.
 * Pearce, Celia, Tom Boellstorff, and Bonnie A. Nardi. 2011. Communities of Play: Emergent Cultures in Multiplayer Games and Virtual Worlds. Cambridge, MA: MIT Press. [https://mitpress.mit.edu/books/communities-play]
 * Boellstorff, Tom, Bonnie Nardi, Celia Pearce, T. L. Taylor, and George E. Marcus. 2012. Ethnography and Virtual Worlds: A Handbook of Method. Princeton: Princeton University Press.

Monday April 11: Online Interviewing
Required Readings:


 * O’Connor, Henrietta, and Clare Madge. 2017. “Internet-Based Interviewing.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2 edition, 416–34. London, UK: SAGE.
 * Abrams, Katie M., and Ted J. Gaiser. 2017. “Online Focus Groups.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2 edition, 435–50. London, UK: SAGE.
 * Hanna, Paul. 2012. “Using Internet Technologies (Such as Skype) as a Research Medium: A Research Note.” Qualitative Research 12 (2): 239–42. https://doi.org/10.1177/1468794111426607.
 * Note: Short article you can basically skim. Read it quickly so you can cite it later.


 * Dowling, Sally. 2012. “Online Asynchronous and Face-to-Face Interviewing: Comparing Methods for Exploring Women’s Experiences of Breastfeeding Long Term.” In Cases in Online Interview Research, edited by Janet Salmons, 277–303. Thousand Oaks, CA: SAGE Publications, Inc. http://srmo.sagepub.com/view/cases-in-online-interview-research/n11.xml.

Optional Readings:


 * boyd, danah. 2015. “Making Sense of Teen Life: Strategies for Capturing Ethnographic Data in a Networked Era.” In Digital Research Confidential: The Secrets of Studying Behavior Online, edited by Eszter Hargittai and Christian Sandvig. Cambridge, Massachusetts: MIT Press.
 * Note: Strongly focused on ethnographic interviews with tons of very specific details. Fantastic article on interviewing, although perhaps a bit weak on Internet-specific advice.


 * Markham, Annette N. 1998. “The Shifting Project, The Shifting Self.” In Life Online: Researching Real Experience in Virtual Space, 61–83. Rowman Altamira. [Available from instructor]
 * Note: One of the earliest books on online life and one of the earliest attempts to do online interviewing. This is dated, but it highlights some important challenges.


 * Hutchinson, Emma. 2016. “Digital Methods and Perpetual Reinvention? Asynchronous Interviewing and Photo Elicitation.” In Digital Methods for Social Science: An Interdisciplinary Guide to Research Innovation, edited by Helene Snee, Christine Hine, Yvette Morey, Steven Roberts, and Hayley Watson, 143–56. London: Palgrave Macmillan UK. https://doi.org/10.1057/9781137453662_9.
 * Hawkins, Janice. 2018. “The Practical Utility and Suitability of Email Interviews in Qualitative Research.” The Qualitative Report 23 (2). https://digitalcommons.odu.edu/nursing_fac_pubs/24.

Alternate Accounts:

These texts are largely redundant with the required texts above but provide a different perspective and additional examples:


 * Salmons, Janet. 2014. Qualitative Online Interviews: Strategies, Design, and Skills. SAGE Publications. [Preface, TOC, and Chapter 1]
 * This is a book that lays out what claims to be a comprehensive account of online interviewing. I have the book and am happy to loan my copy to anybody in the class who thinks this will be a core part of their research.

Optional readings related to the ethics of identifying subjects:


 * Markham, Annette. 2012. “Fabrication as Ethical Practice.” Information, Communication & Society 15 (3): 334–53. https://doi.org/10.1080/1369118X.2011.641993.
 * Trevisan, Filippo, and Paul Reilly. 2014. “Ethical Dilemmas in Researching Sensitive Issues Online: Lessons from the Study of British Disability Dissent Networks.” Information, Communication & Society 17 (9): 1131–46. https://doi.org/10.1080/1369118X.2014.889188.

Wednesday April 13: Discourse Analysis
Required Readings:


 * Mitra, Ananda. 1999. “Characteristics of the WWW Text: Tracing Discursive Strategies.” Journal of Computer-Mediated Communication 5 (1): 0–0. https://doi.org/10.1111/j.1083-6101.1999.tb00330.x.
 * Thurlow, Crispin. 2018. “Digital Discourse: Locating Language in New/Social Media.” In The SAGE Handbook of Social Media, edited by Jean Burgess, Alice Marwick, and Thomas Poell, 135–45. London, UK: SAGE. https://doi.org/10.4135/9781473984066.
 * Brock, André. 2018. “Critical Technocultural Discourse Analysis.” New Media & Society 20 (3): 1012–30. https://doi.org/10.1177/1461444816677532.
 * Bouvier, Gwen, and David Machin. 2018. “Critical Discourse Analysis and the Challenges and Opportunities of Social Media.” Review of Communication 18 (3): 178–92. https://doi.org/10.1080/15358593.2018.1479881.

Optional Readings:


 * Kaun, Anne. 2010. “Open-Ended Online Diaries: Capturing Life as It Is Narrated.” International Journal of Qualitative Methods 9 (2): 133–48. https://doi.org/10.1177/160940691000900202.

Monday April 18: Content analysis
Required Readings:


 * McMillan, Sally J. 2000. “The Microscope and the Moving Target: The Challenge of Applying Content Analysis to the World Wide Web.” Journalism & Mass Communication Quarterly 77 (1): 80–98. https://doi.org/10.1177/107769900007700107.
 * Zamith, Rodrigo, and Seth C. Lewis. 2015. “Content Analysis and the Algorithmic Coder: What Computational Social Science Means for Traditional Modes of Media Analysis.” The Annals of the American Academy of Political and Social Science 659 (1): 307–18. https://doi.org/10.1177/0002716215570576.
 * Grimmer, Justin, and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts.” Political Analysis, January, mps028. https://doi.org/10.1093/pan/mps028.
 * DiMaggio, Paul, Manish Nag, and David Blei. 2013. “Exploiting Affinities between Topic Modeling and the Sociological Perspective on Culture: Application to Newspaper Coverage of U.S. Government Arts Funding.” Poetics, Topic Models and the Cultural Sciences, 41 (6): 570–606. https://doi.org/10.1016/j.poetic.2013.08.004.

Optional Readings:


 * Trilling, Damian, and Jeroen G. F. Jonkman. 2018. “Scaling up Content Analysis.” Communication Methods and Measures 12 (2–3): 158–74. https://doi.org/10.1080/19312458.2018.1447655.
 * Leetaru, Kalev Hannes. 2011. Data Mining Methods for the Content Analyst: An Introduction to the Computational Analysis of Content. Routledge Communication Series. New York, NY: Taylor and Francis. [Available through UW libraries].

I'm assuming you have at least a rough familiarity with content analysis as a methodology. If you're not as comfortable with it, check out the content analysis Wikipedia article to start. These readings provide more background on content analysis (in general, and online):


 * Neuendorf, K. A. 2002. The Content Analysis Guidebook. Thousand Oaks, CA: SAGE Publications. [Available from instructor]
 * Krippendorff, K. 2005. Content Analysis: An Introduction to Its Methodology. Thousand Oaks, CA: SAGE. [Available from instructor]

Examples of more traditional content analysis using online content:


 * Trammell, K. D., A. Tarkowski, J. Hofmokl, and A. M. Sapp. 2006. “Rzeczpospolita blogów (Republic of Blog): Examining Polish Bloggers through Content Analysis.” Journal of Computer-Mediated Communication 11 (3): 702–22.
 * Woolley, J. K., A. M. Limperos, and M. B. Oliver. 2010. “The 2008 Presidential Election, 2.0: A Content Analysis of User-Generated Political Facebook Groups.” Mass Communication and Society 13 (5): 631–52.
 * Maier, Daniel, A. Waldherr, P. Miltner, G. Wiedemann, A. Niekler, A. Keinert, B. Pfetsch, et al. 2018. “Applying LDA Topic Modeling in Communication Research: Toward a Valid and Reliable Methodology.” Communication Methods and Measures 12 (2–3): 93–118. https://doi.org/10.1080/19312458.2018.1430754.

A few other things related to topic modeling and sentiment analysis:


 * Barberá, P., R. Bonneau, P. Egan, J. T. Jost, J. Nagler, and J. Tucker. 2014. “Leaders or Followers? Measuring Political Responsiveness in the US Congress Using Social Media Data.” Paper presented at the Annual Meeting of the American Political Science Association.
 * Feldman, Ronen. 2013. “Techniques and Applications for Sentiment Analysis.” Communications of the ACM 56 (4): 82–90. https://doi.org/10.1145/2436256.2436274.
 * Baumer, Eric P. S., David Mimno, Shion Guha, Emily Quan, and Geri K. Gay. 2017. “Comparing Grounded Theory and Topic Modeling: Extreme Divergence or Unlikely Convergence?” Journal of the Association for Information Science and Technology 68 (6): 1397–1410. https://doi.org/10.1002/asi.23786.
 * Rudkowsky, Elena, Martin Haselmayer, Matthias Wastian, Marcelo Jenny, Štefan Emrich, and Michael Sedlmair. 2018. “More than Bags of Words: Sentiment Analysis with Word Embeddings.” Communication Methods and Measures 12 (2–3): 140–57. https://doi.org/10.1080/19312458.2018.1455817.

Wednesday April 20: Social network analysis
Required Readings:


 * Lazer, David. 2018. “Networks and Information Flow.” The Oxford Handbook of Networked Communication, April. https://doi.org/10.1093/oxfordhb/9780190460518.013.2.
 * Garton, Laura, Caroline Haythornthwaite, and Barry Wellman. 1997. “Studying Online Social Networks.” Journal of Computer-Mediated Communication 3 (1): 0–0. https://doi.org/10.1111/j.1083-6101.1997.tb00062.x.
 * Mislove, Alan, Massimiliano Marcon, Krishna P. Gummadi, Peter Druschel, and Bobby Bhattacharjee. 2007. “Measurement and Analysis of Online Social Networks.” In Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, 29–42. IMC ’07. New York, NY, USA: ACM. https://doi.org/10.1145/1298306.1298311.
 * Shumate, Michelle, and Edward T. Palazzolo. 2010. “Exponential Random Graph (P*) Models as a Method for Social Network Analysis in Communication Research.” Communication Methods and Measures 4 (4): 341–71. https://doi.org/10.1080/19312458.2010.527869.
 * Foucault Welles, Brooke, Anthony Vashevko, Nick Bennett, and Noshir Contractor. 2014. “Dynamic Models of Communication in an Online Friendship Network.” Communication Methods and Measures 8 (4): 223–43. https://doi.org/10.1080/19312458.2014.967843.
 * Freelon, Deen. 2018. “Partition-Specific Network Analysis of Digital Trace Data.” The Oxford Handbook of Networked Communication, April. https://doi.org/10.1093/oxfordhb/9780190460518.013.3.

Optional Readings:


 * Wellman, Barry. 2007. “Challenges in Collecting Personal Network Data: The Nature of Personal Network Analysis.” Field Methods 19 (2). http://journals.sagepub.com/doi/10.1177/1525822X06299133.
 * Yang, Jaewon, and Jure Leskovec. 2015. “Defining and Evaluating Network Communities Based on Ground-Truth.” Knowledge and Information Systems 42 (1): 181–213. https://doi.org/10.1007/s10115-013-0693-z.
 * Centola, Damon. 2010. “The Spread of Behavior in an Online Social Network Experiment.” Science 329 (5996): 1194–97. https://doi.org/10.1126/science.1185231.
 * [Example] Jackson, Sarah J., and Brooke Foucault Welles. 2015. “Hijacking #myNYPD: Social Media Dissent and Networked Counterpublics.” Journal of Communication 65 (6): 932–52. https://doi.org/10.1111/jcom.12185.

Network datasets:


 * Stanford Large Network Dataset Collection, which contains a variety of network datasets. Many, but certainly not all, are social networks.

Monday April 25: Visual Analysis
Required Readings:


 * Faulkner, Simon, Farida Vis, and Francesco D’Orazio. 2018. “Analysing Social Media Images.” In The SAGE Handbook of Social Media, edited by Jean Burgess, Alice Marwick, and Thomas Poell, 160–78. London, UK: SAGE. https://doi.org/10.4135/9781473984066.
 * Casas, Andreu, and Nora Webb Williams. 2019. “Images That Matter: Online Protests and the Mobilizing Role of Pictures.” Political Research Quarterly 72 (2): 360–75. https://doi.org/10.1177/1065912918786805.
 * Casas, Andreu, and Nora Webb Williams. 2017. “Computer Vision for Political Science Research: A Study of Online Protest Images.” Working paper. University Park, PA: Pennsylvania State University. http://andreucasas.com/casas_webb_williams_NewFaces2017_images_as_data.pdf.
 * Hochman, Nadav, and Raz Schwartz. 2012. “Visualizing Instagram: Tracing Cultural Visual Rhythms.” In Sixth International AAAI Conference on Weblogs and Social Media. https://pdfs.semanticscholar.org/780d/c7ff86eb36731d5faa043ac635cbae6bbe45.pdf.

Optional Readings:


 * Torralba, Antonio. 2009. “Understanding Visual Scenes.” Tutorial presented at the NIPS, Vancouver, BC, Canada. http://videolectures.net/nips09_torralba_uvs/.
 * Note: This is a two-part (each part is one hour) lecture and tutorial by an expert in computer vision. I strongly recommend watching Part I. I think this gives you a good sense of the nature of the kinds of challenges that were (and still are) facing the field of computer vision and anybody trying to have their computer look at images.


 * Hochman, Nadav, and Lev Manovich. 2013. “Zooming into an Instagram City: Reading the Local through Social Media.” First Monday 18 (7). https://firstmonday.org/article/view/4711/3698.

These five papers all take technical approaches to image classification using Internet-based image datasets like Flickr, Google Image Search, Google Street View, or Instagram. Each describes interesting and challenging technical issues. If you're interested, reading these is a great way to get a sense of the state of the art and of what is and isn't possible:


 * Jaffe, Alexandar, Mor Naaman, Tamir Tassa, and Marc Davis. 2006. “Generating Summaries and Visualization for Large Collections of Geo-Referenced Photographs.” In Proceedings of the 8th ACM International Workshop on Multimedia Information Retrieval, 89–98. MIR ’06. New York, NY, USA: ACM. https://doi.org/10.1145/1178677.1178692.
 * Simon, Ian, Noah Snavely, and Steven M. Seitz. 2007. “Scene Summarization for Online Image Collections.” In Computer Vision, IEEE International Conference On, 0:1–8. Los Alamitos, CA, USA: IEEE Computer Society. https://doi.org/10.1109/ICCV.2007.4408863.
 * Crandall, David J., Lars Backstrom, Daniel Huttenlocher, and Jon Kleinberg. 2009. “Mapping the World’s Photos.” In Proceedings of the 18th International Conference on World Wide Web, 761–770. WWW ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1526709.1526812.
 * San Pedro, Jose, and Stefan Siersdorfer. 2009. “Ranking and Classifying Attractiveness of Photos in Folksonomies.” In Proceedings of the 18th International Conference on World Wide Web, 771–780. WWW ’09. New York, NY, USA: ACM. https://doi.org/10.1145/1526709.1526813.
 * Doersch, Carl, Saurabh Singh, Abhinav Gupta, Josef Sivic, and Alexei A. Efros. 2012. “What Makes Paris Look like Paris?” ACM Trans. Graph. 31 (4): 101:1–101:9. https://doi.org/10.1145/2185520.2185597.

Wednesday April 27: Design Research
Required Readings:


 * Bernstein, Michael S., Mark S. Ackerman, Ed H. Chi, and Robert C. Miller. 2011. “The Trouble with Social Computing Systems Research.” In CHI ’11 Extended Abstracts on Human Factors in Computing Systems, 389–98. CHI EA ’11. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1979742.1979618.
 * Ackerman, Mark S. 2000. “The Intellectual Challenge of CSCW: The Gap between Social Requirements and Technical Feasibility.” Human–Computer Interaction 15 (2–3): 179–203. https://doi.org/10.1207/S15327051HCI1523_5.
 * Gilbert, Eric. 2012. “Designing Social Translucence over Social Networks.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2731–40. CHI ’12. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2207676.2208670.
 * Grevet, Catherine, and Eric Gilbert. 2015. “Piggyback Prototyping: Using Existing, Large-Scale Social Computing Systems to Prototype New Ones.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 4047–56. CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702395.

Optional Readings:


 * Olsen, Dan R., Jr. 2007. “Evaluating User Interface Systems Research.” In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology, 251–58. UIST ’07. New York, NY, USA: ACM. https://doi.org/10.1145/1294211.1294256.
 * Grudin, Jonathan. 1988. “Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces.” In Proceedings of the 1988 ACM Conference on Computer-Supported Cooperative Work, 85–93. CSCW ’88. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/62266.62273.
 * Zhang, Amy X., Grant Hugh, and Michael S. Bernstein. 2020. “PolicyKit: Building Governance in Online Communities.” In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 365–78. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3379337.3415858.

Monday May 2: Consulting Day
We will not meet together as a group today. Instead, I will schedule an hour-long, one-on-one, in-person meeting with each student to catch up with you about your project and to work directly with you to resolve any technical issues you have run into with data collection.

Wednesday May 4: Consulting Day
We will not meet together as a group today. Instead, I will schedule an hour-long, one-on-one, in-person meeting with each student to catch up with you about your project and to work directly with you to resolve any technical issues you have run into with data collection.

Monday May 9: Class Cancelled
Class was cancelled due to health issues.

Wednesday May 11: Experiments
Required Readings:


 * Reips, Ulf-Dietrich. 2002. “Standards for Internet-Based Experimenting.” Experimental Psychology 49 (4): 243–56. https://doi.org/10.1026//1618-3169.49.4.243.
 * Salganik, Matthew J., Peter Sheridan Dodds, and Duncan J. Watts. 2006. “Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market.” Science 311 (5762): 854–56. https://doi.org/10.1126/science.1121066.
 * Hergueux, Jérôme, and Nicolas Jacquemet. 2014. “Social Preferences in the Online Laboratory: A Randomized Experiment.” Experimental Economics 18 (2): 251–83. https://doi.org/10.1007/s10683-014-9400-5.
 * Rijt, Arnout van de, Soong Moon Kang, Michael Restivo, and Akshay Patil. 2014. “Field Experiments of Success-Breeds-Success Dynamics.” Proceedings of the National Academy of Sciences 111 (19): 6934–39. https://doi.org/10.1073/pnas.1316836111.
 * Narayan, Sneha, Nathan TeBlunthuis, Wm Salt Hale, Benjamin Mako Hill, and Aaron Shaw. 2019. “All Talk: How Increasing Interpersonal Communication on Wikis May Not Enhance Productivity.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 101:1–101:19. https://doi.org/10.1145/3359203.

Optional Readings:


 * Eckles, Dean, Brian Karrer, and Johan Ugander. 2017. “Design and Analysis of Experiments in Networks: Reducing Bias from Interference.” Journal of Causal Inference 5 (1). https://doi.org/10.1515/jci-2015-0021.
 * This piece sits at the intersection of networks and experiments. It's very important but is probably too technical to assign to the whole class.


 * Kohavi, Ron, Alex Deng, Brian Frasca, Toby Walker, Ya Xu, and Nils Pohlmann. 2013. “Online Controlled Experiments at Large Scale.” In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1168–1176. KDD ’13. Chicago, Illinois, USA: Association for Computing Machinery. https://doi.org/10.1145/2487575.2488217.
 * Reinecke, Katharina, and Krzysztof Z. Gajos. 2015. “LabintheWild: Conducting Large-Scale Online Experiments With Uncompensated Samples.” In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, 1364–1378. CSCW ’15. New York, NY, USA: ACM. https://doi.org/10.1145/2675133.2675246.
 * Zhu, Haiyi, Amy Zhang, Jiping He, Robert E. Kraut, and Aniket Kittur. 2013. “Effects of Peer Feedback on Contribution: A Field Experiment in Wikipedia.” In, 2253. ACM Press. https://doi.org/10.1145/2470654.2481311.
 * Zhang, Xiaoquan (Michael), and Feng Zhu. 2011. “Group Size and Incentives to Contribute: A Natural Experiment at Chinese Wikipedia.” American Economic Review 101 (4): 1601–15. https://doi.org/10.1257/aer.101.4.1601.
 * Weninger, Tim, Thomas James Johnston, and Maria Glenski. 2015. “Random Voting Effects in Social-Digital Spaces: A Case Study of Reddit Post Submissions.” In Proceedings of the 26th ACM Conference on Hypertext & Social Media, 293–97. HT ’15. Guzelyurt, Northern Cyprus: Association for Computing Machinery.

Monday May 16: Surveys
Required Readings:


 * Fricker, Jr., Ronald D., and Katja Lozar Manfreda. 2017. “Sampling Methods for Online Surveys.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2nd edition, 162–83. London, UK: SAGE.
 * Walejko, Gina. 2009. “Online Survey: Instant Publication, Instant Mistake, All of the Above.” In Research Confidential: Solutions to Problems Most Social Scientists Pretend They Never Have, edited by Eszter Hargittai, 101–21. Ann Arbor, MI: University of Michigan Press.
 * Konstan, Joseph A., B. R. Simon Rosser, Michael W. Ross, Jeffrey Stanton, and Weston M. Edwards. 2005. “The Story of Subject Naught: A Cautionary but Optimistic Tale of Internet Survey Research.” Journal of Computer-Mediated Communication 10 (2): 00–00. https://doi.org/10.1111/j.1083-6101.2005.tb00248.x.
 * Hill, Benjamin Mako, and Aaron Shaw. 2013. “The Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with Propensity Score Estimation.” PLoS ONE 8 (6): e65782. https://doi.org/10.1371/journal.pone.0065782.
 * Salganik, Matthew J., and Karen E. C. Levy. 2015. “Wiki Surveys: Open and Quantifiable Social Data Collection.” PLOS ONE 10 (5): e0123483. https://doi.org/10.1371/journal.pone.0123483.
 * Note: This journalistic account of the research may also be useful.


 * Alperin, Juan Pablo, Erik Warren Hanson, Kenneth Shores, and Stefanie Haustein. 2017. “Twitter Bot Surveys: A Discrete Choice Experiment to Increase Response Rates.” In Proceedings of the 8th International Conference on Social Media & Society, 1–4. #SMSociety17. Toronto, ON, Canada: Association for Computing Machinery. https://doi.org/10.1145/3097286.3097313.

Optional Readings:


 * Van Selm, Martine, and Nicholas W. Jankowski. 2006. “Conducting Online Surveys.” Quality and Quantity 40 (3): 435–56. https://doi.org/10.1007/s11135-005-8081-8.
 * Vehovar, Vasja, and Katja Lozar Manfreda. 2017. “Overview: Online Surveys.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2nd edition, 143–61. London, UK: SAGE.
 * Kaczmirek, Lars. 2017. “Online Survey Software.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2nd edition, 203–19. London, UK: SAGE.
 * Toepoel, Vera. 2017. “Online Survey Design.” In The SAGE Handbook of Online Research Methods, edited by Nigel G. Fielding, Raymond M. Lee, and Grant Blank, 2nd edition, 184–202. London, UK: SAGE.
 * Mavletova, Aigul, and Mick P. Couper. 2014. “Mobile Web Survey Design: Scrolling versus Paging, SMS versus E-Mail Invitations.” Journal of Survey Statistics and Methodology 2 (4): 498–518. https://doi.org/10.1093/jssam/smu015.
 * Yun, Gi Woong, and Craig W. Trumbo. 2000. “Comparative Response to a Survey Executed by Post, e-Mail, & Web Form.” Journal of Computer-Mediated Communication 6 (1): 0–0. https://doi.org/10.1111/j.1083-6101.2000.tb00112.x.
 * Hargittai, Eszter, and Chris Karr. 2009. “WAT R U DOIN? Studying the Thumb Generation Using Text Messaging.” In Research Confidential: Solutions to Problems Most Social Scientists Pretend They Never Have, edited by Eszter Hargittai, 192–216. Ann Arbor, MI: University of Michigan Press.

If you don't have a background in survey design, these two have been recommended by our guest speaker as good basic things to read:


 * Krosnick, Jon A. 1999. “Maximizing Measurement Quality: Principles of Good Questionnaire Design.” In Measures of Political Attitudes, edited by John P. Robinson, Phillip R. Shaver, and Lawrence S. Wrightsman. New York: Academic Press.
 * Krosnick, Jon A. 1999. “Survey Research.” Annual Review of Psychology 50 (1): 537–67. https://doi.org/10.1146/annurev.psych.50.1.537.

Tools for doing mobile surveys:


 * RapidSMS
 * Twilio

Wednesday May 18: Digital Trace and Sensor Data
Required Readings:


 * Müller, Jörg, Sergi Fàbregues, Elisabeth Anna Guenther, and María José Romano. 2019. “Using Sensors in Organizational Research—Clarifying Rationales and Validation Challenges for Mixed Methods.” Frontiers in Psychology 10. https://www.frontiersin.org/article/10.3389/fpsyg.2019.01188.
 * Eagle, Nathan. 2011. “Mobile Phones as Sensors for Social Research.” In The Handbook of Emergent Technologies in Social Research, 492–521. Oxford, UK: Oxford University Press.
 * Jiang, Jie, Riccardo Pozza, Nigel Gilbert, and Klaus Moessner. 2020. “MakeSense: An IoT Testbed for Social Research of Indoor Activities.” ACM Transactions on Internet of Things 1 (3): 17:1-17:25. https://doi.org/10.1145/3381914.
 * Note: I'm mostly including this as a useful example of a home/IoT-based approach to sensors. There's a bunch of technical detail on the system here, but please skip/skim that detail.


 * Greshake Tzovaras, Bastian, Misha Angrist, Kevin Arvai, Mairi Dulaney, Vero Estrada-Galiñanes, Beau Gunderson, Tim Head, et al. 2019. “Open Humans: A Platform for Participant-Centered Research and Personal Data Exploration.” GigaScience 8 (6): giz076. https://doi.org/10.1093/gigascience/giz076.

Optional:


 * Blumenstock, Joshua, Gabriel Cadamuro, and Robert On. 2015. “Predicting Poverty and Wealth from Mobile Phone Metadata.” Science 350 (6264): 1073–76. https://doi.org/10.1126/science.aac4420.
 * Menchen-Trevino, Ericka. 2018. “Digital Trace Data and Social Research: A Proactive Research Ethics.” Edited by Brooke Foucault Welles and Sandra González-Bailón. In The Oxford Handbook of Networked Communication. Oxford, UK: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190460518.013.25.
 * Rom, Adina, Isabel Günther, and Yael Borofsky. 2020. “Using Sensors to Measure Technology Adoption in the Social Sciences.” Development Engineering 5 (January): 100056. https://doi.org/10.1016/j.deveng.2020.100056.
 * Steele, Jessica E., Pål Roe Sundsøy, Carla Pezzulo, Victor A. Alegana, Tomas J. Bird, Joshua Blumenstock, Johannes Bjelland, et al. 2017. “Mapping Poverty Using Mobile Phone and Satellite Data.” Journal of The Royal Society Interface 14 (127): 20160690. https://doi.org/10.1098/rsif.2016.0690.
 * Struminskaya, Bella, Peter Lugtig, Florian Keusch, and Jan Karem Höhne. 2020. “Augmenting Surveys with Data from Sensors and Apps: Opportunities and Challenges.” Social Science Computer Review, December, 0894439320979951. https://doi.org/10.1177/0894439320979951.
 * Wiebe, Douglas J., and Christopher N. Morrison. 2018. “Digital Mapping of Urban Mobility Patterns.” Edited by Brooke Foucault Welles and Sandra González-Bailón. In The Oxford Handbook of Networked Communication. Oxford, UK: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190460518.013.25.

Monday May 23: Consulting Day
We will not meet together as a group today. Instead, I will schedule an hour-long, one-on-one, in-person meeting with each student to catch up with you about your project and to work directly with you to resolve any technical issues you have run into with data collection.

Wednesday May 25: Consulting Day
We will not meet together as a group today. Instead, I will schedule an hour-long, one-on-one, in-person meeting with each student to catch up with you about your project and to work directly with you to resolve any technical issues you have run into with data collection.

Wednesday June 1: Virtual Final Presentations
The plan for final presentations is as follows:


 * 1) Everybody should record and share a copy of their final presentation by Thursday June 2nd 11:59pm.
 * 2) Everybody should view everybody else's presentations and give them feedback by Sunday June 5th 11:59pm.

I've pushed these deadlines back because my own travel schedule means I'm not likely to be able to review these before Friday June 3rd. I'm going to try to get everybody feedback by then.

Presentation content and form
Your projects are at different stages, so there will be variation in what is presented. That said, I expect nearly everyone will give one of two kinds of presentations:


 * 1) An overview and summary of your final project in its current state so that your classmates and I can give you feedback that is useful for your final written project, due a week later. Present your research questions and context and walk us through the key deliverables and your current progress. Emphasize your methods, since this is what we will be best positioned to provide feedback on. If you have specific things you want feedback on, please communicate this during your talk and/or on Discord.
 * 2) If your project is a complete paper, you might want to instead do a full research presentation like what you would give at a conference. This would be fine as well.

Each presentation should be between 8 and 12 minutes and absolutely no longer than 15 minutes. I expect most people will use slides, but walking through a poster could work too. I'm open/flexible and you're welcome to be creative.

Recording and sharing your presentation
My suggestion is that everybody share their presentation by placing a link to a video recording directly in the  channel on Discord. Just create a new message in the channel.

There are many ways to record your presentation. Here are some ideas:


 * Probably the easiest way is to join a Zoom room using your UW institutional Zoom account, share your screen, and record it. If you ensure that you've enabled public link sharing, you should be able to link directly to the Zoom recording.
 * Record using Open Broadcaster Software (OBS), which is used by lots of streamers.
 * Try any number of other options (I put a list together earlier this year).

Besides sharing directly from Zoom, you can share your file with Dropbox, Google Drive, a non-searchable YouTube video, etc.

Feedback
Once the videos are uploaded, everybody should watch every video and then provide feedback on Discord:


 * My expectation is that everybody will spend 10-15 minutes writing feedback for each classmate.
 * To leave feedback, leave it in the Discord thread associated with each video. The threads will be listed underneath the channel in the channel listing sidebar. If a thread doesn't exist yet, you can just mouse over the message in the main channel and create one. Let's name them something like "Mako's Presentation"

There will be 8 presentations (there is one 2-person project) so this will work out to a maximum of 2 hours watching videos and about 2 hours leaving feedback. Since I had planned to do two classes for final presentations, this works out about right. I understand that you'll have more feedback to give to some folks than others but do try to keep this time target in mind and do try to give feedback to everybody.

<!--

Crowdsourcing, Digital Labor Markets, and Human Computation

 * Note: I've marked individual items as [Required] below because I thought it made more sense to keep the topical groups of articles intact.

MTurk documentation and guidelines:


 * [Required] Amazon Mechanical Turk Requester UI Guide — Skim, but make sure you're ready to submit tasks.
 * [Required] Amazon Mechanical Turk Best Practices Guide — Skim, but make sure you're ready to submit tasks.
 * [Required] Shaw, Aaron. 2015. “Hired Hands and Dubious Guesses: Adventures in Crowdsourced Data Collection.” In Digital Research Confidential: The Secrets of Studying Behavior Online, edited by Eszter Hargittai and Christian Sandvig. The MIT Press.
 * [Required] Tutorials Posted on the MTurk blog — Skim and browse, paying attention to things like what you'd like to do in the class session.
 * [Required] Guidelines for Academic Requesters and Basics of How to be a good Requester from We Are Dynamo — These guidelines were created by Turkers as part of the effort at collective action and organizing run by Niloufar Salehi and described in the paper below.
 * Mason, Winter, and Siddharth Suri. 2011. “Conducting Behavioral Research on Amazon’s Mechanical Turk.” Behavior Research Methods 44 (1): 1–23. https://doi.org/10.3758/s13428-011-0124-6. — Dated but still somewhat useful.

Overviews of MTurk and issues of data quality:


 * Horton, John J., David G. Rand, and Richard J. Zeckhauser. 2011. “The Online Laboratory: Conducting Experiments in a Real Labor Market.” Experimental Economics 14 (3): 399–425. https://doi.org/10.1007/s10683-011-9273-9.
 * Buhrmester, Michael, Tracy Kwang, and Samuel D. Gosling. 2011. “Amazon’s Mechanical Turk: A New Source of Inexpensive, yet High-Quality, Data?” Perspectives on Psychological Science, February. https://doi.org/10.1177/1745691610393980.
 * Casler, Krista, Lydia Bickel, and Elizabeth Hackett. 2013. “Separate but Equal? A Comparison of Participants and Data Gathered via Amazon’s MTurk, Social Media, and Face-to-Face Behavioral Testing.” Computers in Human Behavior 29 (6): 2156–60. https://doi.org/10.1016/j.chb.2013.05.009.
 * [Required] Weinberg, Jill, Jeremy Freese, and David McElhattan. 2014. “Comparing Data Characteristics and Results of an Online Factorial Survey between a Population-Based and a Crowdsource-Recruited Sample.” Sociological Science 1: 292–310. https://doi.org/10.15195/v1.a19.
 * Kees, Jeremy, Christopher Berry, Scot Burton, and Kim Sheehan. 2017. “An Analysis of Data Quality: Professional Panels, Student Subject Pools, and Amazon’s Mechanical Turk.” Journal of Advertising 46 (1): 141–55. https://doi.org/10.1080/00913367.2016.1269304.
 * [Required] Kennedy, Ryan, Scott Clifford, Tyler Burleigh, Ryan Jewell, and Philip Waggoner. 2018. “The Shape of and Solutions to the MTurk Quality Crisis.” SSRN Scholarly Paper ID 3272468. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3272468.

Culture and work conditions for Turkers:


 * Irani, Lilly. 2015. “The Cultural Work of Microwork.” New Media & Society 17 (5): 720–39. https://doi.org/10.1177/1461444813511926.
 * Kittur, Aniket, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. “The Future of Crowd Work.” In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, 1301–1318. CSCW ’13. San Antonio, Texas, USA: Association for Computing Machinery. https://doi.org/10.1145/2441776.2441923.
 * Gray, Mary L., Siddharth Suri, Syed Shoaib Ali, and Deepti Kulkarni. 2016. “The Crowd Is a Collaborative Network.” In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 134–147. CSCW ’16. San Francisco, California, USA: Association for Computing Machinery. https://doi.org/10.1145/2818048.2819942.
 * [Required] Semuels, Alana. 2018. “The Internet Is Enabling a New Kind of Poorly Paid Hell.” The Atlantic. January 23, 2018. https://www.theatlantic.com/business/archive/2018/01/amazon-mechanical-turk/551192/.

Systems to improve Turker experiences:


 * Salehi, Niloufar, Lilly C. Irani, Michael S. Bernstein, Ali Alkhatib, Eva Ogbe, Kristy Milland, and Clickhappier. 2015. “We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1621–1630. CHI ’15. Seoul, Republic of Korea: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702508.
 * Irani, Lilly C., and M. Six Silberman. 2013. “Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 611–620. CHI ’13. Paris, France: Association for Computing Machinery. https://doi.org/10.1145/2470654.2470742.

Assignments to complete before class:

The first task is to complete a task as a crowd worker:


 * If you are a US citizen: Sign up for a worker account on MTurk. Then find and complete at least 5 "HITs" as a worker on Amazon Mechanical Turk.
 * If you are not a US citizen or cannot sign up on MTurk for some other reason: Complete 3-4 classification tasks in at least 2 different Zooniverse projects of your choice. Also complete at least one "study" in Lab in the Wild.
 * In either case: Record (write down) details and notes about your tasks: What did you do? Who was the requester? What was the purpose of the task (as best you could tell)? What was the experience like? What research applications can you (not) imagine for this kind of system?

The second task is to get ready to launch a task as a requester. We will design and launch tasks in class, but I want you to do the following ahead of time:


 * Create a "requester" account on Amazon Mechanical Turk. Approval may take up to 48 hours, so please do this immediately so your account is ready to go in class.
 * Put money into your requester account to pay workers. A $5 budget should be sufficient for our class. MTurk should accept any payment method that Amazon does.
 * Think of at least one small classification or coding task (e.g., of Tweets, images, etc.) and one human subjects data collection task (e.g., a survey or a survey experiment) that you would like to run. You will have a budget of $5 to run the task!
 * If running this task will involve data (e.g., a set of images, URLs, or Tweets), collect that material in a spreadsheet before class. If it will involve a survey, create your survey in Google Forms, SurveyMonkey, or Qualtrics before class.

Hyperlink Networks

 * Park, Han Woo. 2003. “Hyperlink Network Analysis: A New Method for the Study of Social Structure on the Web.” Connections 25 (1): 49–61. [Available through Canvas]
 * González-Bailón, Sandra. 2009. “Opening the Black Box of Link Formation: Social Factors Underlying the Structure of the Web.” Social Networks 31 (4): 271–80. https://doi.org/10.1016/j.socnet.2009.07.003.
 * [Example] Elgin, Dallas J. 2015. “Utilizing Hyperlink Network Analysis to Examine Climate Change Supporters and Opponents.” Review of Policy Research 32 (2): 226–45. https://doi.org/10.1111/ropr.12118.

Optional readings:


 * Jackson, Michele H. 1997. “Assessing the Structure of Communication on the World Wide Web.” Journal of Computer-Mediated Communication 3 (1). https://doi.org/10.1111/j.1083-6101.1997.tb00063.x.
 * Ackland, Robert. 2016. “WWW Hyperlink Networks.” Lecture Slides presented at the SOCR8006 Online Research Methods, Canberra, Australia. http://vosonlab.net/papers/ACSPRISummer2017/Lecture_Hyperlink_Networks.pdf.
 * Lusher, Dean, and Robert Ackland. 2011. “A Relational Hyperlink Analysis of an Online Social Movement.” Journal of Social Structure 12 (5). https://www.cmu.edu/joss/content/articles/volume12/Lusher/.
 * [Example] Shumate, Michelle, and Lori Dewitt. 2008. “The North/South Divide in NGO Hyperlink Networks.” Journal of Computer-Mediated Communication 13 (2): 405–28. https://doi.org/10.1111/j.1083-6101.2008.00402.x.

Tools for collecting hyperlink network data:


 * Issue Crawler — network mapping software from the Govcom.org Foundation in Amsterdam, developed in a group run by Richard Rogers
 * Virtual Observatory for the Study of Online Networks (VOSON) — "web-based software incorporating web mining, data visualisation, and traditional empirical social science methods (e.g. social network analysis, SNA). Text analysis, dataset manipulation and visualisation, and social network analysis (SNA) are available within an integrated environment."


Teaching and learning with COVID-19
The COVID-19 pandemic will impact this course in various ways, some of them obvious and tangible and others harder to pin down. On the obvious and tangible front, some of us will likely be wearing masks. UW has made it very clear that anyone who feels sick cannot come to campus or class. This might translate into some hybrid course sessions at some point over the quarter. Since the room we'll be meeting in is not set up for hybrid learning, there's a possibility that we might end up having to move whole sessions online. All of this will reshape our collective "classroom" experience in major ways.

On the "harder to pin down" side, many of us may experience elevated levels of exhaustion, stress, uncertainty and distraction. We may need to provide unexpected support to family, friends, or others in our communities. I have personally experienced all of these things at various times over the pandemic and I expect that some of you have too. It is a difficult time.

I believe it is important to acknowledge these realities of the situation and create the space to discuss and process them in the context of our class throughout the quarter. As your instructor and colleague, I commit to do my best to approach the course in an adaptive, generous, and empathetic way. I will try to be transparent and direct with you throughout—both with respect to the course material as well as the pandemic and the university's evolving response to it. I ask that you try to extend a similar attitude towards everyone in the course. When you have questions, feedback, or concerns, please try to share them in an appropriate way. If you require accommodations of any kind at any time (directly related to the pandemic or not), please contact the teaching team.


 * This text is borrowed and adapted from Aaron Shaw's statistics course.

Your Presence in Class
As detailed in my page on assessment, your participation in discussion is an important way that I will assess learning. Obviously, you must be in class in order to participate. In the event of an absence, you are responsible for obtaining notes, handouts, assignments, etc. If you can't come to campus due to COVID-19 related issues, please be in contact as soon as you can and we'll figure it out. Don't risk your health or the health of your classmates.

Religious Accommodations
Washington state law requires that UW develop a policy for accommodation of student absences or significant hardship due to reasons of faith or conscience, or for organized religious activities. The UW’s policy, including more information about how to request an accommodation, is available at Religious Accommodations Policy. Accommodations must be requested within the first two weeks of this course using the Religious Accommodations Request form.

Student Conduct
The University of Washington Student Conduct Code (WAC 478-121) defines prohibited academic and behavioral conduct and describes how the University holds students accountable as they pursue their academic goals. Allegations of misconduct by students may be referred to the appropriate campus office for investigation and resolution. More information can be found online at https://www.washington.edu/studentconduct/

Safety

Call SafeCampus at 206-685-7233 anytime–no matter where you work or study–to anonymously discuss safety and well-being concerns for yourself or others. SafeCampus’s team of caring professionals will provide individualized support, while discussing short- and long-term solutions and connecting you with additional resources when requested.

Academic Dishonesty
This includes: cheating on assignments, plagiarizing (misrepresenting work by another author as your own, paraphrasing or quoting sources without acknowledging the original author, or using information from the internet without proper citation), and submitting the same or similar paper to meet the requirements of more than one course without instructor approval. Academic dishonesty in any part of this course is grounds for failure and further disciplinary action. The first incident of plagiarism will result in the student’s receiving a zero on the plagiarized assignment. The second incident of plagiarism will result in the student’s receiving a zero in the class.

Disability Resources
If you have already established accommodations with Disability Resources for Students (DRS), please communicate your approved accommodations to me at your earliest convenience so we can discuss your needs in this course.

If you have not yet established services through DRS, but have a temporary health condition or permanent disability that requires accommodations (conditions include but are not limited to: mental health, attention-related, learning, vision, hearing, physical, or health impacts), you are welcome to contact DRS at 206-543-8924 or uwdrs@uw.edu or disability.uw.edu. DRS offers resources and coordinates reasonable accommodations for students with disabilities and/or temporary health conditions. Reasonable accommodations are established through an interactive process between you, your instructor(s), and DRS. It is the policy and practice of the University of Washington to create inclusive and accessible learning environments consistent with federal and state law.

Other Student Support
Any student who has difficulty affording groceries or accessing sufficient food to eat every day, or who lacks a safe and stable place to live, and believes this may affect their performance in the course, is urged to contact the graduate program advisor for support. Furthermore, please notify the professor if you are comfortable doing so. This will enable us to provide any resources that we may possess (adapted from Sara Goldrick-Rab). Please also note the student food pantry, Any Hungry Husky at the ECC.

Credit and Notes
This will be the third time I have taught this course at UW in its current form. This syllabus draws heavily from previous versions. Syllabuses from earlier classes can be found online at:


 * Internet Research Methods (Winter 2020)
 * Internet Research Methods (Spring 2016)
 * Internet Research Methods (Spring 2015)

This syllabus was inspired by, and borrows with permission from, a syllabus from an earlier version of this class taught by Kirsten Foot. Professor Foot last taught the course in Spring 2014.