Content Integrity and Disinformation Risks Across Wikipedia Language Editions

Revision as of 21:38, 30 March 2022

This page provides information on a research study seeking to understand content integrity and disinformation risks across Wikipedia language editions. We ask that you read this page and ask any questions you may have before agreeing to be in the study.

The study is being conducted by PhD student Zarine Kharazian in the Department of Human Centered Design & Engineering, and Dr. Benjamin Mako Hill, Assistant Professor in the Department of Communication, at the University of Washington.

Purpose of the Study

The purpose of this study is to better understand the disinformation and content integrity risks across Wikipedia language editions. While there has been extensive research on “one-off” risks to knowledge integrity, in the form of vandalism, sockpuppet editing, and “edit wars” at the article level, there has been little empirical examination of systematic knowledge integrity risks to entire Wikipedia language projects. Specifically, we are interested in understanding whether some Wikipedia language editions are more vulnerable to systematic disinformation and ideologically motivated editing than others, and why. We are also interested in understanding the cross-wiki monitoring mechanisms currently in place to defend against systematic disinformation risks across Wikipedia editions.

In recent years, as large, for-profit platforms such as Facebook, YouTube, and Twitter have faced increasing scrutiny for their content moderation practices, Wikipedia has been hailed as an alternative model for how platforms can combat problematic information and maintain information integrity and reliability. Existing studies of Wikipedia and problematic information, however, focus largely on English-language Wikipedia, the largest and best-resourced edition. Wikipedia currently consists of over 300 language editions. Because each edition functions somewhat independently, diverging on points of policy and differing in other key characteristics such as size and community capacity, language editions vary in how successfully they combat misinformation.

The recent case of Croatian Wikipedia, which has received attention from both the media and the Wikimedia Foundation, demonstrates some of the difficulties that can befall small language communities on the platform. According to a [https://meta.wikimedia.org/wiki/Croatian_Wikipedia_Disinformation_Assessment-2021 2021 report] commissioned by the WMF, over the course of a decade a small group of ideologically motivated editors exploited the 2003 split of the larger Serbo-Croatian Wikipedia community into three additional national Wikipedias (Bosnian, Serbian, and Croatian) to seize control of the newly created Croatian Wikipedia. The report found that this group systematically introduced far-right bias into articles and stamped out dissenting editorial voices that attempted to introduce a neutral point of view.

While Croatian Wikipedia may be an extreme case, small language communities across Wikipedia may be uniquely vulnerable to particular disinformation risks, such as the capture of an entire project by a cohort of ideologically motivated editors observed on Croatian Wikipedia.

Eligibility to Participate

Participants must be adults (i.e., at least 18 years old in the United States, or over the age of majority in your jurisdiction), be able to conduct an interview in English, and have experience participating in, or being impacted by, the community activities examined in this study.

Procedures for the Study

If you agree to be in the study, you will first complete a short questionnaire asking about your experience with Wikipedia and particular language editions. You will then participate in an interview conducted remotely by video, using either Zoom or a platform of your choice. This interview will include questions about your experiences as a community member or stakeholder contributing to a Wikipedia language project or participating in cross-wiki monitoring or administrative activities, how you are involved in these processes, and how you perceive the community or communities with which you are involved.

The interview should last between 30 and 90 minutes. The interviews will be recorded and then transcribed by the research team. We will delete the recordings themselves once the interviews are fully transcribed. We will also edit out any identifiers from these transcripts so that all retained records are anonymous.

Voluntary Participation and Withdrawal

Taking part in this study is voluntary. You may choose not to take part or may leave the study at any time. Leaving the study will not result in any penalty or loss of benefits to which you are entitled. Your decision of whether or not to participate in this study will not affect your current or future relations with the University of Washington.

Confidentiality

We will endeavor to keep your personal information confidential. However, we cannot guarantee absolute confidentiality, and your personal information may be disclosed if required by law. Your identifying information, including our records of which communities we asked you about and the audio/video recordings of our conversations, will be held in confidence until the reports resulting from the study are published, after which it will be deleted. Your email address and username will be retained for as long as required by the university's financial compliance policies and will be deleted as soon as possible.

Payment for Participation

You will be compensated with a $20 gift card, unless you choose to opt out of compensation.

Questions or Problems

If you have questions, concerns, or complaints, contact the lead researcher, Zarine Kharazian, at zkharaz@uw.edu.

If you have questions about your rights as a research subject, you may contact the UW Human Subjects Division at 206-543-0098 or hsdinfo@uw.edu.

Screening Questionnaire

To participate in this research study, please fill out this Google Form: https://forms.gle/U16bv6bHrGAR9oZq9