Content Integrity and Disinformation Risks Across Wikipedia Language Editions
==Purpose of the Study==

The purpose of this study is to better understand the disinformation and content integrity risks across Wikipedia language editions. While there has been extensive research on “one-off” risks to knowledge integrity – in the form of vandalism, sockpuppet editing, and “edit wars” at the article level – there has been little empirical examination of systematic knowledge integrity risks to entire Wikipedia language projects.

Specifically, we are interested in understanding whether some Wikipedia language editions are more vulnerable to systematic disinformation and ideologically motivated editing than others, and why. We are also interested in understanding the cross-wiki monitoring mechanisms currently in place to defend against systematic disinformation risks across Wikipedia editions.