COVID-19 Digital Observatory
The SERP data in our initial data release includes the first search result page from Google and Bing for a variety of COVID-19-related terms gathered from Google Trends and Google and Bing's autocomplete "search suggestions." Specifically, using a set of six "stem keywords" about COVID-19 and online communities ("coronavirus", "coronavirus reddit", "coronavirus wiki", "covid 19", "covid 19 reddit", and "covid 19 wiki"), we collect related keywords from Google Trends (using open source software[https://www.npmjs.com/package/google-trends-api]) and autocomplete suggestions from Google and Bing (using open source software[https://github.com/gitronald/suggests]). In addition to COVID-19 keywords, we also collect SERP data for the top daily trending queries. Currently, the SERP data collection process does not specify a location in its searches; consequently, the default location used is that of our machine, on Northwestern University's Evanston campus. We are working on collecting SERP data with locations specified beyond the Chicago area (i.e., other 'localized' content).
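The exact collection pipeline is not shown here, but the first-page search URLs for these keywords can be sketched as follows. This is a minimal illustration, assuming the standard <code>?q=</code> query form for Google and Bing visible in the folder names of the released data; the real pipeline may build its requests differently.

```python
from urllib.parse import quote_plus

# The six stem keywords listed above.
STEM_KEYWORDS = [
    "coronavirus", "coronavirus reddit", "coronavirus wiki",
    "covid 19", "covid 19 reddit", "covid 19 wiki",
]

def serp_urls(keyword: str) -> dict:
    """Build first-page search URLs for Google and Bing for one keyword.

    Hypothetical sketch: mirrors the URL shape seen in the released
    folder names, not the actual collection code.
    """
    q = quote_plus(keyword)
    return {
        "google": f"https://www.google.com/search?q={q}",
        "bing": f"https://www.bing.com/search?q={q}",
    }

print(serp_urls("covid 19 wiki")["google"])
# https://www.google.com/search?q=covid+19+wiki
```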


The SERP data is released as a series of compressed archives (7z), one archive per day, following the naming convention <code>covid_search_data-[YYYYMMDD].7z</code>; extracting them requires a 7z-capable tool such as 7-Zip. Within these compressed archives, there is a folder for each device emulated in the data collection (currently two: Chrome on Windows and iPhone X) which contains all of the respective SERP data. Within each device subdirectory, the SERP data itself is organized into folders titled by the URL of the search query (e.g. <code>'https---www.google.com-search?q=Krispy Kreme'</code>), and each SERP folder contains three data files:
* a PNG screenshot of the full first page of results,  
* an mhtml "snapshot" (https://github.com/puppeteer/puppeteer/issues/3658),  
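The naming scheme above can be sketched in a few lines. Note the folder-name function is inferred from the single <code>'https---www.google.com-search?q=Krispy Kreme'</code> example, where <code>:</code> and <code>/</code> in the query URL appear replaced by <code>-</code>; treat it as an assumption, not the release's actual code.

```python
from datetime import date

def archive_name(day: date) -> str:
    """Daily archive filename following covid_search_data-[YYYYMMDD].7z."""
    return f"covid_search_data-{day:%Y%m%d}.7z"

def folder_name(search_url: str) -> str:
    """Approximate the SERP folder naming: ':' and '/' replaced by '-'.

    Hypothetical: inferred from the one example folder name shown in
    the documentation, not from the collection pipeline itself.
    """
    return search_url.replace(":", "-").replace("/", "-")

print(archive_name(date(2020, 4, 1)))
# covid_search_data-20200401.7z
print(folder_name("https://www.google.com/search?q=Krispy Kreme"))
# https---www.google.com-search?q=Krispy Kreme
```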
====Wikipedia data====


Our initial release provides exhaustive edit and [[Wikipedia:Pageview_statistics|pageview data]] for the list of English Wikipedia articles covered by [[Wikipedia:Wikiproject COVID-19|WikiProject COVID-19]]. Please note that the edit JSON data includes the full text of every revision made to articles in [[:wikipedia:Wikipedia:WikiProject COVID-19|English Wikipedia's WikiProject COVID-19]]. These files are highly compressed and expand to more than 20GB of data. Depending on the computer you use, it may not be possible to load them into memory all at once for analysis.
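Because the expanded revision data can exceed available memory, streaming it record by record is safer than loading it whole. The sketch below assumes one JSON object per line (a common layout for large dumps; check the actual release files before relying on this).

```python
import json
from typing import Iterator

def iter_revisions(path: str) -> Iterator[dict]:
    """Yield revisions one at a time instead of loading the whole file.

    Assumption: the decompressed file holds one JSON object per line.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip any blank lines
                yield json.loads(line)

# Example: count revisions without holding them all in memory.
# n = sum(1 for _ in iter_revisions("revisions-20200401.json"))
```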


Each dataset is updated daily, and we are working to add historical data from all other language Wikipedia editions.
* [https://www.archiveteam.org/index.php?title=Coronavirus Archive Team listing of archive sites related to the Coronavirus]
* [https://github.com/nychealth/coronavirus-data Repository with data on COVID-19 from the NYC Department of Health and Mental Hygiene (DOHMH)]
* The Citizens and Technology Lab is [https://covid-algotracker.citizensandtech.org/ tracking COVID-related posts on the Reddit front page] (Cornell and J. Nathan Matias).
* [https://covid-data.wmflabs.org/ General statistics about COVID-19 editing in Wikipedia projects] (Diego Saez from the Wikimedia Foundation)
* [https://github.com/nytimes/covid-19-data The New York Times data files] with cumulative counts of coronavirus cases in the United States, at the state and county level, over time.