==Resources==

The digital observatory data, code, and other resources live in a few locations, all linked from this page. More details on the different datasets and sources follow below. Our initial releases should provide a good starting point for investigating social computing and social media content related to COVID-19. We're currently releasing three types of material: code, keywords, and data.

===Code===

Almost all of the code used to produce the data, along with scripts for getting started with analysis, lives in our [https://github.com/CommunityDataScienceCollective/COVID-19_Digital_Observatory GitHub repository]. If you want to get involved or start using our work, please clone the repository! You'll find example analysis scripts that walk through downloading the data directly into a tool like R and producing some minimal analyses to help you get started. The code used to generate the search engine results pages (SERP) data comes from Nick Vincent's [https://github.com/nickmvincent/LinkCoordMin SERP scraping project].

====Keywords====

We currently use and provide three different types of keywords and search terms:
* Article names/topics from Wikipedia's [[:wikipedia:Wikipedia:WikiProject_COVID-19|WikiProject COVID-19]]
* Wikidata entities generated via the "Main items" described by Wikidata's [https://www.wikidata.org/wiki/Wikidata:WikiProject_COVID-19 WikiProject COVID-19]
* Top 25 daily trending search terms from Google and Bing

We also provide translations of keywords into many languages by collecting translations of labels from Wikidata items related to the COVID-19 pandemic. We do this by passing keywords and trending Google "related searches" to the Wikidata search API; the matching Wikidata items carry labels and aliases in many languages. We hope this provides a useful starting point for discovering pandemic-related social information in languages beyond English. Code for this part of the project, including examples for loading the data in Python and R, is under [https://github.com/CommunityDataScienceCollective/COVID-19_Digital_Observatory <code>keywords</code>] in our git repository. Similarly, the resulting data is under [https://covid19.communitydata.science/datasets/keywords/csv/ <code>keywords/csv</code>] on our server.
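As a rough illustration of this kind of lookup (not the project's actual pipeline, which lives under <code>keywords</code> in the git repository), the following Python sketch asks the Wikidata API for the top item matching a keyword and then pulls that item's labels in every available language:

<syntaxhighlight lang="python">
import requests

API = "https://www.wikidata.org/w/api.php"
HEADERS = {"User-Agent": "covid19-observatory-example/0.1 (demo script)"}

def wikidata_labels(keyword):
    """Return {language: label} for the top Wikidata item matching a keyword."""
    # 1. Search Wikidata for items matching the keyword.
    search = requests.get(API, params={
        "action": "wbsearchentities", "search": keyword,
        "language": "en", "type": "item", "format": "json",
    }, headers=HEADERS).json()
    if not search.get("search"):
        return {}
    item_id = search["search"][0]["id"]  # take the top-ranked match
    # 2. Fetch that item's labels (and aliases, if you want them) in all languages.
    entity = requests.get(API, params={
        "action": "wbgetentities", "ids": item_id,
        "props": "labels|aliases", "format": "json",
    }, headers=HEADERS).json()["entities"][item_id]
    return {lang: lab["value"] for lang, lab in entity.get("labels", {}).items()}

if __name__ == "__main__":
    labels = wikidata_labels("coronavirus")
    print(len(labels), "languages; es:", labels.get("es"), "/ de:", labels.get("de"))
</syntaxhighlight>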
===Data===

The best way to find the data is to visit https://covid19.communitydata.science/datasets/.
* The <code>search_results</code> directory contains compressed raw data generated by Nick Vincent's [https://github.com/nickmvincent/LinkCoordMin SERP scraping project].
* The <code>wikipedia</code> directory has view counts and revision histories for COVID-19-related Wikipedia articles in <code>.json</code> and <code>.tsv</code> format.
* The <code>keywords</code> directory has <code>.csv</code> files with COVID-19-related keywords translated into many languages, along with the associated Wikidata item identifiers.
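As a minimal sketch of loading one of the keyword files in Python (assuming <code>pandas</code> is installed; the file name below is only illustrative, so browse the directory listing for the files that actually exist):

<syntaxhighlight lang="python">
import pandas as pd

# Illustrative file name; check https://covid19.communitydata.science/datasets/keywords/csv/
# for the CSV files that are actually published there.
url = "https://covid19.communitydata.science/datasets/keywords/csv/keywords-2020-04-01.csv"

keywords = pd.read_csv(url)          # pandas can read directly from a URL
print(keywords.columns.tolist())     # inspect the column names before relying on them
print(keywords.head())
</syntaxhighlight>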
====Search Engine Results Pages (SERP) Data====

The SERP data in our initial data release includes the first search results page from Google and Bing for a variety of COVID-19-related terms gathered from Google Trends and from Google's and Bing's autocomplete "search suggestions."

Specifically, starting from a set of six "stem keywords" about COVID-19 and online communities ("coronavirus", "coronavirus reddit", "coronavirus wiki", "covid 19", "covid 19 reddit", and "covid 19 wiki"), we collect related keywords from Google Trends (using the open-source [https://www.npmjs.com/package/google-trends-api google-trends-api] package) and autocomplete suggestions from Google and Bing (using the open-source [https://github.com/gitronald/suggests suggests] package). In addition to COVID-19 keywords, we also collect SERP data for the top daily trending queries.

Currently, the SERP data collection process does not specify a location in its searches, so the default location is that of our collection machine on Northwestern University's Evanston campus. We are working on collecting SERP data with locations specified beyond the Chicago area (i.e., content localized to other regions).

The SERP data is released as a series of compressed 7z archives, one per day, following the naming convention <code>covid_search_data-[YYYYMMDD].7z</code>. You will need a 7z extractor to open them; on Windows, "7z Opener" works well. Within each archive there is a folder for each device emulated in the data collection (currently two: Chrome on Windows and iPhone X) containing all of the corresponding SERP data. Within each device subdirectory, the SERP data is organized into folders named after the URL of the search query (e.g. <code>'https---www.google.com-search?q=Krispy Kreme'</code>), and each SERP folder contains three data files:
* a PNG screenshot of the full first page of results,
* an MHTML "snapshot" of the page (see [https://github.com/puppeteer/puppeteer/issues/3658 this puppeteer issue] for background),
* and a JSON file with a variety of metadata (e.g. the date and the device emulated) and a list of every link (<code>&lt;a&gt;</code>) element on the page with its coordinates (top, left, bottom, right) in pixels.
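The following Python sketch shows one way to unpack a daily archive and read the metadata files. It assumes the <code>py7zr</code> package for extraction (any standalone 7z tool works just as well) and makes no assumptions about the JSON field names beyond what is listed above, so inspect a file before building on it:

<syntaxhighlight lang="python">
import json
import pathlib

import py7zr  # pip install py7zr; any standalone 7z extractor works just as well

archive = "covid_search_data-20200401.7z"   # one day's archive (example date)
outdir = pathlib.Path("serp_data")

# Extract the daily archive into a working directory.
with py7zr.SevenZipFile(archive, mode="r") as z:
    z.extractall(path=outdir)

# Walk the device -> query folders and load each JSON metadata file.
for meta_path in sorted(outdir.rglob("*.json")):
    with open(meta_path, encoding="utf-8") as f:
        meta = json.load(f)
    # The query folder name encodes the search URL; the JSON holds the metadata
    # and link coordinates described above.
    print(meta_path.parent.name, "->", sorted(meta.keys()))
</syntaxhighlight>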
====Wikipedia data====

Our initial release provides exhaustive edit and [[:wikipedia:Wikipedia:Pageview statistics|pageview data]] for the list of English Wikipedia articles covered by [[:wikipedia:Wikipedia:WikiProject COVID-19|WikiProject COVID-19]]. Please note that the JSON edit data includes the full text of every revision made to articles in the WikiProject. These files are highly compressed and expand to between 20GB and 200GB of data per day, so depending on the computer you use, you may not be able to load them into memory all at once for analysis. Both datasets are updated daily, and we are working to add historical data from all other language editions of Wikipedia.
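Because the expanded revision data can be far larger than memory, it is usually safer to stream it one revision at a time. The sketch below assumes a bzip2-compressed file with one JSON object per revision per line; adjust the file name, the <code>open</code> call, and the parsing to match the actual layout of the file you download:

<syntaxhighlight lang="python">
import bz2
import json

# Illustrative file name; substitute the revision file you actually downloaded.
path = "enwiki_covid19_revisions-20200401.json.bz2"

n_revisions = 0
total_chars = 0
with bz2.open(path, mode="rt", encoding="utf-8") as f:
    for line in f:                      # stream one revision at a time
        rev = json.loads(line)
        n_revisions += 1
        total_chars += len(rev.get("text", ""))  # the "text" field name is an assumption
        # ... per-revision processing goes here ...

print(n_revisions, "revisions;", total_chars, "characters of revision text")
</syntaxhighlight>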