CommunityData:Hyak Datasets
This page documents the datasets available on Hyak and how to use them.
Datasets

Reddit
We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up to date through January 2019 (for comments) and August 2019 (for submissions). We have copies of the dumps collected and published by pushshift and tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets is faster and less work. The tabular datasets in particular are quite fast thanks to the parquet file format, which makes it possible to pull subsets of the data (e.g., the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.
For computational efficiency, it is best to parse the dumps as little as possible, so please work with the tabular datasets if you can. The tabular datasets currently have the variables that most projects will want to use, but the dumps contain many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift JSON that isn't in the parquet tables, don't fret! It will not be too much work to add it. Reach out to Nate.
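To see which variables the parquet tables already contain, you can print the dataset schema with pyarrow (introduced in the next section). This is a minimal sketch; the path is the comments dataset used in the example below.

import pyarrow.dataset as ds

# Point pyarrow at the comments dataset and print the available columns and their types.
dataset = ds.dataset('/gscratch/comdata/output/reddit_comments_by_subreddit.parquet/', format='parquet', partitioning='hive')
print(dataset.schema)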
Reading Reddit parquet datasets
The recommended way to pull data from parquet on Hyak is to use pyarrow, which makes it relatively easy to filter the data and load it into Pandas. The main alternative is Spark, which is a more complex system, but can read and write parquet and is useful for working with data that is too large to fit in memory.
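If your subset is too large to load into pandas, a Spark job can read the same parquet directory and do the filtering and aggregation without pulling everything into memory. This is a minimal sketch, assuming PySpark is available on the node; the path is the same comments dataset used in the pyarrow example below.

from pyspark.sql import SparkSession
from pyspark.sql import functions as f

spark = SparkSession.builder.getOrCreate()

# Spark discovers the hive-style partitions in the parquet directory.
comments = spark.read.parquet('/gscratch/comdata/output/reddit_comments_by_subreddit.parquet/')

# Filter to two subreddits and count comments per subreddit without loading the full dataset.
seattle = comments.filter(f.col('subreddit').isin(['seattle', 'seattlewa']))
seattle.groupBy('subreddit').count().show()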
This example loads all comments to two Seattle-related subreddits. You should try it out!
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as ds
import pathlib
# A pyarrow dataset abstracts reading, writing, or filtering a parquet file
dataset = ds.dataset(pathlib.Path('/gscratch/comdata/output/reddit_comments_by_subreddit.parquet/'), format='parquet', partitioning='hive')
# let's get all the comments to two subreddits:
subreddits_to_pull = ['seattle','seattlewa']
# to_table() scans the dataset and loads only the filtered rows and selected columns into memory as an Arrow table.
table = dataset.to_table(filter = ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','link_id','parent_id','CreatedAt','author','ups','downs','score','subreddit_id','stickied','is_submitter','body'])
# Since the data from just these two subreddits fits in memory, we can convert the table to a pandas dataframe.
df = table.to_pandas()
# We should save this smaller dataset so we don't have to wait 15 min to pull from parquet next time.
df.to_csv("mydataset.csv")
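If your filter matches more data than fits in memory, pyarrow can also stream the scan in record batches instead of materializing one large table. This is a minimal sketch, assuming the same dataset and subreddits_to_pull objects defined above.

# Iterate over record batches instead of building one big table.
n_comments = 0
for batch in dataset.to_batches(filter=ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','author']):
    # Each batch is a pyarrow.RecordBatch; process it, then let it be garbage collected.
    n_comments += batch.num_rows

print(n_comments)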