CommunityData:Hyak Datasets


This page is for documenting datasets available on Hyak and how to use them.

Datasets

Reddit

We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up to date through January 2019 (for comments) and August 2019 (for submissions). We have copies of the dumps collected and published by pushshift, as well as tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets will be faster and less work for you. The tabular datasets in particular are quite fast: thanks to the parquet file format, it is possible to pull subsets of the data (e.g. the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.

For computational efficiency it is best to parse the dumps as little as possible, so if you can work with the tabular datasets, please do so. The tabular datasets currently have the variables that most projects will want to use, but the dumps contain many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift JSON that isn't in the parquet tables, don't fret! It will not be too much work to add it. Reach out to Nate.

The parquet datasets are located at:

<code>
/gscratch/comdata/output/reddit_submissions_by_author.parquet
/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet
/gscratch/comdata/output/reddit_comments_by_author.parquet
/gscratch/comdata/output/reddit_comments_by_subreddit.parquet
</code>

"`by_author`" and "`by_subreddit`" refer to how the data is sorted. Sorting the data makes filtering by the sorted column fast and determines the order that the data will be read. Sorting by author makes it possible to stream the dataset one user at a time to build user-level variables without resorting to Spark.