CommunityData:Hyak Datasets

This page documents the datasets available on Hyak and how to use them.

Datasets

Reddit

We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up-to-date through January 2019 (for comments) and August 2019 (for submissions). We have copies of the dumps collected and published by pushshift, as well as tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets will be faster and less work for you. The tabular datasets in particular are quite fast: thanks to the parquet file format, it is possible to pull subsets of the data (e.g., the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.

Code for this project is located in the (currently private) cdsc_reddit git repository on code.communitydata.science.

For computational efficiency it is best to parse the dumps as little as possible, so if it is possible for you to work with the tabular datasets, please do so. The tabular datasets currently have the variables that most projects will want to use, but there are many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift json that isn't in the parquet tables, don't fret! It will not be too much work to add it. Reach out to Nate.

The parquet datasets are located at

/gscratch/comdata/output/reddit_submissions_by_author.parquet

/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet

/gscratch/comdata/output/reddit_comments_by_author.parquet

/gscratch/comdata/output/reddit_comments_by_subreddit.parquet

"by_author" and "by_subreddit" refer to how the data is sorted. Sorting the data makes filtering by the sorted column fast and determines the order that the data will be read. The by_author datasets are sorted by author and then by CreatedAt. The by_subreddit datasets are sorted by subreddit and then by author. Sorting by author makes it possible to stream the dataset one user at a time to build user-level variables without resorting to Spark.

Reading Reddit parquet datasets

The recommended way to pull data from parquet on Hyak is to use pyarrow, which makes it relatively easy to filter the data and load it into Pandas. The main alternative is Spark, which is a more complex and less efficient system, but can read and write parquet and is useful for working with data that is too large to fit in memory.

This example loads all comments to two Seattle-related subreddits. You should try it out!

import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as ds
import pathlib

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file
dataset = ds.dataset(pathlib.Path('/gscratch/comdata/output/reddit_comments_by_subreddit.parquet/'), format='parquet', partitioning='hive')

# let's get all the comments to two subreddits:
subreddits_to_pull = ['seattle','seattlewa']

# a pyarrow Table is a low-level columnar data structure; to_table reads the filtered data into memory in Arrow format
table = dataset.to_table(filter = ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','link_id','parent_id','CreatedAt','author','ups','downs','score','subreddit_id','stickied','is_submitter','body'])

# Since data from just these 2 subreddits fits in memory we can just turn our table into a pandas dataframe.
df = table.to_pandas()

# We should save this smaller dataset so we don't have to wait 15 min to pull from parquet next time.
df.to_csv("mydataset.csv")

Parquet is a column-oriented format, which means that each column can be read independently of the others. Compared to unstructured formats, this confers two key advantages that can make it very fast. First, the filter runs only on the subreddit column to figure out which rows need to be read for the other fields. Second, only the columns selected in columns= need to be read at all. This is how Arrow can pull data from parquet so quickly.
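As an illustration of column pruning, here is a sketch that pulls just the subreddit column for the entire comments dataset and counts comments per subreddit (this assumes a single column of the data fits in memory on your node):

import pyarrow.dataset as ds
import pathlib

dataset = ds.dataset(pathlib.Path('/gscratch/comdata/output/reddit_comments_by_subreddit.parquet/'), format='parquet', partitioning='hive')

# only the subreddit column is read from disk; every other column is skipped entirely
table = dataset.to_table(columns=['subreddit'])

# count comments per subreddit in pandas
comment_counts = table.to_pandas()['subreddit'].value_counts()
print(comment_counts.head(20))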

Streaming parquet datasets

If the data you want to pull exceeds available memory, you have a few options.

One option is to use Spark, which is likely a good choice if you want to do large and complex joins or group-bys. Downsides of Spark include stability and complexity: Spark is capable, can be fast, and can scale to many nodes, but it can also crash and be complex to program.
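For reference, here is a minimal Spark sketch that computes comments per subreddit over the full dataset. It assumes you already have a working Spark session on Hyak (see CommunityData:Hyak Spark for setup), and the output path is just a placeholder:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Spark reads the parquet dataset lazily and only materializes what the query needs
comments = spark.read.parquet("/gscratch/comdata/output/reddit_comments_by_subreddit.parquet")

# a group-by over the full dataset, distributed across the cluster
comment_counts = comments.groupBy("subreddit").count()

# write the (much smaller) result back out as parquet; choose your own output path
comment_counts.write.parquet("subreddit_comment_counts.parquet", mode="overwrite")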

An alternative is to stream data from parquet using pyarrow. Pyarrow can load a large dataset one chunk at a time, and you can turn these chunks into a stream of rows. The stream of rows will have the same order as the data on disk. The by_subreddit parquet datasets are sorted first by subreddit and then by author. The by_author datasets are sorted by author, then by CreatedAt.
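Here is a minimal sketch of streaming the by_author comments one author at a time. It assumes pyarrow's Dataset.to_batches is available in the installed version, and the user-level variable computed (a comment count per author) is just a stand-in for whatever you actually want to build:

import itertools
import pathlib
import pyarrow.dataset as ds

dataset = ds.dataset(pathlib.Path('/gscratch/comdata/output/reddit_comments_by_author.parquet/'), format='parquet', partitioning='hive')

# read the dataset one chunk (RecordBatch) at a time, keeping only the columns we need
batches = dataset.to_batches(columns=['author','subreddit','CreatedAt'])

def rows(batches):
    # flatten the chunks into a single stream of rows, in on-disk order
    for batch in batches:
        for row in batch.to_pandas().itertuples(index=False):
            yield row

# because the data are sorted by author, each author's comments are contiguous,
# so we can build user-level variables one author at a time without loading everything
for author, comments in itertools.groupby(rows(batches), key=lambda r: r.author):
    n_comments = sum(1 for _ in comments)
    # ... compute and save your user-level variables here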