CommunityData:Hyak Datasets


This page is for documenting datasets available on Hyak and how to use them.

Datasets

Reddit

We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up to date through January 2019 (for comments) and August 2019 (for submissions). We have copies of the dumps collected and published by pushshift and tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets will be faster and less work for you. The tabular datasets in particular are quite fast: thanks to the parquet file format, it is possible to pull subsets of the data (e.g. the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.

Code for this project is located in the (currently private) cdsc_reddit git repository on code.communitydata.science.

For computational efficiency it is best to parse the dumps as little as possible, so if you can work with the tabular datasets, please do. The tabular datasets currently have the variables that most projects will want to use, but the dumps contain many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift json that isn't in the parquet tables, don't fret! It will not be too much work to add it; reach out to Nate.

The parquet datasets are located at

/gscratch/comdata/output/reddit_submissions_by_author.parquet

/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet

/gscratch/comdata/output/reddit_comments_by_author.parquet

/gscratch/comdata/output/reddit_comments_by_subreddit.parquet

"by_author" and "by_subreddit" refer to how the data is sorted. Sorting the data makes filtering by the sorted column fast and determines the order that the data will be read. The by_author datasets are sorted by author and then by CreatedAt. The by_subreddit datasets are sorted by subreddit and then by author. Sorting by author makes it possible to stream the dataset one user at a time to build user-level variables without resorting to Spark.

Reading Reddit parquet datasets

The recommended way to pull data from parquet on Hyak is to use pyarrow, which makes it relatively easy to filter the data and load it into Pandas. The main alternative is Spark, which is a more complex and less efficient system, but can read and write parquet and is useful for working with data that is too large to fit in memory.

This example loads all submissions to two Seattle-related subreddits. You should try it out!

import pyarrow.dataset as ds

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
#dataset = ds.dataset(pathlib.Path('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/'), format='parquet', partitioning='hive')
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet', partitioning='hive')

# let's get all the submissions to two subreddits:
subreddits_to_pull = ['seattle','seattlewa']

# a table is a low-level structured data format.  This line pulls data into memory. Setting metadata_n_threads > 1 gives a little speed boost.
table = dataset.to_table(filter = ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','CreatedAt','author','ups','downs','score','subreddit_id','stickied','title','url','is_self','selftext'])

# Since data from just these 2 subreddits fits in memory we can just turn our table into a pandas dataframe.
df = table.to_pandas()

# We should save this smaller dataset so we don't have to wait 15 min to pull from parquet next time.
df.to_csv("mydataset.csv")
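
Parquet is also a fine format for the saved subset; unlike csv it preserves column types and reloads quickly. A one-line variation (the filename is just an example):

# pandas writes parquet via pyarrow under the hood
df.to_parquet("mydataset.parquet")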

Parquet is a column-oriented format, which means that each column can be read independently of the others. This confers two key advantages over unstructured formats. First, the filter runs only on the subreddit column to determine which rows need to be read for the other fields. Second, only the columns selected in columns= need to be read at all. This is why arrow can pull data from parquet so quickly.
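
You can see the second advantage in isolation by projecting a single column; this minimal sketch reuses the filter from above but reads only the author column from disk:

import pyarrow.dataset as ds

dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet', partitioning='hive')

# same filter as before, but only one column is touched on disk
authors = dataset.to_table(filter = ds.field('subreddit').isin(['seattle','seattlewa']), columns=['author'])
print(authors.num_rows)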

Streaming parquet datasets

If the data you want to pull exceeds available memory, you have a few options.

One option is to just use Spark, which is likely a good choice if you want to do large and complex joins or group-bys. The downsides of Spark are stability and complexity: it is capable, can be fast, and can scale to many nodes, but it can also crash and be complex to program.

An alternative is to stream data from parquet using pyarrow. Pyarrow can load a large dataset one chunk at a time, and you can turn these chunks into a stream of rows. Note that the stream of rows will not have the same order as the data on disk.

import pyarrow.dataset as ds
from itertools import chain, groupby

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_author.parquet', format='parquet', partitioning='hive')

# let's get all the submissions to two subreddits:
subreddits_to_pull = ['seattlewa','seattle']

# instead of loading the data into a pandas dataframe all at once we can stream it. This lets us start working with it while it is read.
scan_tasks = dataset.scan(filter = ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','CreatedAt','author','ups','downs','score','subreddit_id','stickied','title','url','is_self','selftext'])

# simple function to execute scan tasks and create a stream of rows (pandas itertuples)
def execute_scan_task(st):
    # an executed scan task yields an iterator of record_batches
    def unroll_record_batch(rb):
        df = rb.to_pandas()
        return df.itertuples()

    for rb in st.execute():
        yield unroll_record_batch(rb)


# now we just need to flatten twice (scan tasks -> record batches -> rows) and we have our iterator
row_iter = chain.from_iterable(chain.from_iterable(map(execute_scan_task, scan_tasks)))

# now we can use python's groupby function to read one author at a time
# note that the same author can appear more than once since the record batches may not be in the correct order.
author_submissions = groupby(row_iter, lambda row: row.author)
for auth, posts in author_submissions:
    print(f"{auth} has {len(list(posts))} posts")