CommunityData:Hyak Datasets

This page is for documenting datasets available on Hyak and how to use them.

== Datasets ==

== Reddit ==

We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up to date through January 2019 (for comments) and August 2019 (for submissions). We have copies of dumps collected and published by [https://Pushshift.io pushshift] and tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets will be faster and less work for you. The tabular datasets in particular are quite fast thanks to the parquet file format, which makes it possible to pull subsets of the data (e.g. the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.

Code for this project is located in the [https://code.communitydata.science/cdsc_reddit.git cdsc_reddit] git repository. See [[CommunityData:git]] for help getting started with our git setup.

For computational efficiency it is best to parse the dumps as little as possible, so if you can work with the tabular datasets, please do. The tabular datasets currently have the variables that most projects will want to use, but there are many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift JSON that isn't in the parquet tables, don't fret! It will not be too much work to add it. Reach out to [[User:Groceryheist|Nate]].

The parquet datasets are located at

<code>
/gscratch/comdata/output/reddit_submissions_by_author.parquet

/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet

/gscratch/comdata/output/reddit_comments_by_author.parquet

/gscratch/comdata/output/reddit_comments_by_subreddit.parquet
</code>
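
If you want to check which variables a table currently has, you can inspect its schema with pyarrow before pulling any data. The snippet below is a quick sketch (not part of the original examples); it only reads parquet metadata, so it runs in seconds.

<syntaxhighlight lang='python'>
import pyarrow.dataset as ds

# opening a dataset only reads metadata, it does not load any rows
dataset = ds.dataset('/gscratch/comdata/output/reddit_comments_by_subreddit.parquet/', format='parquet')

# print the column names and types available in the table
print(dataset.schema)
</syntaxhighlight>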

"by_author" and "by_subreddit" refer to how the data is partitioned and sorted. This has important performance implications because filtering by partition column is fast. Spark can also make good use of the sorting to make joins and groupbys faster. These datasets are also designed to stream one user/author or subreddit at a time to support building subreddit or author level variables. All of the datasets have CreatedAt as a secondary sort so posts and comments by an author or subreddit are read in chronological order.

=== Reading Reddit parquet datasets ===

The recommended way to pull data from parquet on Hyak is to use pyarrow, which makes it relatively easy to filter the data and load it into Pandas. The main alternative is Spark, which is a more complex and less efficient system, but can read and write parquet and is useful for working with data that is too large to fit in memory.

This example loads all submissions to two Seattle-related subreddits. You should try it out!

<syntaxhighlight lang='python'>
import pyarrow.dataset as ds

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet')

# let's get all the submissions to two subreddits:
subreddits_to_pull = ['seattle','seattlewa']

# a table is a low-level structured data format. This line pulls data into memory. Setting metadata_n_threads > 1 gives a little speed boost.
table = dataset.to_table(filter = ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','CreatedAt','author','ups','downs','score','subreddit_id','stickied','title','url','is_self','selftext'])

# Since data from just these 2 subreddits fits in memory we can just turn our table into a pandas dataframe.
df = table.to_pandas()
</syntaxhighlight>

Parquet is a column-oriented format, which means that each column can be read independently of the others. This confers two key advantages over unstructured formats and can make reads very fast. First, the filter runs only on the <code>subreddit</code> column to figure out which rows need to be read for the other fields. Second, only the columns selected in <code>columns=</code> need to be read at all. This is how arrow can pull data from parquet so quickly.
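
To see the effect of column projection, you can pull just a single column. The snippet below is a small sketch along the same lines as the example above; it reads only the <code>subreddit</code> column to count the matching rows.

<syntaxhighlight lang='python'>
import pyarrow.dataset as ds

dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet')

# reading one column is much cheaper than reading the whole table
table = dataset.to_table(filter = ds.field('subreddit').isin(['seattle','seattlewa']),
                         columns=['subreddit'])
print(table.num_rows)
</syntaxhighlight>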

=== Streaming parquet datasets ===

If the data you want to pull exceed available memory, you have a few options.

One option is to just use [[CommunityData:Hyak_Spark|Spark]], which is likely a good choice if you want to do large and complex joins or group-bys. Downsides of Spark include issues of stability and complexity: Spark is capable, can be fast, and can scale to many nodes, but it can also crash and be complex to program.
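
If you go the Spark route, reading the parquet datasets is straightforward. The snippet below is a minimal sketch, assuming you already have a working pyspark session on Hyak (see the Spark page linked above); the <code>subreddit</code> column comes from the datasets, while the alias <code>n_comments</code> is just illustrative.

<syntaxhighlight lang='python'>
from pyspark.sql import SparkSession
from pyspark.sql import functions as f

spark = SparkSession.builder.getOrCreate()

# Spark reads the parquet dataset lazily and distributes the group-by across executors
comments = spark.read.parquet("/gscratch/comdata/output/reddit_comments_by_subreddit.parquet")
counts = comments.groupBy("subreddit").agg(f.count("*").alias("n_comments"))
counts.orderBy(f.desc("n_comments")).show(10)
</syntaxhighlight>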

An alternative is to stream data from parquet using pyarrow. Pyarrow can load a large dataset one chunk at a time, and you can turn these chunks into a stream of rows. The stream of rows will have the same order as the data on disk. In the example below the datasets are partitioned by author and the partitions are sorted, so submissions can be read one author at a time. This is convenient as a starting point for building author-level variables.

<syntaxhighlight lang='python'>
import pyarrow.dataset as ds
from itertools import groupby

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_author.parquet', format='parquet')

# let's get all the submissions to two subreddits:
subreddits_to_pull = ['seattlewa','seattle']

# instead of loading the data into a pandas dataframe all at once we can stream it.
scan_tasks = dataset.scan(filter = ds.field('subreddit').isin(subreddits_to_pull), columns=['id','subreddit','CreatedAt','author','ups','downs','score','subreddit_id','stickied','title','url','is_self','selftext'])

# simple function to execute scan tasks and generate rows
def iterate_rows(scan_tasks):
    for st in scan_tasks:
        for rb in st.execute():
            df = rb.to_pandas()
            for t in df.itertuples():
                yield t

row_iter = iterate_rows(scan_tasks)

# now we can use python's groupby function to read one author at a time.
# because the data is partitioned and sorted by author, each author's rows arrive
# together, so each author should form exactly one group. The check below verifies this.
author_submissions = groupby(row_iter, lambda row: row.author)

count_dict = {}

for auth, posts in author_submissions:
    if auth in count_dict:
        count_dict[auth] = count_dict[auth] + 1
    else:
        count_dict[auth] = 1

# since it's partitioned and sorted by author, we get one group for each author
any([ v != 1 for k,v in count_dict.items()])  # should evaluate to False
</syntaxhighlight>
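
Once the stream is grouped by author, author-level variables fall out naturally. The snippet below is a sketch that continues from the example above (it reuses <code>ds</code>, <code>dataset</code>, and the <code>iterate_rows()</code> helper); <code>first_submission</code> is just an illustrative variable name.

<syntaxhighlight lang='python'>
from itertools import groupby

# start a fresh scan, since the one above has already been consumed
scan_tasks = dataset.scan(filter = ds.field('subreddit').isin(['seattlewa','seattle']),
                          columns=['author','subreddit','CreatedAt'])

first_submission = {}
for auth, posts in groupby(iterate_rows(scan_tasks), lambda row: row.author):
    # rows within an author are sorted by CreatedAt, so the first row is the earliest
    first_submission[auth] = next(posts).CreatedAt
</syntaxhighlight>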

=== Install Arrow for R ===

If you want to use Arrow in R and your R on Hyak doesn't already have Arrow installed, follow these steps. On computers not running CentOS you'll probably be fine just running <code>install.packages("arrow")</code>. These instructions are derived from this debugging session on the [https://issues.apache.org/jira/browse/ARROW-9303 Arrow bug tracker].

First you need to load a modern cmake and set some environment variables.

<syntaxhighlight lang='bash'>
module load cmake/3.11.2

export ARROW_WITH_LZ4=ON; export ARROW_WITH_ZSTD=ON; export ARROW_WITH_BZ2=ON; export ARROW_WITH_GZIP=ON; export ARROW_WITH_LZ4_FRAME=ON; export ARROW_WITH_SNAPPY=ON; export ARROW_WITH_LZO=ON; export ARROW_WITH_BROTLI=ON;
export LIBARROW_MINIMAL=FALSE
</syntaxhighlight>


Now, start R and '''download''' (not install!) the <code>arrow</code> package.

<syntaxhighlight lang='R'>
download.packages("arrow",destdir='.')
</syntaxhighlight>

Now, you need to unpack <code>arrow_0.17.1.tar.gz</code> and edit <code>arrow/inst/build_arrow_static.sh</code>.

<syntaxhighlight lang='bash'>
tar xvzf arrow_0.17.1.tar.gz
nano arrow/inst/build_arrow_static.sh
</syntaxhighlight>

In <code>build_arrow_static.sh</code>, change the <code>-DARROW_DEPENDENCY_SOURCE</code> flag so that it is set to <code>BUNDLED</code>.

<syntaxhighlight lang='bash'>
# build_arrow_static.sh
...
-DARROW_DEPENDENCY_SOURCE=BUNDLED
...
</syntaxhighlight>

Finally, go back into R and finish installing <code>arrow</code>.

<syntaxhighlight lang='R'>
install.packages("arrow",repos=NULL)
</syntaxhighlight>

== Wikia / Fandom ==

[[CommunityData:Wikia_data]] contains some information about where this data comes from.

Locations:

<code>
/gscratch/comdata/outpub/wiki*
</code>