CommunityData:Hyak Datasets
We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up to date through January 2019 (for comments) and August 2019 (for submissions). We have copies of dumps collected and published by [https://Pushshift.io pushshift] and tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets will be faster and less work for you. The tabular datasets in particular are quite fast thanks to the parquet file format, which makes it possible to pull subsets of the data (e.g. the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.


Code for this project is located in the [https://code.communitydata.science/cdsc_reddit.git cdsc_reddit] git repository. See [[CommunityData:git]] for help getting started with our git setup.


For computational efficiency it is best to parse the dumps as little as possible, so if it is possible for you to work with the tabular datasets, please do so. The tabular datasets currently have the variables that most projects will want to use, but there are many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift JSON that isn't in the parquet tables, don't fret! It will not be too much work to add it. Reach out to [[User:Groceryheist|Nate]].
</code>


"<code>by_author</code>" and "<code>by_subreddit</code>" refer to how the data is partitioned and sorted. This has important performance implications because filtering by partition column is fast. Spark can also make good use of the sorting to make joins and groupbys faster. These datasets are also designed to stream one user/author or subreddit at a time to support building subreddit or author level variables. All of the datasets have <code>CreatedAt</code> as a secondary sort so posts and comments by an author or subreddit are read in chronological order.
"<code>by_author</code>" and "<code>by_subreddit</code>" refer to how the data is sorted. Sorting the data makes filtering by the sorted column fast. Spark can make good use of the sorting to make joins and groupbys faster. The <code>by_author</code> datasets are sorted by <code>author</code> and then by <code>CreatedAt</code>. The <code>by_subreddit</code> datasets are sorted by <code>subreddit</code> and then by <code>author</code>. Sorting by author makes it possible to stream the dataset one user at a time to build user-level variables without resorting to Spark.


=== Reading Reddit parquet datasets ===

<syntaxhighlight lang='python'>
import pyarrow.dataset as ds

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet', partitioning='hive')


# let's get all the submissions from two subreddits:
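# for example, filter on the subreddit column (illustrative subreddit names; you can also
# pass columns=[...] to to_table() to avoid reading fields you don't need)
table = dataset.to_table(filter=ds.field('subreddit').isin(['seattle', 'portland']))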
# Since data from just these 2 subreddits fits in memory we can just turn our table into a pandas dataframe.
df = table.to_pandas()
# We should save this smaller dataset so we don't have to wait 15 min to pull from parquet next time.
df.to_csv("mydataset.csv")


</syntaxhighlight>
any([ v != 1 for k,v in count_dict.items()])
</syntaxhighlight>
=== Install Arrow for R ===
If you want to use Arrow in R and your R installation on Hyak doesn't already have it, follow these steps. On machines not running CentOS you will probably be fine just running <code>install.packages("arrow")</code>. These instructions are derived from a debugging session on the [https://issues.apache.org/jira/browse/ARROW-9303 Arrow bug tracker].
First you need to load a modern cmake and set some environment variables.
<syntaxhighlight lang='bash'>
module load cmake/3.11.2
export ARROW_WITH_LZ4=ON
export ARROW_WITH_ZSTD=ON
export ARROW_WITH_BZ2=ON
export ARROW_WITH_GZIP=ON
export ARROW_WITH_LZ4_FRAME=ON
export ARROW_WITH_SNAPPY=ON
export ARROW_WITH_LZO=ON
export ARROW_WITH_BROTLI=ON
export LIBARROW_MINIMAL=FALSE
</syntaxhighlight>
Now, start R and '''download''' (not install!) the <code>arrow</code> package.
<syntaxhighlight lang='R'>
download.packages("arrow",destdir='.')
</syntaxhighlight>
Now, you need to unpack <code>arrow_0.17.1.tar.gz</code> and edit <code>arrow/inst/build_arrow_static.sh</code>.
<syntaxhighlight lang='bash'>
tar xvzf arrow_0.17.1.tar.gz
nano arrow/inst/build_arrow_static.sh
</syntaxhighlight>
In <code>build_arrow_static.sh</code>, change the <code>-DARROW_DEPENDENCY_SOURCE</code> flag so that it is set to <code>BUNDLED</code>.
<syntaxhighlight lang='bash'>
# build_arrow_static.sh
...
-DARROW_DEPENDENCY_SOURCE=BUNDLED
...
</syntaxhighlight>
Finally, go back into R and finish installing arrow.
<syntaxhighlight lang='R'>
install.packages("arrow",repos=NULL)
</syntaxhighlight>
== Wikia / Fandom ==
[[CommunityData:Wikia_data]] contains some information about where this data comes from.
Locations:
<code>
/gscratch/comdata/output/wiki*
</code>