CommunityData:Hyak Datasets
We maintain an archive of Reddit submissions and comments going back to Reddit's early history that is up to date through January 2019 (for comments) and August 2019 (for submissions). We have copies of dumps collected and published by [https://Pushshift.io pushshift] and tabular datasets derived from them. Compared to obtaining data from the Reddit (or pushshift) APIs, working with these archival datasets will be faster and less work for you. The tabular datasets in particular are quite fast thanks to the parquet file format, which makes it possible to pull subsets of the data (e.g. the complete history of a subreddit) in as little as 15 minutes. In contrast, it takes about a day to extract and parse the dumps on a mox node.
Code for this project is located in the (currently private) cdsc_reddit [[CommunityData:git|git repository]] on code.communitydata.science.
For computational efficiency it is best to parse the dumps as little as possible, so if you can work with the tabular datasets, please do so. The tabular datasets currently have the variables that most projects will want to use, but there are many other metadata variables, including ones related to moderation, media, Reddit gold, and more. If you want a variable from the pushshift JSON that isn't in the parquet tables, don't fret! It will not be too much work to add it. Reach out to [[User:Groceryheist|Nate]].
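To check whether a variable you need is already in the tabular datasets, you can print a dataset's schema before pulling any data. Below is a minimal sketch using pyarrow; it uses the submissions dataset path from the example below, so adjust it if your copy lives elsewhere.

<syntaxhighlight lang="python">
import pyarrow.dataset as ds

# Opening the dataset only reads parquet metadata, not the data itself.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet')

# Print the column names and types currently available in the tabular dataset.
print(dataset.schema)
</syntaxhighlight>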
<syntaxhighlight lang="python">
import pyarrow.dataset as ds

# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet', partitioning='hive')

# let's get all the submissions to two subreddits:
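# (Illustrative sketch filling in this step; the full example on this page may do this
#  differently, e.g. by streaming batches. The subreddit names are just examples.)
subreddits_to_pull = ['seattle', 'AskReddit']

# to_table() applies the filter while scanning the parquet files,
# so only rows from these subreddits are read into memory.
table = dataset.to_table(filter=ds.field('subreddit').isin(subreddits_to_pull))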
# Since data from just these 2 subreddits fits in memory, we can just turn our table into a pandas dataframe.
df = table.to_pandas()

# We should save this smaller dataset so we don't have to wait 15 min to pull from parquet next time.
df.to_csv("mydataset.csv")
</syntaxhighlight>
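If you would rather keep the saved subset in a columnar format, pandas can also write it back out as parquet. This is a minimal sketch; the filename is just an example, and it relies on pyarrow (already used above) as the parquet engine.

<syntaxhighlight lang="python">
# Parquet preserves column types and compresses better than CSV.
df.to_parquet("mydataset.parquet")
</syntaxhighlight>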
<syntaxhighlight lang="python">
# since it's partitioned and sorted by author, we get one group for each author
any([ v != 1 for k,v in count_dict.items()])
</syntaxhighlight>
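The snippet above is the tail of a longer example, and count_dict is built earlier in that code. For context, here is a hypothetical sketch (not the repository's code) of how a per-author group count like count_dict can be accumulated while streaming record batches; the dataset path is an assumption, and the sorted-by-author check above then amounts to verifying that every author has exactly one contiguous group.

<syntaxhighlight lang="python">
from collections import defaultdict
import pyarrow.dataset as ds

# hypothetical path: a comments dataset partitioned and sorted by author
dataset = ds.dataset('/gscratch/comdata/output/reddit_comments_by_author.parquet/', format='parquet')

count_dict = defaultdict(int)
prev_author = None

# Stream record batches instead of loading the whole dataset into memory.
for batch in dataset.to_batches(columns=['author']):
    # only the author column was requested, so it is column 0
    for author in batch.column(0).to_pylist():
        if author != prev_author:
            # a new contiguous run (group) of rows for this author begins
            count_dict[author] += 1
            prev_author = author

# If the data really is sorted by author, every value in count_dict is 1.
</syntaxhighlight>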