Editing CommunityData:Hyak Datasets
From CommunityData
# A pyarrow dataset abstracts reading, writing, or filtering a parquet file. It does not read data into memory.
#dataset = ds.dataset(pathlib.Path('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/'), format='parquet', partitioning='hive')
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet', partitioning='hive')
# let's get all the comments to two subreddits:
# Since data from just these 2 subreddits fits in memory we can just turn our table into a pandas dataframe.
df = table.to_pandas()
# We should save this smaller dataset so we don't have to wait 15 min to pull from parquet next time.
df.to_csv("mydataset.csv")
</syntaxhighlight>