CommunityData:Hyak Datasets
=== Reading Reddit parquet datasets ===

The recommended way to pull data from parquet on Hyak is to use [https://arrow.apache.org/docs/python/ pyarrow], which makes it relatively easy to filter the data and load it into Pandas. The main alternative is [[CommunityData:Hyak_Spark| Spark]], which is more complex and less efficient, but can read and write parquet and is useful for working with data that is too large to fit in memory.

This example loads all submissions to two Seattle subreddits. You should try it out!

<syntaxhighlight lang='python'>
import pyarrow.dataset as ds

# A pyarrow dataset abstracts reading, writing, and filtering a parquet file. It does not read data into memory.
dataset = ds.dataset('/gscratch/comdata/output/reddit_submissions_by_subreddit.parquet/', format='parquet')

# Let's get all the submissions to two subreddits:
subreddits_to_pull = ['seattle', 'seattlewa']

# A table is a low-level structured data format. This line pulls the data into memory.
table = dataset.to_table(filter=ds.field('subreddit').isin(subreddits_to_pull),
                         columns=['id', 'subreddit', 'CreatedAt', 'author', 'ups', 'downs', 'score',
                                  'subreddit_id', 'stickied', 'title', 'url', 'is_self', 'selftext'])

# Since the data from just these 2 subreddits fits in memory, we can turn our table into a pandas dataframe.
df = table.to_pandas()
</syntaxhighlight>

Parquet is a [https://en.wikipedia.org/wiki/Column-oriented_DBMS column-oriented format], which means that each column can be read independently of the others. This confers two key advantages over unstructured formats that can make it very fast. First, the <code>filter</code> runs only on the <code>subreddit</code> column to figure out which rows need to be read for the other fields. Second, only the columns selected in <code>columns=</code> need to be read at all. This is how arrow can pull data from parquet so fast.
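If you want to experiment with the filter-and-project pattern without access to the Hyak filesystem, you can try it on a small parquet file you write yourself. This is a minimal sketch: the tiny stand-in table and its column names below are made up for illustration and are not the schema of the real Reddit dataset.

<syntaxhighlight lang='python'>
import os
import tempfile

import pyarrow as pa
import pyarrow.dataset as ds
import pyarrow.parquet as pq

# A tiny stand-in table (the real data lives under /gscratch/comdata/output/).
table = pa.table({
    'subreddit': ['seattle', 'askreddit', 'seattlewa', 'seattle'],
    'author':    ['alice', 'bob', 'carol', 'dan'],
    'score':     [10, 5, 3, 7],
})

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'example.parquet')
pq.write_table(table, path)

dataset = ds.dataset(path, format='parquet')

# Only the 'subreddit' column is scanned to evaluate the filter,
# and only the projected columns are read into memory.
result = dataset.to_table(
    filter=ds.field('subreddit').isin(['seattle', 'seattlewa']),
    columns=['subreddit', 'score'],
)
df = result.to_pandas()
print(len(df))                        # 3 matching rows
print(sorted(df['score'].tolist()))   # [3, 7, 10]
</syntaxhighlight>

The same <code>to_table</code> call works unchanged on the full dataset path above; only the amount of data scanned differs.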