The reddit_cdsc [https://code.communitydata.science/cdsc_reddit.git/ git repository] contains tools for working with Reddit data. The project is designed for the Hyak supercomputing system at the University of Washington. It consists of a set of Python and Bash scripts and uses [https://spark.apache.org/docs/latest/api/python/index.html pyspark] and [https://arrow.apache.org/docs/python/ pyarrow] to process large datasets. As of March 1, 2021, the project is under active development by [https://wiki.communitydata.science/People#Nathan_TeBlunthuis_.28University_of_Washington.29 Nate TeBlunthuis] and provides scripts for:

* Pulling and updating dumps from [https://pushshift.io Pushshift] in <code>dumps/pull_pushshift_comments.sh</code> and <code>dumps/pull_pushshift_submissions.sh</code>.
* Uncompressing and parsing the dumps into [https://parquet.apache.org/ Parquet] [https://wiki.communitydata.science/CommunityData:Hyak_Datasets#Reading_Reddit_parquet_datasets datasets] using scripts in <code>datasets</code>.
* Running text analysis based on [https://en.wikipedia.org/wiki/Tf%E2%80%93idf TF-IDF], including:
** Extracting terms from Reddit comments in <code>ngrams/tf_comments.py</code>.
** Detecting common phrases based on [https://en.wikipedia.org/wiki/Pointwise_mutual_information pointwise mutual information] in <code>ngrams/top_comment_phrases.py</code>.
** Building TF-IDF vectors for each subreddit in <code>similarities/tfidf.py</code>, and also at the subreddit-week level.
** Computing cosine similarities between subreddits based on TF-IDF in <code>similarities/cosine_similarities.py</code>.
* Measuring similarity and clustering subreddits based on user overlaps, using TF-IDF (and also plain frequency) cosine similarities of commenters.
** Clustering subreddits based on user and term similarities in <code>clustering/clustering.py</code>.
* [https://github.com/google/python-fire Fire-based] command line interfaces to make it easier for others to extend and reuse this work in their own projects!

[[File:Reddit Dataflow.jpg|left|thumb|Dataflow diagram illustrating which pieces of code and data go into producing subreddit similarity measures and clusters [https://miro.com/app/board/o9J_lSiN4TM=/ (link to Miro board)]]]

The TF-IDF pipeline for comments still has some kinks to iron out, such as removing hyperlinks and bot comments.

== Pulling data from [https://pushshift.io Pushshift] ==

* <code>pull_pushshift_comments.sh</code> uses wget to download comment dumps to <code>/gscratch/comdata/raw_data/reddit_dumps/comments</code>. It doesn't download files that already exist, and it runs <code>check_comments_shas.sh</code> to verify that the files downloaded correctly.
* <code>pull_pushshift_submissions.sh</code> does the same for submissions and puts them in <code>/gscratch/comdata/raw_data/reddit_dumps/comments</code>.

== Building Parquet Datasets ==

Pushshift dumps are huge compressed JSON files with a lot of metadata that we may not need. They aren't indexed, so it is expensive to pull data for just a handful of subreddits, and it also turns out to be a pain to read these compressed files straight into Spark. Extracting useful variables from the dumps and building Parquet datasets makes them much easier to work with. This happens in two steps (a sketch of the second step follows the list):

# Extracting JSON into (temporary, unpartitioned) Parquet files using pyarrow.
# Repartitioning and sorting the data using pyspark.
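A minimal pyspark sketch of the second step is below. The input path, output path, and column names (<code>subreddit</code>, <code>CreatedAt</code>) are illustrative assumptions, not necessarily the exact names used in the repository's scripts.

<syntaxhighlight lang="python">
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the temporary, unpartitioned parquet produced by the pyarrow pass
# (hypothetical path).
comments = spark.read.parquet("/gscratch/comdata/output/temp/reddit_comments.parquet")

# Repartition by subreddit and sort within each partition so that reads and
# aggregations for a single subreddit only touch a small part of the dataset.
(comments
    .repartition("subreddit")
    .sortWithinPartitions("subreddit", "CreatedAt")
    .write
    .mode("overwrite")
    .parquet("/gscratch/comdata/output/reddit_comments_by_subreddit.parquet"))
</syntaxhighlight>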
The final datasets are in <code>/gscratch/comdata/output</code>:

* <code>reddit_comments_by_author.parquet</code> has comments partitioned and sorted by username (lowercase).
* <code>reddit_comments_by_subreddit.parquet</code> has comments partitioned and sorted by subreddit name (lowercase).
* <code>reddit_submissions_by_author.parquet</code> has submissions partitioned and sorted by username (lowercase).
* <code>reddit_submissions_by_subreddit.parquet</code> has submissions partitioned and sorted by subreddit name (lowercase).

Breaking this down into two steps is useful because it allows us to decompress and parse the dumps in the backfill queue and then sort them in Spark. Partitioning the data makes it possible to efficiently read data for specific subreddits or authors, and sorting it means that you can efficiently compute aggregations at the subreddit or user level. More documentation on using these files is available [https://wiki.communitydata.science/CommunityData:Hyak_Datasets#Reading_Reddit_parquet_datasets here].

== Subreddit Similarity ==

By default, the scripts in <code>similarities</code> take a <code>TopN</code> parameter, which selects the subreddits to include in the similarity dataset according to how many total comments they have. Alternatively, you can pass the <code>included_subreddits</code> parameter the path to a file listing the names of the subreddits you would like to include, one per line.

=== Datasets ===

Subreddit similarity datasets based on comment terms and comment authors are available on Hyak in <code>/gscratch/comdata/output/reddit_similarity</code>. The overall approach to subreddit similarity seems to work reasonably well and the code is stabilizing. If you want help using these similarities in a project, just reach out to [[User:groceryheist | Nate]]. A small example of loading one of these datasets appears below.
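As a rough illustration of working with the similarity output, you might load one of the parquet files with pandas and look up the subreddits most similar to a given community. The file name and the wide column layout here are assumptions; list the directory on Hyak to see the actual files and check their schema.

<syntaxhighlight lang="python">
import pandas as pd

# Hypothetical file name; inspect /gscratch/comdata/output/reddit_similarity
# for the actual term- and author-based similarity files.
sims = pd.read_parquet(
    "/gscratch/comdata/output/reddit_similarity/comment_terms_similarity.parquet"
)

# Assuming a wide layout with a "subreddit" column plus one column of
# similarity scores per subreddit.
most_similar = (
    sims.set_index("subreddit")["christianity"]
        .sort_values(ascending=False)
        .head(10)
)
print(most_similar)
</syntaxhighlight>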
=== Methods ===

[https://en.wikipedia.org/wiki/Tf%E2%80%93idf TF-IDF] is a common and simple information retrieval technique that we can use to quantify the topic of a subreddit. The goal of TF-IDF is to build a vector for each subreddit that scores every term (or phrase) according to how characteristic it is of the lexicon used in that subreddit. For example, the most characteristic terms in the subreddit /r/christianity in the current version of the TF-IDF model are:

{|
!align="center"| Term
!align="center"| tf_idf
|-
|align="center"| christians
|align="center"| 0.581
|-
|align="center"| christianity
|align="center"| 0.569
|-
|align="center"| kjv
|align="center"| 0.568
|-
|align="center"| bible
|align="center"| 0.557
|-
|align="center"| scripture
|align="center"| 0.55
|}

TF-IDF stands for "term frequency–inverse document frequency" because it is the product of two terms: "term frequency" and "inverse document frequency." Term frequency quantifies how often a term appears in a subreddit (document). Inverse document frequency quantifies how much that term appears in other subreddits (documents). As you can see on the Wikipedia page, there are many possible ways of constructing and combining these terms.

I chose to normalize term frequency by the maximum (raw) term frequency for each subreddit:

<math display="inline">\mathrm{tf}_{t,d} = \frac{f_{t,d}}{\max_{t^{'} \in d}{f_{t^{'},d}}}</math>

I use the log inverse document frequency:

<math display="inline">\mathrm{idf}_{t} = \log\frac{N}{|\{d \in D : t \in d\}|}</math>

I then combine them, with some smoothing, to get:

<math display="inline">\mathrm{tfidf}_{t,d} = (0.5 + 0.5 \cdot \mathrm{tf}_{t,d}) \cdot \mathrm{idf}_{t}</math>

=== Building TF-IDF vectors ===

The process for building TF-IDF vectors has four steps:

# Extracting terms using <code>tf_comments.py</code>.
# Detecting common phrases using <code>top_comment_phrases.py</code>.
# Extracting terms and common phrases using <code>tf_comments.py --mwe-pass='second'</code>.
# Building idf and tf-idf scores in <code>idf_comments.py</code>.

==== Running <code>tf_comments.py</code> on the backfill queue ====

The main reason for doing this in four steps instead of one is to take advantage of the backfill queue for running <code>tf_comments.py</code>. This step requires reading all of the text in every comment and converting it to a bag of words at the subreddit level. This is a lot of computation that is easily parallelizable. The script <code>run_tf_jobs.sh</code> partially automates running step 1 (or 3) on the backfill queue.

==== Phrase detection using Pointwise Mutual Information ====

TF-IDF is simple, but it only uses single words (unigrams). Sequences of multiple words can be important for capturing how words have different meanings in different contexts, or how sequences of words refer to distinct things like names. Dealing with context or longer sequences of words is a common challenge in natural language processing, since the number of possible n-grams grows rapidly as n gets bigger. Phrase detection helps with this problem by limiting the set of n-grams to the most informative ones. But how do we detect phrases? I implemented [https://en.wikipedia.org/wiki/Pointwise_mutual_information pointwise mutual information] (PMI), which is a pretty simple approach, but it seems to work pretty well. PMI is a quantity derived from information theory. The intuition is that if two words occur together quite frequently compared to how often they appear separately, then the co-occurrence is likely to be informative.

<math display="inline">\operatorname{pmi}(x;y) \equiv \log\frac{p(x,y)}{p(x)p(y)} = \log\frac{p(x|y)}{p(x)} = \log\frac{p(y|x)}{p(y)}.</math>

In <code>tf_comments.py</code>, if <code>--mwe-pass=first</code> then a 10% sample of 1–4-grams (sequences of terms up to length 4) will be written to a file to be consumed by <code>top_comment_phrases.py</code>. <code>top_comment_phrases.py</code> computes the PMI for these candidate phrases and writes out those that occur at least 3500 times in the sample of n-grams and have a PMI of at least 3 (about 65,000 expressions). <code>tf_comments.py --mwe-pass=second</code> then uses the detected phrases and adds them to the term frequency data. A toy illustration of the PMI calculation appears below.
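The following toy sketch, with made-up comments and an arbitrary cutoff, shows the basic PMI calculation for bigrams; the real pipeline works on 1–4-grams over a 10% sample and applies the count and PMI thresholds described above.

<syntaxhighlight lang="python">
import math
from collections import Counter

# Toy, made-up tokenized comments; the real pipeline samples n-grams from
# actual Reddit comments.
comments = [
    ["i", "love", "new", "york"],
    ["new", "york", "pizza", "is", "great"],
    ["i", "love", "pizza"],
    ["pizza", "in", "new", "york"],
]

unigrams = Counter(tok for c in comments for tok in c)
bigrams = Counter((a, b) for c in comments for a, b in zip(c, c[1:]))
n_uni = sum(unigrams.values())
n_bi = sum(bigrams.values())

def pmi(a, b):
    """log p(a, b) / (p(a) p(b)), estimated from the toy counts."""
    p_ab = bigrams[(a, b)] / n_bi
    return math.log(p_ab / ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))

# Keep bigrams that appear often enough and whose PMI clears a cutoff;
# the project uses much larger thresholds over a much larger sample.
phrases = [bg for bg, n in bigrams.items() if n >= 2 and pmi(*bg) > 1.5]
print(phrases)  # keeps ('i', 'love') and ('new', 'york') with these toy counts
</syntaxhighlight>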
=== Cosine Similarity ===

Once the tf-idf vectors are built, making a similarity score between two subreddits is straightforward using cosine similarity.

<math display="inline">\text{similarity} = \cos(\theta) = {\mathbf{A} \cdot \mathbf{B} \over \|\mathbf{A}\| \|\mathbf{B}\|} = \frac{ \sum\limits_{i=1}^{n}{A_i B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{A_i^2}} \sqrt{\sum\limits_{i=1}^{n}{B_i^2}} }</math>

Intuitively, we represent two subreddits as lines in a high-dimensional space (their tf-idf vectors). In linear algebra, the dot product (<math display="inline">\cdot</math>) between two vectors takes their weighted sum (e.g., linear regression is a dot product of a vector of covariates and a vector of weights).<br />
The vectors might have different lengths, for example if one subreddit has more words in its comments than the other, so in cosine similarity the dot product is normalized by the magnitudes (lengths) of the vectors. It turns out that this is equivalent to taking the cosine of the angle between the two vectors. So cosine similarity, in essence, quantifies the angle between the two lines in high-dimensional space. The greater the cosine similarity between two subreddits, the more correlated their tf-idf vectors are.

Cosine similarity with tf-idf is popular (indeed, it has been applied to Reddit in research several times before) because it quantifies the correlation between the most characteristic terms of two communities. Compared to other approaches to similarity, like those using word embeddings or topic models, it may struggle to handle polysemy, synonymy, or correlations between different terms. Using phrase detection helps with this a little bit. The advantages of this approach are simplicity and scalability. I'm thinking about using [https://en.wikipedia.org/wiki/Latent_semantic_analysis Latent Semantic Analysis] as an intermediate step to improve upon similarities based on raw tf-idf vectors.

Even so, computing similarities between a large number of subreddits is computationally expensive and requires <math display="inline">n(n-1)/2</math> dot-product evaluations. This can be sped up by passing <code>similarity-threshold=X</code>, where <math display="inline">X>0</math>, to <code>term_comment_similarity.py</code>. I used a cosine similarity function that's built into the Spark matrix library, which supports the <code>DIMSUM</code> algorithm for approximating matrix-matrix products. This algorithm is commonly used in industry (e.g., at Twitter and Google) for large-scale similarity scoring.
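For reference, a minimal pyspark sketch of this kind of all-pairs similarity computation is below. It uses <code>RowMatrix.columnSimilarities</code> from Spark MLlib, which switches to the sampling-based DIMSUM approximation when a positive threshold is passed. The tiny dense matrix is made-up data; the real pipeline builds its matrices from the tf-idf output.

<syntaxhighlight lang="python">
from pyspark.sql import SparkSession
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

spark = SparkSession.builder.getOrCreate()

# Toy matrix: one row per term, one column per subreddit, entries are
# (made-up) tf-idf weights. columnSimilarities compares columns, so the
# subreddits must be the columns here.
rows = spark.sparkContext.parallelize([
    Vectors.dense([0.58, 0.00, 0.12]),
    Vectors.dense([0.00, 0.44, 0.31]),
    Vectors.dense([0.21, 0.09, 0.55]),
])
mat = RowMatrix(rows)

# threshold=0.0 gives exact cosine similarities; a positive threshold turns
# on DIMSUM sampling, which is what makes very large problems tractable.
sims = mat.columnSimilarities(threshold=0.1)

for entry in sims.entries.collect():
    print(f"subreddit {entry.i} ~ subreddit {entry.j}: {entry.value:.3f}")
</syntaxhighlight>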