CommunityData:CDSC Reddit

The reddit_cdsc project contains tools for working with Reddit data. The project is designed for the Hyak supercomputing system at the University of Washington. It consists of a set of Python and bash scripts and uses pyspark and pyarrow to process large datasets. As of November 1st, 2020, the project is under active development by Nate TeBlunthuis and provides scripts for:


 * Pulling and updating dumps from Pushshift in  and.
 * Uncompressing and parsing the dumps into Parquet datasets.
 * Running text analysis based on TF-IDF, including:
   * Extracting terms from Reddit comments in
   * Detecting common phrases based on pointwise mutual information (see the Wikipedia article on pointwise mutual information)
   * Building TF-IDF vectors for each subreddit and (more experimentally) at the subreddit-week level
   * Computing cosine similarities between subreddits based on TF-IDF.

Right now, two steps are still at an earlier stage of development:


 * TF-IDF based on comment authors, to measure similarity between subreddits in terms of user overlap.
 * Clustering subreddits based on cosine similarities using power iteration clustering (PIC).

The TF-IDF for comments still has some kinks to iron out: hyperlinks and bot comments need to be removed. Right now, subreddits that have similar automoderation messages appear very similar.

The user interfaces for most of the scripts are pretty crappy and need to be refined for re-use by others.

Pulling data from Pushshift

 * uses wget to download comment dumps to . It doesn’t download files that already exist and runs  to verify that the files downloaded correctly (sketched below).
 * does the same for submissions and puts them in.
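
For illustration, the download-and-verify pattern might look roughly like the Python sketch below; the URL layout and file name are assumptions, and the actual scripts use wget rather than Python.

```python
# Hypothetical sketch of the download-and-verify pattern; the URL layout and file name
# are assumptions, not the project's actual script (which uses wget).
import hashlib
import os
import urllib.request

BASE_URL = "https://files.pushshift.io/reddit/comments/"   # assumed location of comment dumps
fname = "RC_2020-06.zst"                                    # example dump file

# Skip files that already exist, like the real script does.
if not os.path.exists(fname):
    urllib.request.urlretrieve(BASE_URL + fname, fname)

# Compute a sha256 digest to compare against Pushshift's published checksum file.
digest = hashlib.sha256()
with open(fname, "rb") as f:
    for chunk in iter(lambda: f.read(2 ** 20), b""):
        digest.update(chunk)
print(fname, digest.hexdigest())
```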

Building Parquet Datasets
Pushshift dumps are huge compressed json files with a lot of metadata that we may not need. They aren’t indexed, so it’s expensive to pull data for just a handful of subreddits. It also turns out that it’s a pain to read these compressed files straight into spark. Extracting useful variables from the dumps and building parquet datasets will make them easier to work with. This happens in two steps (sketched below):


 * 1) Extracting json into (temporary, unpartitioned) parquet files using pyarrow.
 * 2) Repartitioning and sorting the data using pyspark.
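
A rough sketch of the two steps, with made-up paths, a simplified schema, and only a few of the fields the real scripts extract:

```python
# Step 1 (sketch): parse a decompressed dump and write an unpartitioned Parquet file with pyarrow.
# Paths, field names, and schema here are illustrative assumptions, not the project's actual code.
import json
import pyarrow as pa
import pyarrow.parquet as pq

schema = pa.schema([("subreddit", pa.string()),
                    ("author", pa.string()),
                    ("CreatedAt", pa.timestamp("ms")),
                    ("body", pa.string())])

def rows(path):
    with open(path) as f:                      # assumes the dump is already decompressed
        for line in f:
            c = json.loads(line)
            yield {"subreddit": c["subreddit"], "author": c["author"],
                   "CreatedAt": int(c["created_utc"]) * 1000, "body": c["body"]}

table = pa.Table.from_pylist(list(rows("RC_2020-06.json")), schema=schema)
pq.write_table(table, "temp_parquet/RC_2020-06.parquet")

# Step 2 (sketch): repartition and sort the temporary files with pyspark.
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("temp_parquet/")
(df.repartition("subreddit")
   .sortWithinPartitions("subreddit", "CreatedAt")
   .write.mode("overwrite")
   .partitionBy("subreddit")                   # hypothetical layout; the real datasets also lowercase names
   .parquet("reddit_comments_by_subreddit.parquet"))
```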

The final datasets are in


 * has comments partitioned and sorted by username (lowercase).
 * has comments partitioned and sorted by subreddit name (lowercase).
 * has submissions partitioned and sorted by username (lowercase).
 * has submissions partitioned and sorted by subreddit name (lowercase).

Breaking this down into two steps is useful because it allows us to decompress and parse the dumps in the backfill queue and then sort them in spark. Partitioning the data makes it possible to efficiently read data for specific subreddits or authors. Sorting it means that you can efficiently compute aggregations at the subreddit or user level. More documentation on using these files is available here.
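
For example, reading one subreddit's partition and aggregating at the user level might look like the following sketch; the dataset path and column names are assumptions based on the description above.

```python
# Sketch: read one subreddit's partition and aggregate at the user level with pyspark.
# The dataset path and column names are assumed for illustration.
from pyspark.sql import SparkSession, functions as f

spark = SparkSession.builder.getOrCreate()
comments = spark.read.parquet("reddit_comments_by_subreddit.parquet")

# Partition pruning makes reading a single subreddit cheap.
askreddit = comments.filter(f.col("subreddit") == "askreddit")

# Count comments per author, most active first.
counts = askreddit.groupBy("author").count().orderBy(f.desc("count"))
counts.show(10)
```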

TF-IDF Subreddit Similarity
TF-IDF is a common and simple information retrieval technique that we can use to quantify the topic of a subreddit. The goal of TF-IDF is to build a vector for each subreddit that scores every term (or phrase) according to how characteristic it is of the overall lexicon used in that subreddit. For example, the most characteristic terms in the subreddit /r/christianity in the current version of the TF-IDF model are:

TF-IDF stands for “term frequency - inverse document frequency” because it is the product of two terms: “term frequency” and “inverse document frequency.” Term frequency quantifies how often a term appears in a subreddit (document). Inverse document frequency quantifies how rarely that term appears in other subreddits (documents). As you can see on the Wikipedia page, there are many possible ways of constructing and combining these terms.


I chose to normalize term frequency by the maximum (raw) term frequency for each subreddit: $\mathrm{tf}_{t,d} = \frac{f_{t,d}}{\max_{t^{'} \in d}{f_{t^{'},d}}}$

I use the log inverse document frequency: $\mathrm{idf}_{t} = \log\frac{N}{|\{ d \in D : t \in d \}|}$

I then combine them using some smoothing to get:

$\mathrm{tfidf}_{t,d} = (0.5 + 0.5 \cdot \mathrm{tf}_{t,d}) \cdot \mathrm{idf}_{t}$
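
Put together, a toy computation of these scores (with invented counts, not project data) looks like:

```python
# Toy sketch of the tf-idf scheme described above; the counts are invented for illustration.
import math

# raw term counts per subreddit ("document")
counts = {
    "christianity": {"church": 50, "jesus": 40, "game": 2},
    "gaming":       {"game": 80, "play": 60, "church": 1},
}
N = len(counts)  # number of subreddits

def idf(term):
    df = sum(1 for doc in counts.values() if term in doc)
    return math.log(N / df)

def tfidf(term, doc):
    max_f = max(counts[doc].values())            # maximum raw term frequency in this subreddit
    tf = counts[doc].get(term, 0) / max_f        # max-normalized term frequency
    return (0.5 + 0.5 * tf) * idf(term)          # smoothed combination

print(round(tfidf("jesus", "christianity"), 3))  # relatively high: frequent here, absent elsewhere
print(round(tfidf("game", "christianity"), 3))   # zero: the term appears in every subreddit
```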

Building TF-IDF vectors
The process for building TF-IDF vectors has four steps:


 * 1) Extracting terms using
 * 2) Detecting common phrases using
 * 3) Extracting terms and common phrases using
 * 4) Building idf and tf-idf scores in

Running on the backfill queue
The main reason that I did it in four steps instead of one is to take advantage of the backfill queue for running. This step requires reading all of the text in every comment and converting it to a bag of words at the subreddit level. This is a lot of computation that is easily parallelizable. The script  partially automates running steps 1 (or 3) on the backfill queue.

Phrase detection using Pointwise Mutual Information
TF-IDF is simple, but it only uses single words (unigrams). Sequences of multiple words can be important for capturing how words have different meanings in different contexts or how sequences of words refer to distinct things like names. Dealing with context or longer sequences of words is a common challenge in natural language processing, since the number of possible n-grams grows rapidly as n gets bigger. Phrase detection helps with this problem by limiting the set of n-grams to the most informative ones.

But how do we detect phrases? I implemented pointwise mutual information (PMI), which is a pretty simple approach, but it seems to work well.

PMI is a quantity derived from information theory. The intuition is that if two words occur together quite frequently compared to how often they appear separately, then the co-occurrence is likely to be informative.

$\operatorname{pmi}(x;y) \equiv \log\frac{p(x,y)}{p(x)p(y)} = \log\frac{p(x|y)}{p(x)} = \log\frac{p(y|x)}{p(y)}.$

In , if , then a 10% sample of 1-4-grams (sequences of terms up to length 4) will be written to a file to be consumed by .  computes the PMI for these possible phrases and writes those that occur at least 3500 times in the sample of n-grams and have a PMI of at least 3 (about 65,000 expressions).

then uses the detected phrases and adds them to the term frequency data.
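
A small sketch of the PMI scoring step for two-word phrases, assuming unigram and bigram counts have already been tallied from the sampled n-grams; the function and variable names are illustrative, not the project's actual code.

```python
# Sketch: score candidate two-word phrases with PMI and keep those above the thresholds
# described above (>= 3500 occurrences, PMI >= 3). Counts are assumed to come from the
# sampled n-gram file; this is not the project's actual script.
import math
from collections import Counter

def detect_phrases(unigram_counts: Counter, bigram_counts: Counter,
                   min_count=3500, min_pmi=3.0):
    total_unigrams = sum(unigram_counts.values())
    total_bigrams = sum(bigram_counts.values())
    phrases = {}
    for (w1, w2), n in bigram_counts.items():
        if n < min_count:
            continue                              # too rare to be a reliable phrase
        p_xy = n / total_bigrams                  # joint probability of the pair
        p_x = unigram_counts[w1] / total_unigrams
        p_y = unigram_counts[w2] / total_unigrams
        pmi = math.log(p_xy / (p_x * p_y))        # pmi(x;y) = log p(x,y) / (p(x)p(y))
        if pmi >= min_pmi:
            phrases[(w1, w2)] = pmi
    return phrases
```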

Cosine Similarity
Once the tf-idf vectors are built, making a similarity score between two subreddits is straightforward using cosine similarity.

$\text{similarity} = \cos(\theta) = {\mathbf{A} \cdot \mathbf{B} \over \|\mathbf{A}\| \|\mathbf{B}\|} = \frac{ \sum\limits_{i=1}^{n}{A_i B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{A_i^2}} \sqrt{\sum\limits_{i=1}^{n}{B_i^2}} }$

Intuitively, we represent two subreddits as lines in a high-dimensional space (tf-idf vectors). In linear algebra, the dot product ($\cdot$ ) between two vectors takes their weighted sum (e.g. linear regression is a dot product of a vector of covariates and a vector of weights).

The vectors might have different lengths, for example if one subreddit has more words in its comments than the other, so in cosine similarity the dot product is normalized by the magnitudes (lengths) of the vectors. It turns out that this is equivalent to taking the cosine of the angle between the two vectors. So cosine similarity in essence quantifies the angle between the two lines in high-dimensional space. If the cosine similarity between two subreddits is greater, then their tf-idf vectors are more correlated.
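
In code, this is just a normalized dot product; a minimal numpy illustration with made-up tf-idf vectors:

```python
# Minimal illustration of cosine similarity between two tf-idf vectors (made-up numbers).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

subreddit_a = np.array([0.9, 0.1, 0.0, 0.4])  # tf-idf scores over a shared vocabulary
subreddit_b = np.array([0.8, 0.0, 0.2, 0.5])
print(cosine_similarity(subreddit_a, subreddit_b))  # closer to 1 means more similar lexicons
```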

Cosine similarity with tf-idf is popular (indeed it has been applied to Reddit in research several times before) because it quantifies the correlation between the most characteristic terms for two communities.

Compared to other approaches to similarity, like those using word embeddings or topic models, it may struggle to handle polysemy, synonymy, or correlations between different terms. Using phrase detection helps with this a little bit. The advantages of this approach are simplicity and scalability. I’m thinking about using Latent Semantic Analysis as an intermediate step to improve upon similarities based on raw tf-idfs.

Even still, computing similarities between a large number of subreddits is computationally expensive and requires $n(n-1)/2$ dot-product evaluations. This can be sped up by passing  where $X>0$ into . I used a cosine similarity function that’s built into the spark matrix library which supports the  algorithm for approximating matrix-matrix products. This algorithm is commonly used in industry (e.g. at Twitter and Google) for large-scale similarity scoring.
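
For reference, a hedged sketch of calling Spark's built-in column similarity routine on a toy matrix; the data layout (terms as rows, subreddits as columns) and threshold value are illustrative assumptions.

```python
# Sketch: approximate all-pairs cosine similarity with Spark's RowMatrix.columnSimilarities.
# Each *column* of the matrix is one subreddit's tf-idf vector; a threshold > 0 enables the
# approximate, sampling-based algorithm. The toy data here is invented for illustration.
from pyspark.sql import SparkSession
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

spark = SparkSession.builder.getOrCreate()

# Toy matrix: rows are terms, columns are subreddits (3 subreddits here).
rows = spark.sparkContext.parallelize([
    Vectors.dense([0.9, 0.8, 0.0]),
    Vectors.dense([0.1, 0.0, 0.7]),
    Vectors.dense([0.4, 0.5, 0.1]),
])
mat = RowMatrix(rows)

# threshold=0.0 gives exact similarities; larger values approximate and skip dissimilar pairs.
sims = mat.columnSimilarities(threshold=0.1)
for entry in sims.entries.collect():
    print(entry.i, entry.j, entry.value)   # pairs of subreddit indices and their similarity
```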