Community Data Science Workshops (Spring 2016)/Day 3 Projects/Twitter

From CommunityData

Getting Started[edit]

Download the zipped file with a 10% sample of tweets from here.

Extract the zipped file to your desktop and cd into it.

Analyzing Tweet Data with Python[edit]

Note: you can attend this session even if you didn't do Twitter last week.

Last week, the Twitter API Session covered accessing data from the Twitter API, including accessing streaming data.

Last fall, we set up a streaming collection from the code you modified in the workshop to track all earthquake-related tweets. That stream captured 3.5GB of tweets (!). We've given you a sample of those tweets in the zipped file above.

Our goal in this workshop is to explore the data we collected. As always, we've suggested questions below, but the real goal is for you to find something that you find interesting from the data.

Your data consists of one tweet per line, encoded in JSON exactly as Twitter returns it.

Looking at the Twitter API documentation will help you make sense of that funny-looking JSON data: [1]

Goals[edit]

Starting with the sample program, try to answer the following questions. The first one is done for you!

  • Build a histogram of tweets per day. Use it to identify potential events. We'll use python to make a CSV file that we can import to a spreadsheet.
  • Change your program to plot the number of users that are tweeting.
  • Pick an interesting time period from the data. Can you figure out where an event took place based on the locale of the user? What about geographic information?
  • Who gets retweeted the most in the data set?
  • Modify your histogram to look at tweets per hour rather than per day.
  • Use your imagination! What else can you find?
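As a sketch of the retweet question above: retweets carry the original tweet under the retweeted_status field, so counting the original authors with a Counter answers "who gets retweeted the most?". The in-line sample lines below are stand-ins for lines read from the data file.

```python
import json
from collections import Counter

# Tiny in-line stand-in for the sample file: one JSON tweet per line.
sample_lines = [
    json.dumps({'text': 'RT earthquake',
                'retweeted_status': {'user': {'screen_name': 'quakebot'}}}),
    json.dumps({'text': 'RT again',
                'retweeted_status': {'user': {'screen_name': 'quakebot'}}}),
    json.dumps({'text': 'felt it myself'}),  # not a retweet
]

retweet_counts = Counter()
for line in sample_lines:
    tweet = json.loads(line)
    # Retweets carry the original tweet under 'retweeted_status'.
    if 'retweeted_status' in tweet:
        original_author = tweet['retweeted_status']['user']['screen_name']
        retweet_counts[original_author] += 1

print(retweet_counts.most_common(3))  # [('quakebot', 2)]
```

Counter.most_common() sorts the authors by count, so the most-retweeted accounts come first.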

Hints and New Material[edit]

This section lists a few new topics.

CSV Files

Comma-Separated Values (CSV) is a common way to store structured data in a text file. An example CSV file would be:

Day,NumUsers
2015-11-03,345
2015-11-04,451
...

The nice thing about CSV files is that spreadsheet programs like Microsoft Excel can import them, so it's easy to plot columns.
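Python's built-in csv module can produce files like the one above; a minimal sketch, using made-up per-day counts matching the example:

```python
import csv

# Hypothetical per-day counts, matching the example file above.
rows = [('2015-11-03', 345), ('2015-11-04', 451)]

with open('users_per_day.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Day', 'NumUsers'])   # header row
    writer.writerows(rows)                 # one data row per day
```

The resulting users_per_day.csv can be opened directly in Excel or another spreadsheet program for plotting.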

Dates and times

Python provides rich date and time functionality through the datetime module, available via import datetime. We encourage you to look at the documentation at [2]. In our examples, we only need to create datetimes from timestamps and format them as strings. Everything we need is in this example.


import datetime
import json

tweets = open('tweets.json')  # the extracted sample file; adjust the name to match yours
for line in tweets:
    tweet_as_dictionary = json.loads(line)
    tweet_daytime = datetime.datetime.fromtimestamp(int(tweet_as_dictionary['timestamp_ms']) / 1000)
    tweet_day = tweet_daytime.strftime('%Y-%m-%d')

The line tweet_daytime = datetime.datetime.fromtimestamp(int(tweet_as_dictionary['timestamp_ms']) / 1000) does a few things:

  • extracts the timestamp_ms field from the tweet. This is the tweet time measured in milliseconds since Jan 1, 1970. See [3] for more details.
  • divides the timestamp in milliseconds by 1000 to get the number of seconds since Jan 1, 1970.
  • passes the result to datetime.datetime.fromtimestamp(), which converts it to a Python datetime.

The other line that matters is tweet_day = tweet_daytime.strftime('%Y-%m-%d'), which takes the datetime and converts it to a string. The characters %Y-%m-%d describe the string format. The codes can be found in the documentation [4]. For our purposes, here are the codes and the portion of the date they represent for the date "2015-11-06 14:52:12":

  • %Y - 2015
  • %m - 11
  • %d - 06
  • %H - 14 (note 24 hour time)
  • %M - 52
  • %S - 12
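The codes above can be checked by formatting a fixed datetime, here the "2015-11-06 14:52:12" example from the table:

```python
import datetime

dt = datetime.datetime(2015, 11, 6, 14, 52, 12)

print(dt.strftime('%Y-%m-%d'))      # 2015-11-06
print(dt.strftime('%H:%M:%S'))      # 14:52:12
print(dt.strftime('%Y-%m-%d %H'))   # 2015-11-06 14 -- handy for a per-hour histogram
```

Grouping tweets by the '%Y-%m-%d %H' string is one way to switch your histogram from days to hours.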

Details[edit]

Sampling

Since the raw data is too large for everyone to download locally, we had to downsample to a reasonable subset. Downsampling usually involves choosing a random subset of the raw data, but it's important to think about how that sample is constructed. For instance, consider two sampling criteria for a 1% sample: by tweet and by user. Sampling by tweet means that every single tweet has a 1 in 100 chance of being in the sample. However, what if I want to count the number of tweets per user? Since I don't have all the tweets for any particular user, my estimate is going to be wrong. Sampling by user captures either all of the tweets by an account or none of them. This way I can still estimate the number of tweets in a time period, and I can also make valid measurements of per-user metrics.
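One way to sample by user is to hash the user's id and keep a fixed slice of the hash space, so the keep-or-drop decision depends only on the user, never on the individual tweet. The helper below (in_user_sample is our own hypothetical name, not part of any library) is a sketch of that idea:

```python
import zlib

def in_user_sample(tweet, percent=1):
    """Deterministically keep roughly percent% of users, with all of their tweets."""
    user_id = str(tweet['user']['id'])
    # zlib.crc32 is stable across runs (unlike Python's built-in hash()),
    # so a given user is always in the sample or always out of it.
    return zlib.crc32(user_id.encode()) % 100 < percent

tweet = {'user': {'id': 12345}, 'text': 'earthquake!'}
same_user_again = {'user': {'id': 12345}, 'text': 'aftershock'}

# The decision depends only on the user, so both tweets go the same way.
print(in_user_sample(tweet) == in_user_sample(same_user_again))  # True
```

Because the hash is deterministic, the same 1% of users is selected every time the script runs, which is what makes per-user metrics valid on the sample.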

English

By now you've seen that most earthquakes appear to have occurred in places that don't use English. Our tracker uses only English words, which means that we only see earthquakes when screennames, retweets, or the odd person tweeting in English mention them. This has serious implications for how many tweets appear in our sample for each earthquake.

Full source code

The full source code of this exercise is available on github at [5]. The examples are in the students section.