Twitter analysis (CDSW)

Getting Started

Download the zipped file with a 1% sample of tweets from TODO (backup: OneDrive).

Extract the zipped file to your desktop and cd into the extracted directory.

Analyzing Tweet Data with Python

Last week, the Twitter API Session covered accessing data from the Twitter API, including accessing streaming data. After the session, we set up a streaming sample from the code you modified in the workshop to track all earthquake-related tweets. That stream captured 3.5GB of tweets (!). We've given you a sample of those tweets in the zipped file above.

Our goal in this workshop is to use those tweets to answer questions. As always, we've suggested questions below, but the real goal is for you to find something that you find interesting from the data.

Your data consists of one tweet per line, encoded in JSON exactly as Twitter returns it.
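
Because each line is a complete JSON object, you can read the file line by line and parse the tweets one at a time. Here is a minimal sketch, assuming the extracted file is named earthquake_tweets.json (your actual filename may differ):

import json

tweets = []
with open("earthquake_tweets.json") as f:    # hypothetical filename; use the file you extracted
    for line in f:
        line = line.strip()
        if line:                             # skip any blank lines
            tweets.append(json.loads(line))  # each line is one tweet, as a Python dictionary

print(len(tweets))                           # how many tweets are in the sample?
print(tweets[0]["user"]["screen_name"])      # fields look just like the API results from last week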

Goals

Starting with the sample program, try to answer the following questions. The first one is done for you!

  • Build a histogram of tweets per day. Use it to identify potential events. We'll use Python to make a CSV file that we can import into a spreadsheet; see the sketch after this list.
  • Change your program to plot the number of users that are tweeting.
  • Pick an interesting time period from the data. Can you figure out where the event took place based on the locale of the user? What about geographic information?
  • Who gets retweeted the most in the data set?
  • Modify your histogram to look at tweets per hour rather than per day.
  • Use your imagination! What else can you find?
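
Here is one possible sketch of the per-day count from the first goal. It assumes the tweets file is named earthquake_tweets.json (a hypothetical name) and that each tweet's created_at field uses Twitter's usual timestamp format; the provided sample program may differ in its details.

import json
import datetime
from collections import Counter

tweets_per_day = Counter()
with open("earthquake_tweets.json") as f:              # hypothetical filename; use your extracted file
    for line in f:
        tweet = json.loads(line)
        # created_at looks like "Wed Nov 04 20:31:43 +0000 2015"
        created = datetime.datetime.strptime(tweet["created_at"],
                                             "%a %b %d %H:%M:%S %z %Y")
        tweets_per_day[created.strftime("%Y-%m-%d")] += 1

# write a CSV file that a spreadsheet can import
with open("tweets_per_day.csv", "w") as out:
    out.write("Day,NumTweets\n")
    for day in sorted(tweets_per_day):
        out.write("{0},{1}\n".format(day, tweets_per_day[day]))

Changing the strftime format to "%Y-%m-%d %H" gives a per-hour count instead, which is one way to approach the per-hour goal above.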

Hints and New Material

This section lists a few new topics.

CSV Files

Comma-Separated Values (CSV) is a common way to store structured data in a text file. An example CSV file would be:

Day,NumUsers
2015-11-03,345
2015-11-04,451
...

The nice thing about CSV files is that spreadsheet programs like Microsoft Excel can import them, so it's easy to plot columns.
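
If you prefer, Python's built-in csv module can write these files for you and will take care of quoting; here is a small sketch using the same columns as the example above:

import csv

rows = [("2015-11-03", 345), ("2015-11-04", 451)]    # example values from above

with open("users_per_day.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["Day", "NumUsers"])
    for day, num_users in rows:
        writer.writerow([day, num_users])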

Dates and times

Python provides rich datetime functionality, available via import datetime. We encourage you to look at the documentation at TODO. In our examples, the only feature we'll use is the ability to create strings from datetimes.

For example, you can parse a tweet's created_at timestamp with strptime and then turn the resulting datetime back into whatever string you need with strftime. A minimal sketch follows; the created_at format string is our assumption about how Twitter reports timestamps.
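
import datetime

# parse Twitter's created_at string into a datetime object
created = datetime.datetime.strptime("Wed Nov 04 20:31:43 +0000 2015",
                                     "%a %b %d %H:%M:%S %z %Y")

# then create whatever strings you need from the datetime
print(created.strftime("%Y-%m-%d"))      # 2015-11-04  (useful for per-day counts)
print(created.strftime("%Y-%m-%d %H"))   # 2015-11-04 20  (useful for per-hour counts)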

Details

Sampling

Since the raw data is too large for everyone to download locally, we had to downsample it to a reasonable subset. Downsampling usually involves choosing a random subset of the raw data, but it's important to think about how that sample is constructed. For instance, consider two sampling criteria for a 1% sample: by tweet and by user. Sampling by tweet means that every single tweet has a 1 in 100 chance of being in the sample. However, what if I want to count the number of tweets per user? Since I don't have all the tweets for any particular user, my estimate is going to be wrong. Sampling by user instead keeps either all of an account's tweets or none of them. That way I can still estimate the number of tweets in a time period, and I can also make valid measurements of per-user metrics.
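
For illustration (this is not necessarily how the provided sample was built), one simple way to sample by user is to hash each tweet's user ID and keep the tweet only when the hash falls into a fixed 1% bucket, so any given account is always entirely in or entirely out of the sample:

import json
import zlib

def keep_user(user_id, percent=1):
    # deterministically keep roughly `percent`% of users based on a hash of their ID
    bucket = zlib.crc32(str(user_id).encode("utf-8")) % 100
    return bucket < percent

sample = []
with open("earthquake_tweets.json") as f:        # hypothetical filename
    for line in f:
        tweet = json.loads(line)
        if keep_user(tweet["user"]["id"]):       # all of this user's tweets, or none of them
            sample.append(tweet)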

Full source code

The full source code of this exercise is available on GitHub at [1]. The examples are in the students section.