Twitter analysis (CDSW)

From CommunityData
== Getting Started ==


Download the zipped file with a 10% sample of tweets from [https://s3-us-west-2.amazonaws.com/cc.communitydata.tweetstream/twitter_wk3_withdata.zip here].


Extract the zipped file to your desktop and cd into it.
== Analyzing Tweet Data with Python ==


Last week, the [[Community_Data_Science_Workshops_%28Fall_2015%29/Day_2_projects/Twitter|Twitter API Session]] covered accessing data from the Twitter API, including accessing streaming data. After the session, we set up a streaming collection from the code you modified in the workshop to track all earthquake-related tweets. That stream captured 3.5GB of tweets (!). We've given you a sample of those tweets in the zipped file above.


Our goal in this workshop is to use those tweets to answer questions. As always, we've suggested questions below, but the real goal is for you to find something in the data that you find interesting.


Your data consists of one tweet per line, encoded in JSON exactly as Twitter returns it.
Looking at the Twitter API documentation will help you make sense of that funny-looking JSON data: [https://dev.twitter.com/overview/api]
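For example, a single line from the file can be parsed into a Python dictionary with the <code>json</code> module. The tweet below is a hypothetical, heavily trimmed example for illustration; real tweets from the API contain many more fields:

<syntaxhighlight lang="python">
import json

# A made-up, heavily trimmed tweet line -- real tweets have many more fields.
line = '{"text": "Did anyone else feel that earthquake?", "timestamp_ms": "1446821532000", "user": {"screen_name": "example_user", "lang": "en"}}'

tweet = json.loads(line)
print(tweet['text'])                 # the tweet's text
print(tweet['user']['screen_name'])  # who posted it
</syntaxhighlight>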


== Goals ==
* Build a histogram of tweets per day. Use it to identify potential events. We'll use Python to make a CSV file that we can import into a spreadsheet.
* Change your program to plot the number of users who are tweeting.
* Pick an interesting time period from the data. Can you figure out where an event took place based on the locale of the user? What about geographic information?
* Who gets retweeted the most in the data set?
* Modify your histogram to look at tweets per hour rather than per day.
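As a starting point, the first goal can be sketched like this. The function names and the filenames in the usage note are hypothetical placeholders, and we assume each line of your file is a JSON tweet with a <code>timestamp_ms</code> field:

<syntaxhighlight lang="python">
import csv
import datetime
import json
from collections import Counter

def count_tweets_per_day(tweet_lines):
    """Count how many tweets fall on each calendar day."""
    tweets_per_day = Counter()
    for line in tweet_lines:
        tweet = json.loads(line)
        # timestamp_ms is milliseconds since Jan 1, 1970; divide by 1000 for seconds
        daytime = datetime.datetime.fromtimestamp(int(tweet['timestamp_ms']) / 1000)
        tweets_per_day[daytime.strftime('%Y-%m-%d')] += 1
    return tweets_per_day

def write_counts_csv(tweets_per_day, filename):
    """Write the per-day counts as a CSV we can open in a spreadsheet."""
    with open(filename, 'w') as output:
        writer = csv.writer(output)
        writer.writerow(['day', 'tweets'])
        for day in sorted(tweets_per_day):
            writer.writerow([day, tweets_per_day[day]])
</syntaxhighlight>

You would then call something like <code>write_counts_csv(count_tweets_per_day(open('your_tweet_file.json')), 'tweets_per_day.csv')</code>, substituting the name of the tweet file you extracted.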
'''Dates and times'''


Python provides rich datetime functionality available via <code>import datetime</code>. We encourage you to look at the documentation at [https://docs.python.org/2/library/datetime.html]. In our examples, the only function we'll use is the ability to create strings from datetimes. Everything we need is in this example.
 
 
<syntaxhighlight lang="python">
import datetime
import json

# "tweets" is the open file of tweet JSON, one tweet per line
for line in tweets:
    tweet_as_dictionary = json.loads(line)
    tweet_daytime = datetime.datetime.fromtimestamp(int(tweet_as_dictionary['timestamp_ms']) / 1000)
    tweet_day = tweet_daytime.strftime('%Y-%m-%d')
</syntaxhighlight>
 
The line <code>int(tweet_as_dictionary['timestamp_ms']) / 1000</code> does a few things:
* extracts the <code>timestamp_ms</code> field from the tweet. This is the tweet's time, measured in milliseconds since Jan 1, 1970. See [https://en.wikipedia.org/wiki/Unix_time] for more details.
* divides the timestamp in milliseconds by 1000 to get the number of seconds since Jan 1, 1970.
* converts it to a datetime in Python.
 
The other line that matters is <code>tweet_day = tweet_daytime.strftime('%Y-%m-%d')</code>, which takes the datetime and converts it to a string. The characters <code>%Y-%m-%d</code> describe the string format. The codes can be found in the documentation [https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior]. For our purposes, here are the codes and the portion of the date they represent for the date "2015-11-06 14:52:12":


* %Y - 2015
* %m - 11
* %d - 06
* %H - 14 (note 24 hour time)
* %M - 52
* %S - 12
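Putting the pieces together, converting a millisecond timestamp into date strings looks like this. The timestamp below is a made-up example, and the exact output depends on your computer's time zone:

<syntaxhighlight lang="python">
import datetime

timestamp_ms = 1446821532000  # hypothetical value of a tweet's timestamp_ms field
tweet_daytime = datetime.datetime.fromtimestamp(timestamp_ms / 1000)

print(tweet_daytime.strftime('%Y-%m-%d'))           # e.g. 2015-11-06
print(tweet_daytime.strftime('%Y-%m-%d %H:%M:%S'))  # adds the time of day
print(tweet_daytime.strftime('%H'))                 # just the hour -- handy for a per-hour histogram
</syntaxhighlight>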


== Details ==