Community Data Science Workshops (Spring 2016)/Day 3 Projects/Twitter
__NOTOC__

== Getting Started ==

Download a zipped file with a 10% sample of tweets from [https://s3-us-west-2.amazonaws.com/cc.communitydata.tweetstream/twitter_wk3_withdata.zip here]. Extract the zipped file to your desktop and cd into it.

== Analyzing Tweet Data with Python ==

''Note'': you can attend this session even if you didn't do Twitter last week.

Last week, the [[Community_Data_Science_Workshops_%28Spring_2016%29/Day_2_Projects/Twitter|Twitter API Session]] covered accessing data from the Twitter API, including accessing streaming data. Last fall, we set up a streaming collection from the code you modified in the workshop to track all earthquake-related tweets. That stream captured 3.5GB of tweets (!). We've given you a sample of those tweets in the zipped file above.

Our goal in this workshop is to explore the data we collected. As always, we've suggested questions below, but the real goal is for you to find something in the data that you find interesting.

Your data consists of one tweet per line, encoded in JSON exactly as Twitter returns it. Looking at the Twitter API documentation will help you make sense of that funny-looking JSON data: [https://dev.twitter.com/overview/api]

== Goals ==

Starting with the sample program, try to answer the following questions. The first one is done for you!

* Build a histogram of tweets per day. Use it to identify potential events. We'll use Python to make a CSV file that we can import into a spreadsheet.
* Change your program to plot the number of users that are tweeting.
* Pick an interesting time period from the data. Can you figure out where an event took place based on the locale of the users? What about geographic information?
* Who gets retweeted the most in the data set? (There's a sketch of one approach at the end of this page.)
* Modify your histogram to look at tweets per hour rather than per day.
* Use your imagination! What else can you find?

== Hints and New Material ==

This section lists a few new topics.

'''CSV Files'''

Comma-Separated Values (CSV) is a common way to store structured data in a text file. An example CSV file would be:

 Day,NumUsers
 2015-11-03,345
 2015-11-04,451
 ...

The nice thing about CSV files is that spreadsheet programs like Microsoft Excel can import them, so it's easy to plot columns.

'''Dates and times'''

Python provides rich datetime functionality, available via <code>import datetime</code>. We encourage you to look at the documentation at [https://docs.python.org/2/library/datetime.html]. In our examples, the only functionality we'll use is the ability to create strings from datetimes. Everything we need is in this example.

<syntaxhighlight lang="python">
import datetime
import json

# 'tweets' here is an open file with one JSON-encoded tweet per line
for line in tweets:
    tweet_as_dictionary = json.loads(line)
    # timestamp_ms is milliseconds since Jan 1, 1970; divide by 1000 for seconds
    tweet_daytime = datetime.datetime.fromtimestamp(int(tweet_as_dictionary['timestamp_ms']) / 1000)
    tweet_day = tweet_daytime.strftime('%Y-%m-%d')
</syntaxhighlight>

The line <code>int(tweet_as_dictionary['timestamp_ms']) / 1000</code> does a few things:

* extract the <code>timestamp_ms</code> field from the tweet. This is the tweet time measured in milliseconds since Jan 1, 1970. See [https://en.wikipedia.org/wiki/Unix_time] for more details.
* divide the timestamp in milliseconds by 1000 to get the number of seconds since Jan 1, 1970.
* convert it to a Python datetime.

The other line that matters is <code>tweet_day = tweet_daytime.strftime('%Y-%m-%d')</code>, which takes the datetime and converts it to a string. The characters <code>%Y-%m-%d</code> describe the string format.
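Here's a small illustrative snippet (not part of the workshop code) you can paste into a Python prompt to see how different format codes render the same moment:

<syntaxhighlight lang="python">
import datetime

# an arbitrary moment, used only to demonstrate format codes
moment = datetime.datetime(2015, 11, 6, 14, 52, 12)

print(moment.strftime('%Y-%m-%d'))     # 2015-11-06
print(moment.strftime('%Y-%m-%d %H'))  # 2015-11-06 14 (handy for per-hour counts)
print(moment.strftime('%H:%M:%S'))     # 14:52:12
</syntaxhighlight>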
The codes can be found in the documentation [https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior]. For our purposes, here are the codes and the portion of the date they represent for the date "2015-11-06 14:52:12":

* %Y - 2015
* %m - 11
* %d - 06
* %H - 14 (note 24-hour time)
* %M - 52
* %S - 12
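Putting the CSV and datetime pieces together, here's a minimal sketch of one way to produce the <code>Day,NumUsers</code> table shown above. The file names are placeholders, and it assumes each tweet carries a <code>user</code> object with an <code>id_str</code> field, as in Twitter's streaming JSON; check a few lines of your data to confirm before relying on it.

<syntaxhighlight lang="python">
import datetime
import json
from collections import defaultdict

users_per_day = defaultdict(set)  # day string -> set of user ids seen that day

with open('tweets.json') as tweets:  # placeholder: use the file from the zip
    for line in tweets:
        tweet = json.loads(line)
        daytime = datetime.datetime.fromtimestamp(int(tweet['timestamp_ms']) / 1000)
        users_per_day[daytime.strftime('%Y-%m-%d')].add(tweet['user']['id_str'])

with open('users_per_day.csv', 'w') as output:
    output.write('Day,NumUsers\n')
    for day in sorted(users_per_day):
        output.write('{0},{1}\n'.format(day, len(users_per_day[day])))
</syntaxhighlight>

Swapping the format string for <code>'%Y-%m-%d %H'</code> buckets by hour instead, which covers the per-hour goal too.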
== Details ==

'''Sampling'''

Since the raw data is too large for everyone to download locally, we had to downsample to a reasonable subset. Downsampling usually involves choosing a random subset of the raw data, but it's important to think about how that sample is constructed. For instance, consider two sampling criteria for a 1% sample: by tweet and by user. Sampling by tweet means that every single tweet has a 1 in 100 chance of being in the sample. However, what if I want to count the number of tweets per user? Since I don't have all the tweets for any particular user, my estimate is going to be wrong. Sampling by user captures either all of the tweets by an account or none of them. This way I can still estimate the number of tweets in a time period, and I can also make valid measurements of per-user metrics.
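To make the two criteria concrete, here's an illustrative sketch (not the script we actually used to build your sample). The hashing trick ensures a given account is either always kept or always dropped:

<syntaxhighlight lang="python">
import random
import zlib

def keep_by_tweet(tweet):
    # every tweet independently has a 1 in 100 chance of being kept
    return random.random() < 0.01

def keep_by_user(tweet):
    # hash the user id so a given account is either always kept or always
    # dropped; roughly 1 in 100 accounts land in the sample, with all their tweets
    return zlib.crc32(tweet['user']['id_str'].encode('utf-8')) % 100 == 0
</syntaxhighlight>

With <code>keep_by_user</code>, per-user counts computed on the sample are complete for every sampled account, which is exactly the property that tweet-level sampling destroys.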
'''English'''

By now you've seen that most of the earthquakes appear to have occurred in places that don't use English. Our tracker uses only English words, which means that we only see earthquakes when screen names, retweets, or the odd person tweeting in English mentions them. This has serious implications for the number of tweets our sample contains for each earthquake.

'''Full source code'''

The full source code for this exercise is available on GitHub at [https://github.com/offbyone/tweetstream]. The examples are in the <code>students</code> section.
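Finally, as promised in the goals list, here's a minimal sketch of one approach to the retweet question. It assumes retweets carry a <code>retweeted_status</code> field holding the original tweet, as in Twitter's streaming JSON; the file name is a placeholder.

<syntaxhighlight lang="python">
import json
from collections import Counter

retweet_counts = Counter()

with open('tweets.json') as tweets:  # placeholder: use the file from the zip
    for line in tweets:
        tweet = json.loads(line)
        if 'retweeted_status' in tweet:
            # credit the author of the original (retweeted) tweet
            retweet_counts[tweet['retweeted_status']['user']['screen_name']] += 1

# print the ten most retweeted accounts
for screen_name, count in retweet_counts.most_common(10):
    print('{0}: {1}'.format(screen_name, count))
</syntaxhighlight>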