Twitter (CDSW)
In this project, we will explore a few ways to gather data using the Twitter API. Once we've done that, we will extend the example code to create our own dataset of tweets. In the final workshop (Feb 15), we will ask and answer questions with the data we've collected.
Goals
- Get set up to build datasets with the Twitter API
- Have fun collecting different types of tweets using a variety of ways to search
- Practice reading and extending other people's code
- Create a few collections of Tweets to use in your projects
Prerequisite
To participate in the Twitter afternoon session, you must have registered with Twitter as a developer before the session by following the Twitter authentication setup instructions. If you did not do this, or if you tried but did not succeed, please attend one of the other two sessions instead.
Download and test the Twitter project
If you are confused by these steps, go back and refresh your memory with the Day 0 setup instructions.
(Estimated time: 10 minutes)
Download the Twitter API project
- Download the following zip file: https://github.com/CommunityDataScienceCollective/twitter-cdsw/archive/master.zip
- Extract the zip folder into a new folder on your Desktop.
Enter your API information
- Start Jupyter Notebook and navigate to the folder you just created on your Desktop.
- Double-click to open the file "twitter_authentication.py". This is a Python file, meaning it contains Python code, but it is not a notebook.
- You will see four lines, each assigning a variable written in ALL CAPITALS to a string, in the normal way we learned about last session. At the moment, all of the strings say CHANGE_ME.
- Go find the four keys, tokens, and secrets you created and wrote down when you followed the Twitter authentication setup. Change every string that says CHANGE_ME into a string containing the corresponding key, token, or secret. Remember that since these are strings, we need to include quotation marks around them. Also make sure that you match each key and token with the right variable. (A sketch of what the finished file might look like appears below.)
Once you have done this, your example programs are set up to use the Twitter API!
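For reference, here is a minimal sketch of what "twitter_authentication.py" might look like once it is filled in. The variable names below are only a common convention; keep whatever names already appear in the file you downloaded, and paste in your own values rather than these placeholders.

# twitter_authentication.py -- illustrative placeholders only.
# Keep the variable names from the downloaded file and paste in your own values.
CONSUMER_KEY = "your_consumer_key_goes_here"
CONSUMER_SECRET = "your_consumer_secret_goes_here"
ACCESS_TOKEN = "your_access_token_goes_here"
ACCESS_TOKEN_SECRET = "your_access_token_secret_goes_here"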
Test the Twitter API code
Open the notebook "ex0_print_a_tweet.py" in Jupyter and execute all of the cells. You should see the text of 100 tweets in the second-to-last cell. If you see an error, you probably have a problem with the API information you entered in the previous step; a volunteer can help you.
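If the notebook reports an authentication error and you want to check your keys directly, a quick test like the one below should work. This sketch assumes the examples use the tweepy library and the variable names from the sketch above; if the project uses different names or a different library, adjust accordingly.

import tweepy
from twitter_authentication import (CONSUMER_KEY, CONSUMER_SECRET,
                                    ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)

# verify_credentials() returns your own user object when the keys are valid.
me = api.verify_credentials()
print("Authenticated as", me.screen_name)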
Potential exercises
Who are my followers?
- Alter code example 2 (twitter2.py) to get your followers.
- For each of your followers, get *their* followers (investigate time.sleep to throttle your computation; a sketch follows this list)
- Identify which of your followers follows the most of your other followers.
- How many handles follow you but follow none of your other followers?
- Repeat this for the people you follow, rather than the people who follow you.
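Here is a rough sketch of the followers-of-followers idea, assuming tweepy and an authenticated api object like the one above. The 5-second sleep is illustrative; the followers endpoint is heavily rate-limited, followers_ids returns at most 5,000 IDs per call (larger accounts need paging, e.g. with tweepy.Cursor), and exception names differ across tweepy versions.

import time
import tweepy

# Numeric IDs of everyone who follows you (fill in your own handle).
my_followers = api.followers_ids(screen_name="your_screen_name")

# For each follower, fetch *their* follower IDs, pausing between calls
# so we do not hit the rate limit immediately.
followers_of_followers = {}
for user_id in my_followers:
    try:
        followers_of_followers[user_id] = set(api.followers_ids(user_id=user_id))
    except tweepy.TweepError:
        # Protected accounts and rate-limit errors raise exceptions; skip them.
        followers_of_followers[user_id] = set()
    time.sleep(5)  # throttle the loop with time.sleep

# X follows Y exactly when X appears in Y's follower list, so the data above
# also tells us which of your followers follows the most of the others.
follow_counts = {x: sum(1 for y, y_fans in followers_of_followers.items()
                        if y != x and x in y_fans)
                 for x in my_followers}
top = max(follow_counts, key=follow_counts.get)
print(top, "follows", follow_counts[top], "of your other followers")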
Topics and Trends
- Alter code example 3 (twitter3.py) to produce a list of 1000 tweets about a topic (a sketch follows this list).
- Look at those tweets. How does Twitter interpret a two-word query like "data science"?
- Eliminate retweets [hint: look at the tweet object! https://dev.twitter.com/overview/api/tweets]
- For each original tweet, count the number of times you see it retweeted.
- Get a list of the URLs that are associated with your topic.
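A rough sketch of the topic exercises, again assuming tweepy and an authenticated api object; the "data science" query and the 1,000-tweet target are just the examples from this list, and attribute access may differ slightly across tweepy versions.

import tweepy

# Collect up to 1,000 recent tweets about a topic; Cursor handles the paging.
topic_tweets = list(tweepy.Cursor(api.search, q="data science").items(1000))

# A retweet carries a retweeted_status attribute pointing at the original tweet,
# which lets us both drop retweets and count them per original.
originals = [t for t in topic_tweets if not hasattr(t, "retweeted_status")]

retweet_counts = {}
for t in topic_tweets:
    if hasattr(t, "retweeted_status"):
        original_id = t.retweeted_status.id
        retweet_counts[original_id] = retweet_counts.get(original_id, 0) + 1

# URLs shared alongside the topic live in each tweet's entities.
urls = [u["expanded_url"] for t in topic_tweets for u in t.entities["urls"]]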
Geolocation from Search API
- Get the last 50 tweets from Ballard.
- Get the last 50 tweets from Times Square.
- Using timestamps, can you estimate whether people tweet more often in Ballard or Times Square? (A sketch follows this list.)
- A baseball game happened today (May 11) between the Seattle Mariners and the Tampa Bay Rays. Using two geo searches, see if you can tell which city hosted the game. Note: if you do this some other day, you should pick a new sporting event.
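One way to approach the Ballard vs. Times Square comparison, assuming tweepy and an authenticated api object. The center-point coordinates below are approximate guesses worth checking on a map, and note that the search geocode takes latitude, longitude, radius.

# Approximate center points: Ballard (Seattle) and Times Square (New York).
places = {"Ballard": "47.6686,-122.3847,1mi",
          "Times Square": "40.7580,-73.9855,1mi"}

for name, geocode in places.items():
    tweets = api.search(geocode=geocode, count=50, result_type="recent")
    if tweets:
        # Recent results come back newest first; the 50 most recent tweets
        # span less time in a busier place, so a shorter span = a higher rate.
        span = tweets[0].created_at - tweets[-1].created_at
        print(name, ": last", len(tweets), "tweets span", span)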
Geolocation
- Alter the streaming example to include a "locations" filter. You need to use the order sw_lng, sw_lat, ne_lng, ne_lat for the four coordinates. (Recall that Control-C will stop an active process like the stream; a sketch follows this list.)
- What are people tweeting about in Times Square today? (Bonus points: set up a bounding box around TS and around NYC as a whole.)
- Can you find words that are more likely to appear in TS?
- Boston is playing Houston in baseball right now. Set up a bounding box around Houston and Boston. Can you identify tweets about baseball? Who tweets more about the game? Can you tell which team is the home team? (You can use
d = api.search(geocode='29.757480,-95.355549,5mi')
to get Tweets from Houston; note that the search geocode takes latitude, longitude, radius, which is the opposite order from the streaming locations filter. Use Google or Bing Maps to get similar coordinates around Fenway Park.)
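For the streaming exercise, here is a minimal sketch of a locations filter, assuming tweepy's older streaming interface (StreamListener was renamed in later tweepy versions) and the auth object from the credential-check sketch earlier. The Times Square bounding box is approximate; remember the order sw_lng, sw_lat, ne_lng, ne_lat.

import tweepy

class PrintListener(tweepy.StreamListener):
    def on_status(self, status):
        # Print each tweet Twitter places inside the bounding box.
        print(status.text)

stream = tweepy.Stream(auth=auth, listener=PrintListener())

# Approximate bounding box around Times Square: sw_lng, sw_lat, ne_lng, ne_lat.
times_square = [-73.9900, 40.7540, -73.9810, 40.7610]
stream.filter(locations=times_square)  # stop the stream with Control-C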
Congratulations!!!!
You now know how to capture data from Twitter that you can use in your research!!! Next workshop we'll play with some fun analytical tools. In the meantime, here are a few words of caution about using Twitter data for science.