In this project, we will explore a few ways to gather data using the Twitter API. Once we've done that, we will extend the example code to create our own dataset of tweets. In the final workshop (Feb 15), we will ask and answer questions with the data we've collected.
- Get set up to build datasets with the Twitter API
- Have fun collecting different types of tweets using a variety of ways to search
- Practice reading and extending other people's code
- Create a few collections of Tweets to use in your projects
To participate in the Twitter afternoon session, you must have registered with Twitter as a developer before the session by following the Twitter authentication setup instructions. If you did not do this, or if you tried but did not succeed, please attend one of the other two sessions instead.
Download and test the Twitter project
If you are confused by these steps, go back and refresh your memory with the Day 0 setup instructions
(Estimated time: 10 minutes)
Download the Twitter API project
- Download the following zip file: https://github.com/CommunityDataScienceCollective/twitter-cdsw/archive/master.zip
- Extract the zip file into a new folder on your Desktop.
Enter your API information
- Start Jupyter Notebook and navigate to the folder you just created on your Desktop.
- Double click to open the file "twitter_authentication.py". This is a python file, meaning it contains python code, but it is not a notebook.
- You will see four lines that assign four variables in ALL CAPITALS to strings, in the normal way we learned about last session. At the moment, all of the strings say CHANGE_ME.
- Go find the four keys, tokens, and secrets you created and wrote down when you followed the Twitter authentication setup. Change every string that says CHANGE_ME into a string that includes the key, token, or secret you downloaded. Remember that since these are strings, we need to include quotation marks around them. Also make sure that you match up the right keys and tokens with the right variables.
Once you have done this, your example programs are set up to use the Twitter API!
Test the Twitter API code
Open the notebook "ex0_print_a_tweet.py" in Jupyter and execute all of the cells. You should see the text of 100 tweets in the second-to-last cell. If you see an error, you probably have a problem with the API information you entered in the previous step. A volunteer can help you.
Making your own notebooks
You will do the exercises below in your own notebook, which you will create. In every notebook you make, put the following python code in the first cell:
```python
import tweepy
from twitter_authentication import CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)
```
This will enable your authenticated Twitter API calls via the variable `api`.
Who are my followers?
- Alter code example 1 (ex1_get_user_info.ipynb) to get your followers.
- For each of your followers, get *their* followers (investigate time.sleep to throttle your computation)
- Identify which of your followers is followed by the most of your other followers.
- How many handles follow you but do not follow any of your followers?
- Repeat these steps for the people you follow, rather than the people who follow you.
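Once the follower lists are downloaded, the mutual-follower questions above are just set logic. Here is a minimal sketch; `most_followed_follower` and the variable names are our own, not part of the example code, and the commented tweepy calls assume the `api` object from the setup cell:

```python
# Hypothetical helper: given the list of your follower IDs and a dict
# mapping each follower to the set of *their* follower IDs, return the
# follower of yours who is followed by the most of your other followers.
def most_followed_follower(my_followers, their_followers):
    counts = {}
    for f in my_followers:
        counts[f] = sum(1 for g in my_followers
                        if g != f and f in their_followers.get(g, set()))
    return max(counts, key=counts.get)

# Gathering side (assumes the `api` object from the setup cell; these
# calls are rate-limited, hence time.sleep to throttle the loop):
# import time
# my_followers = api.followers_ids(screen_name='your_handle')
# their_followers = {}
# for f in my_followers:
#     their_followers[f] = set(api.followers_ids(user_id=f))
#     time.sleep(60)
```

The helper works on plain lists and dicts, so you can test it on small hand-made examples before running the slow data-gathering loop.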
Topics and Trends
- Alter code example 2 (ex2_search.ipynb) to produce a list of 1000 tweets about a topic.
- How does Twitter interpret a two-word query like "data science"?
- Eliminate retweets [hint: look at the tweet object! https://dev.twitter.com/overview/api/tweets]
- For each original tweet, count the number of times you see it retweeted.
- Get a list of the URLs that are embedded in Tweets with your topic.
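Two of the steps above (eliminating retweets and collecting URLs) only require inspecting the tweet object. A minimal sketch, assuming each tweet is the raw JSON dictionary (`status._json` in tweepy); the helper names are our own:

```python
def is_retweet(tweet):
    # Retweets carry a 'retweeted_status' field in the tweet object.
    return 'retweeted_status' in tweet

def embedded_urls(tweet):
    # Expanded URLs live under entities -> urls in the tweet object.
    return [u['expanded_url'] for u in tweet.get('entities', {}).get('urls', [])]

# Gathering side (assumes the `api` object from the setup cell):
# tweets = [s._json for s in tweepy.Cursor(api.search, q='"data science"').items(1000)]
# originals = [t for t in tweets if not is_retweet(t)]
# urls = [u for t in tweets for u in embedded_urls(t)]
```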
Geolocation from the Search API
This section will require you to investigate the filter function in example 2 in more detail.
- Get the last 50 tweets from Ballard.
- Get the last 50 tweets from Times Square.
- Using timestamps, can you estimate whether people tweet more often in Ballard or Times Square?
- A baseball game happened today (May 11) between the Seattle Mariners and the Tampa Bay Rays. Using two geo searches, see if you can tell which city hosted the game. Note: if you do this some other day, you should pick a new sporting event.
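A sketch of the search side, assuming the `api` object from the setup cell; the helper and the coordinates are our own additions (and the coordinates are approximate):

```python
def geocode_string(lat, lng, radius_miles):
    # Twitter's search geocode parameter has the form "lat,lng,Nmi".
    return '{},{},{}mi'.format(lat, lng, radius_miles)

# Approximate coordinates for Ballard and Times Square:
# ballard = api.search(geocode=geocode_string(47.6687, -122.3847, 5))
# times_square = api.search(geocode=geocode_string(40.7580, -73.9855, 5))
```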
Geolocation in the Streaming API
- Alter the streaming algorithm to include a "locations" filter. You need to use the order sw_lng, sw_lat, ne_lng, ne_lat for the four coordinates. (Recall the stop button will stop an active process like the stream.)
- What are people tweeting about in Times Square today? (Bonus points: set up a bounding box around TS and around NYC as a whole.)
- Can you find words that are more likely to appear in Times Square (hint: you'll need two bounding boxes)?
- Oregon State is playing basketball against UC Berkeley. Set up a bounding box around Berkeley and Corvallis, Oregon. Can you identify tweets about basketball? Who tweets more about the game? Can you tell which team is the home team?
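Comparing two bounding boxes (e.g., Times Square versus NYC as a whole) boils down to a point-in-box test on each tweet's coordinates. A minimal sketch; `in_box` is our own helper, and the commented streaming lines assume the `auth` object from the setup cell:

```python
def in_box(lng, lat, sw_lng, sw_lat, ne_lng, ne_lat):
    # The locations filter uses the order sw_lng, sw_lat, ne_lng, ne_lat;
    # this helper checks whether a coordinate falls inside that box.
    return sw_lng <= lng <= ne_lng and sw_lat <= lat <= ne_lat

# Streaming side (assumes the `auth` object from the setup cell):
# class PrintListener(tweepy.StreamListener):
#     def on_status(self, status):
#         print(status.text)
#
# stream = tweepy.Stream(auth=auth, listener=PrintListener())
# # approximate box around Times Square:
# stream.filter(locations=[-73.9900, 40.7540, -73.9840, 40.7600])
```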
Geolocation hint: You can use

```python
d = api.search(geocode='[lat],[lng],5mi')
```

to get Tweets from a five-mile radius around a point. (Note that the search API's geocode takes latitude first, unlike the streaming API's locations filter.) Use Google or Bing maps to find the coordinates of a point of interest, such as Fenway Park.
You now know how to capture data from Twitter that you can use in your research! In the next workshop we'll explore some fun analytical tools. In the meantime, here are a few words of caution about using Twitter data for science.