Twitter (CDSW)

From CommunityData

In this project, we will explore a few ways to gather data using the Twitter API. Once we've done that, we will extend the example code to create our own dataset of tweets. In the final workshop (Feb 15), we will ask and answer questions with the data we've collected.

Goals

  • Get set up to build datasets with the Twitter API
  • Have fun collecting different types of tweets using a variety of ways to search
  • Practice reading and extending other people's code
  • Create a few collections of Tweets to use in your projects

Prerequisite

To participate in the Twitter afternoon session, you must have registered with Twitter as a developer before the session by following the Twitter authentication setup instructions. If you did not do this, or if you tried but did not succeed, please attend one of the other two sessions instead.

Download and test the Twitter project

If you are confused by these steps, go back and refresh your memory with the Day 0 setup instructions.

(Estimated time: 10 minutes)

Download the Twitter API project

Enter your API information

  • Start Jupyter notebook and navigate to the folder you just created on your desktop.
  • Double click to open the file "twitter_authentication.py". This is a Python file, meaning it contains Python code, but it is not a notebook.
  • You will see four lines that assign four variables in ALL CAPITALS to strings, in the normal way we learned about last session. At the moment, all of the strings say CHANGE_ME.
  • Go find the four keys, tokens, and secrets you created and wrote down when you followed the Twitter authentication setup. Change every string that says CHANGE_ME into a string containing the corresponding key, token, or secret. Remember that since these are strings, we need to include quotation marks around them. Also make sure that you match up the right keys and tokens with the right variables.

Once you have done this, your example programs are set up to use the Twitter API!
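For reference, the finished file might look something like the sketch below. The values here are made up for illustration; paste in your own keys and tokens from the Twitter developer site, keeping the quotation marks.

```python
# twitter_authentication.py -- example of what the finished file looks like.
# The values below are fake placeholders; substitute your own credentials.
CONSUMER_KEY = "x1F3rExampleConsumerKey"
CONSUMER_SECRET = "aB9sExampleConsumerSecret"
ACCESS_TOKEN = "12345678-ExampleAccessToken"
ACCESS_TOKEN_SECRET = "ExampleAccessTokenSecret"
```

Each variable must be a string (in quotes); if you leave any of them as CHANGE_ME, the example programs will fail with an authentication error.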

Test the Twitter API code

Open the notebook "ex0_print_a_tweet.py" in Jupyter. Execute all of the cells. You should see the text of 100 tweets in the second-to-last cell. If you see an error, you probably have a problem with the API information you entered in the previous step. A volunteer can help you.


Making your own notebooks

You will do the exercises below in your own notebook, which you will create. In every notebook you make, put the following python code in the first cell:

import tweepy
from twitter_authentication import CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth)


This will enable your authenticated Twitter API calls via the variable api.

Potential exercises

Who are my followers?

  1. Alter code example 2 (twitter2.py) to get your followers.
  2. For each of your followers, get *their* followers (investigate time.sleep to throttle your computation)
  3. Identify which of your followers also follows the most of your other followers.
  4. How many handles follow you but none of your followers?
  5. Repeat this analysis for the people you follow, rather than the people who follow you.
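Once you have fetched the follower lists (for example by paging through tweepy.Cursor(api.followers_ids) with time.sleep() between pages to respect the rate limit), step 3 reduces to a counting problem. Here is one possible sketch, assuming you have already built a dict mapping each follower's id to the set of accounts they follow; the names and the tiny dataset are made up:

```python
def best_connected_follower(my_followers, follows_of):
    """Return the follower who follows the most of my other followers.

    my_followers -- set of my followers' ids
    follows_of   -- dict: follower id -> set of ids that follower follows
    """
    def overlap(follower):
        # Count how many of my *other* followers this follower also follows.
        return len((my_followers - {follower}) & follows_of.get(follower, set()))
    return max(my_followers, key=overlap)

# Tiny made-up example: follower 2 follows both 1 and 3, so 2 wins.
mine = {1, 2, 3}
follows = {1: {2}, 2: {1, 3}, 3: set()}
print(best_connected_follower(mine, follows))  # 2
```

The same overlap idea, with the sets swapped, answers step 4 (followers that none of your followers follow).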

Topics and Trends

  1. Alter code example 3 (twitter3.py) to produce a list of 1000 tweets about a topic.
  2. Look at those tweets. How does Twitter interpret a two-word query like "data science"?
  3. Eliminate retweets [hint: look at the tweet object! https://dev.twitter.com/overview/api/tweets]
  4. For each original tweet, count the number of times you see it retweeted.
  5. Get a list of the URLs that are associated with your topic.
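For steps 3 and 4, the tweet object linked above is the key: a tweet that is a retweet carries a retweeted_status field pointing at the original. A possible sketch, operating on tweets as dicts (the JSON form that the API returns); the miniature dataset is invented for illustration:

```python
from collections import Counter

def count_retweets(tweets):
    """Separate originals from retweets and tally retweets per original.

    tweets -- list of tweet dicts in the API's JSON form.
    Returns (list of original tweets, Counter: original id -> retweet count).
    """
    originals = [t for t in tweets if "retweeted_status" not in t]
    retweet_counts = Counter(
        t["retweeted_status"]["id"] for t in tweets if "retweeted_status" in t
    )
    return originals, retweet_counts

# Made-up miniature dataset: one original tweet, retweeted twice.
sample = [
    {"id": 10, "text": "data science!"},
    {"id": 11, "retweeted_status": {"id": 10}},
    {"id": 12, "retweeted_status": {"id": 10}},
]
originals, counts = count_retweets(sample)
print(len(originals), counts[10])  # 1 2
```

The URLs for step 5 live in each tweet's entities field, which you can pull out with the same dictionary-lookup pattern.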

Geolocation from Search API

  1. Get the last 50 tweets from Ballard.
  2. Get the last 50 tweets from Times Square.
  3. Using timestamps, can you estimate whether people tweet more often in Ballard or Times Square?
  4. A baseball game happened today (May 11) between the Seattle Mariners and the Tampa Bay Rays. Using two geo searches, see if you can tell which city hosted the game. Note: if you do this some other day, you should pick a new sporting event.
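For step 3, once you have the tweets' created_at timestamps, comparing the two neighborhoods is just a rate calculation: tweets divided by the time they span. A sketch of that helper, with an invented three-tweet example:

```python
from datetime import datetime

def tweets_per_hour(timestamps):
    """Estimate tweet rate as number of tweets / hours spanned.

    timestamps -- list of datetime objects (e.g. each tweet's created_at).
    """
    if len(timestamps) < 2:
        return 0.0
    span = max(timestamps) - min(timestamps)
    hours = span.total_seconds() / 3600
    return len(timestamps) / hours

# Made-up example: 3 tweets spread over 2 hours -> 1.5 tweets per hour.
ts = [datetime(2017, 5, 11, 12, 0),
      datetime(2017, 5, 11, 13, 0),
      datetime(2017, 5, 11, 14, 0)]
print(tweets_per_hour(ts))  # 1.5
```

Run this on the 50 tweets from each place and compare the two rates; since the search returns the most recent tweets, a higher rate means the same 50 tweets were produced in less time.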

Geolocation

  1. Alter the streaming algorithm to include a "locations" filter. You need to use the order sw_lng, sw_lat, ne_lng, ne_lat for the four coordinates. (Recall that Control-C will stop an active process like the stream.)
  2. What are people tweeting about in Times Square today? (Bonus points: set up a bounding box around TS and around NYC as a whole.)
  3. Can you find words that are more likely to appear in TS?
  4. Boston is playing Houston in baseball right now. Set up a bounding box around Houston and one around Boston. Can you identify tweets about baseball? Who tweets more about the game? Can you tell which team is the home team? (You can use d = api.search(geocode='29.757480,-95.355549,5mi') to get tweets from Houston; note that the search API's geocode argument takes latitude, longitude, and radius, in that order. Use Google or Bing maps to get a similar bounding box around Fenway Park.)
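Because the locations filter only roughly matches tweets to the box, it can help to double-check each tweet's coordinates yourself. A small sketch of that check, using the same sw_lng, sw_lat, ne_lng, ne_lat ordering the streaming filter expects; the Times Square coordinates below are rough values chosen for illustration:

```python
def in_bounding_box(lng, lat, box):
    """Check whether a point falls inside a locations-style bounding box.

    box -- [sw_lng, sw_lat, ne_lng, ne_lat], the same order the
           streaming API's locations filter expects.
    """
    sw_lng, sw_lat, ne_lng, ne_lat = box
    return sw_lng <= lng <= ne_lng and sw_lat <= lat <= ne_lat

# Rough Times Square box (approximate coordinates, for illustration only).
TS_BOX = [-73.9900, 40.7540, -73.9840, 40.7600]
print(in_bounding_box(-73.9857, 40.7580, TS_BOX))  # True
```

For exercise 2's bonus, you can run the stream on the wider NYC box and then use a function like this to split the collected tweets into "inside Times Square" and "rest of the city."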

Congratulations!!!!

You now know how to capture data from Twitter that you can use in your research!!! Next workshop we'll play with some fun analytical tools. In the meantime, here are a few words of caution about using Twitter data for science.