Twitter (CDSW)

__NOTOC__


In this project, we will explore a few ways to gather data using the Twitter API. Once we've done that, we will extend the example code to create our own dataset of tweets. In the final workshop (Feb 15), we will ask and answer questions with the data we've collected.


== Goals ==

* Get set up to build datasets with the Twitter API
* Have fun collecting different types of tweets using a variety of ways to search
* Practice reading and extending other people's code
* Create a few collections of Tweets to use in your projects

== Prerequisite ==

To participate in the Twitter afternoon session, you must have registered with Twitter as a developer before the session by following the [[Community Data Science Workshops/Twitter authentication setup|Twitter authentication setup]] instructions. If you did not do this, or if you tried but did not succeed, please attend one of the other two sessions instead.

== Download and test the Twitter project ==

If you are confused by these steps, go back and refresh your memory with the Day 0 setup instructions.

(Estimated time: 10 minutes)

===Download the Twitter API project===


* Download the following zip file: https://github.com/CommunityDataScienceCollective/twitter-cdsw/archive/master.zip
* Extract the zip folder into a new folder on your Desktop.


===Enter your API information===


* Start Jupyter notebook and navigate to the folder you just created on your desktop.
* Double click to open the file "twitter_authentication.py". This is a Python file, meaning it contains Python code, but it is not a notebook.
* You will see four lines that include four variables in ALL CAPITALS that are being assigned, in the normal ways we learned about last session, to strings. At the moment, all of the strings say CHANGE_ME.
* Go find the four keys, tokens, and secrets you created and wrote down when you followed the [[Community Data Science Workshops/Twitter authentication setup|Twitter authentication setup]]. Change every string that says CHANGE_ME into a string containing the corresponding key, token, or secret. Remember that since these are strings, you need to include quotation marks around them. Also make sure that you match the right keys and tokens with the right variables.
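When you are done, the file should look something like this. All four values below are made-up placeholders; use the real keys from your Twitter developer account:

```python
# twitter_authentication.py, after editing (every value below is fake)
CONSUMER_KEY = "q8rQZvuvCnXXXfakekeyXXX"
CONSUMER_SECRET = "TeQVkQkcoOXXXfakesecretXXX"
ACCESS_TOKEN = "16375702-XXXfaketokenXXX"
ACCESS_TOKEN_SECRET = "XXXfaketokensecretXXX"
```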
Once you have done this, your example programs are set up to use the Twitter API!

===Test the Twitter API code===


Open the notebook "ex0_print_a_tweet.py" in Jupyter. Execute all of the cells. You should see the text of 100 tweets in the second-to-last cell. If you see an error, you probably have a problem with the API information you entered in the previous step. A volunteer can help you.

===Making your own notebooks===

You will do the exercises below in your own notebook, which you will create. In every notebook you make, put the following python code in the first cell:
 import tweepy
 from twitter_authentication import CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET
 
 auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
 auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
 api = tweepy.API(auth)
This will enable your authenticated Twitter API calls via the variable <code>api</code>.
 


== Potential exercises ==


'''Topics and Trends'''


# Alter code example 2 (ex2_search.ipynb) to produce a list of 1000 tweets about a topic.
# How does Twitter interpret a two-word query like "data science"?
# Eliminate retweets [hint: look at the tweet object! https://dev.twitter.com/overview/api/tweets]
# For each original tweet, list the number of times you see it retweeted.
# Get a list of the URLs that are embedded in Tweets with your topic.
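For the retweet exercises, the key is that a retweet carries a <code>retweeted_status</code> field pointing at the original tweet. A minimal sketch of the filtering and counting logic, using a few made-up tweets in a simplified version of Twitter's JSON shape:

```python
from collections import Counter

# Made-up tweets in (simplified) Twitter JSON shape; a retweet has a
# "retweeted_status" key that contains the original tweet.
tweets = [
    {"id": 1, "text": "data science is fun"},
    {"id": 2, "text": "RT: data science is fun",
     "retweeted_status": {"id": 1, "text": "data science is fun"}},
    {"id": 3, "text": "RT: data science is fun",
     "retweeted_status": {"id": 1, "text": "data science is fun"}},
    {"id": 4, "text": "hello world"},
]

# Eliminate retweets: keep only tweets without a retweeted_status field.
originals = [t for t in tweets if "retweeted_status" not in t]

# Count how often each original tweet shows up as a retweet.
retweet_counts = Counter(t["retweeted_status"]["id"]
                         for t in tweets if "retweeted_status" in t)

print(len(originals))     # 2
print(retweet_counts[1])  # 2
```

The same two lines work on real search results once you convert each tweet to its JSON dictionary.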


'''Geolocation from Search API'''
This section will require you to investigate the filter function in example 2 in more detail.


# Get the last 50 tweets from Ballard.
# Get the last 50 tweets from Times Square.
# Using timestamps, can you estimate whether people tweet more often in Ballard or Times Square?
# A baseball game happened today (May 11) between the Seattle Mariners and the Tampa Bay Rays. Using two geo searches, see if you can tell which city hosted the game. Note: if you do this some other day, you should pick a new sporting event.
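For the timestamp comparison, note that each tweet's <code>created_at</code> field is a string like <code>Sat Feb 15 18:04:29 +0000 2020</code>. One way to estimate a tweet rate is to parse the timestamps and divide the number of tweets by the time span they cover; a sketch with made-up timestamps standing in for real search results:

```python
from datetime import datetime

# The search API returns created_at strings in this fixed format.
CREATED_AT_FORMAT = "%a %b %d %H:%M:%S %z %Y"

def tweets_per_hour(created_at_strings):
    """Estimate tweet rate as count divided by the span the tweets cover."""
    times = sorted(datetime.strptime(s, CREATED_AT_FORMAT)
                   for s in created_at_strings)
    span_hours = (times[-1] - times[0]).total_seconds() / 3600
    return len(times) / span_hours

# Made-up timestamps standing in for the last few tweets from one place.
ballard = ["Sat Feb 15 12:00:00 +0000 2020",
           "Sat Feb 15 13:00:00 +0000 2020",
           "Sat Feb 15 16:00:00 +0000 2020"]

print(tweets_per_hour(ballard))  # 3 tweets over 4 hours -> 0.75
```

Compute the same number for each neighborhood's tweets and compare.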


'''Geolocation in the streaming API'''


# Alter the streaming algorithm to include a "locations" filter. You need to use the order sw_lng, sw_lat, ne_lng, ne_lat for the four coordinates.  (Recall the stop button will stop an active process like the stream.)
# What are people tweeting about in Times Square today? (Bonus points: set up a bounding box around TS and around NYC as a whole.)
# Can you find words that are more likely to appear in Times Square (hint: you'll need two bounding boxes)?
# Oregon State is playing basketball against UC Berkeley. Set up a bounding box around Berkeley and Corvallis, Oregon. Can you identify tweets about basketball? Who tweets more about the game? Can you tell which team is the home team?  
 
Geolocation hint: You can use <code>d = api.search(geocode='[lat],[lng],5mi')</code> to get Tweets from a 5 mile radius around a point. Use Google or Bing maps to find the coordinates you need.
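A bounding box in the <code>locations</code> filter is just a southwest and a northeast corner, and you can check which box a tweet's coordinates fall into yourself. A sketch, with rough (not authoritative) corner coordinates:

```python
def in_box(lng, lat, box):
    """box is (sw_lng, sw_lat, ne_lng, ne_lat), the order the streaming API uses."""
    sw_lng, sw_lat, ne_lng, ne_lat = box
    return sw_lng <= lng <= ne_lng and sw_lat <= lat <= ne_lat

# Approximate bounding boxes (these corner values are rough estimates).
times_square = (-73.9894, 40.7553, -73.9834, 40.7603)
nyc = (-74.26, 40.48, -73.70, 40.92)

point = (-73.9857, 40.7580)  # a made-up point near the middle of Times Square
print(in_box(*point, times_square))  # True
print(in_box(*point, nyc))           # True
```

This lets you sort one stream of NYC tweets into "in Times Square" and "elsewhere in the city" piles for the word-comparison exercise.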
 
 
'''Who are my followers?'''
 
# Alter code example 1 (ex1_get_user_info.ipynb) to get your followers.
# For each of your followers, get *their* followers (investigate time.sleep to throttle your computation)
# Identify the follower you have that also follows the most of your followers.
# How many handles follow you but none of your followers?
# Repeat this for people you follow, rather than people who follow you.
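The set logic for exercises 3 and 4 can be sketched with made-up handles before you try it on real data (reading exercise 4 as "followers of yours who follow none of your other followers"). Remember <code>time.sleep</code> between the API calls that build <code>their_followers</code>:

```python
# Made-up handles: the people who follow me, and each one's own followers.
my_followers = {"ada", "grace", "alan"}
their_followers = {
    "ada":   {"grace", "alan", "kurt"},  # the followers of ada, and so on
    "grace": {"kurt"},
    "alan":  {"grace", "emmy"},
}

def follows_count(f):
    """How many of my followers does handle f follow?"""
    return sum(f in their_followers[g] for g in my_followers)

# Exercise 3: the follower who also follows the most of my followers.
best = max(my_followers, key=follows_count)
print(best)  # "grace", who follows two of my followers (ada and alan)

# Exercise 4: followers of mine who follow none of my other followers.
only_me = [f for f in my_followers if follows_count(f) == 0]
print(len(only_me))  # 1 -- only "ada"
```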


== Congratulations!!!!==

You now know how to capture data from Twitter that you can use in your research!!! Next workshop we'll play with some fun analytical tools. In the meantime, here are a few words of caution about using Twitter data for science.