Intro to Programming and Data Science (Spring 2020)/Day 8 Coding Challenges

In this project, we will explore a few ways to gather data using the Twitter API. Once we've done that, we will extend the example code to create our own dataset of tweets.

Goals

  • Get set up to build datasets with the Twitter API
  • Have fun collecting different types of tweets using a variety of ways to search
  • Practice reading and extending other people's code
  • Create a few collections of Tweets to use in your project

Prerequisite

To get this code to work, you must have registered with Twitter as a developer by following the Twitter authentication setup instructions.


Download the Twitter API project

We will be building on material created for the Community Data Science Workshops.

Enter your API information

  • Start Jupyter Notebook and navigate to the folder you just created on your desktop.
  • Double-click to open the file "twitter_authentication.py". This is a Python file, meaning it contains Python code, but it is not a notebook.
  • You will see four lines that define four variables in ALL CAPITALS. At the moment, all of the strings say CHANGE_ME.
  • Go find the four keys, tokens, and secrets you created when you followed the Twitter authentication setup. Change every string that says CHANGE_ME into a string containing the corresponding key, token, or secret you downloaded. Remember that since these are strings, we need to include quotation marks around them. Also make sure that you match each key and token with the right variable (see the sketch below for what the finished file should look like).
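
When you are finished, twitter_authentication.py should look something like this. The values below are made-up placeholders, not real credentials; paste in your own:

# twitter_authentication.py
# These values are fake placeholders -- replace each one with your own.
CONSUMER_KEY = "x1F3kQ9EXAMPLEoP"
CONSUMER_SECRET = "aB2cD4eF6EXAMPLEgH8iJ0kL1mN3oP5q"
ACCESS_TOKEN = "1234567890-EXAMPLEqR7sT9uV1wX3yZ5"
ACCESS_TOKEN_SECRET = "EXAMPLEzY8xW6vU4tS2rQ0pO9nM7lK5j"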

Once you have done this, your example programs are set up to use the Twitter API!

Test the Twitter API code

Open the notebook "ex0_print_a_tweet.ipynb" in Jupyter. Execute all of the cells. You should see the text of 100 tweets in the second-to-last cell. If you see an error, you probably have a problem with the API information you entered in the previous step.


Making your own notebooks

We are using Tweepy, a Python library that simplifies accessing the Twitter API.

You will do the exercises below in your own notebook, which you will create. In every notebook you make, put the following python code in the first cell:

import tweepy
from twitter_authentication import CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET

# Authenticate with Twitter using the keys and secrets you saved earlier
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

# All of your API calls will go through this object
api = tweepy.API(auth)


This authenticates you with Twitter and makes the API available through the variable api.
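
As a quick sanity check, you can ask Twitter for your home timeline and print the results. This is a minimal sketch; any short API call will do:

# Fetch recent tweets from your home timeline.
# If authentication failed, this call will raise an error instead.
for tweet in api.home_timeline():
    print(tweet.text)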


Exercises

Read through the example notebooks and try to figure out what they are doing. It may also be helpful to look at the Tweepy documentation.

Topics and Trends

  1. Alter code example 2 (ex2_search.ipynb) to produce a list of 1000 tweets about a topic (see the sketch after this list).
  2. How does Twitter interpret a two-word query like "data science"?
  3. Eliminate retweets [hint: look at the tweet object! https://dev.twitter.com/overview/api/tweets]
  4. For each original tweet, list the number of times you see it retweeted.
  5. Get a list of the URLs that are embedded in tweets about your topic.
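
For exercises 1 and 3, here is a minimal sketch, assuming the api object created above. It uses Tweepy's Cursor helper to page through search results; the query string is just an example, and the -filter:retweets operator asks Twitter to leave out retweets. (In Tweepy 4 and later, api.search has been renamed api.search_tweets.)

import tweepy

# Collect up to 1000 tweets matching the query, excluding retweets.
topic_tweets = []
for tweet in tweepy.Cursor(api.search, q='"data science" -filter:retweets').items(1000):
    topic_tweets.append(tweet)

print(len(topic_tweets))

# For exercise 5: URLs live in each tweet's entities dictionary.
urls = [u['expanded_url'] for t in topic_tweets for u in t.entities['urls']]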

Geolocation from Search API

This section will require you to investigate the filter function in example 2 in more detail.

  1. Get the last 50 tweets from West Lafayette (one way to start is sketched after this list).
  2. Get the last 50 tweets from Times Square.
  3. Using timestamps, can you estimate whether people tweet more often in West Lafayette or Times Square?
  4. A Premier League soccer game happened today between Liverpool and Chelsea. Using two geo searches, see if you can tell which city hosted the game. Note: if you do this some other day, you should pick a new sporting event.
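
A minimal sketch for the first two exercises, assuming the api object from above. The coordinates are approximate values for downtown West Lafayette that you would look up yourself; Twitter's geocode parameter takes latitude, longitude, then a radius:

# Recent tweets within 5 miles of (approximately) West Lafayette.
wl_tweets = api.search(geocode='40.4259,-86.9081,5mi', count=50)
for tweet in wl_tweets:
    print(tweet.created_at, tweet.text)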

Geolocation in the streaming API

  1. Alter the streaming algorithm to include a "locations" filter (see the sketch after this list). You need to use the order sw_lng, sw_lat, ne_lng, ne_lat for the four coordinates. (Recall that the stop button will stop an active process like the stream.)
  2. What are people tweeting about in Times Square today? (Bonus points: set up a bounding box around Times Square and another around NYC as a whole.)
  3. Can you find words that are more likely to appear in Times Square (hint: you'll need two bounding boxes)?
  4. Purdue is playing basketball against Iowa tonight. Set up a bounding box around West Lafayette and Iowa City, Iowa. Can you identify tweets about basketball? Who tweets more about the game? Can you tell which team is the home team?
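
A minimal sketch of a location-filtered stream, assuming Tweepy 3.x (the streaming classes were reorganized in Tweepy 4). The bounding box is a rough guess at West Lafayette; substitute your own coordinates:

import tweepy

class PrintListener(tweepy.StreamListener):
    # Called once for every tweet that matches the filter
    def on_status(self, status):
        print(status.text)

stream = tweepy.Stream(auth=api.auth, listener=PrintListener())

# locations takes sw_lng, sw_lat, ne_lng, ne_lat
stream.filter(locations=[-87.00, 40.37, -86.85, 40.50])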

Geolocation hint: You can use d = api.search(geocode='[lat],[lng],5mi') to get tweets from a 5-mile radius around a point. Note that the search API takes latitude first, while the streaming locations filter takes longitude first. Use Google or Bing Maps to find the coordinates or bounding box for a place of interest (e.g., Fenway Park).


Who are my followers?

  1. Alter code example 1 (ex1_get_user_info.ipynb) to get your followers.
  2. For each of your followers, get *their* followers (investigate time.sleep to throttle your requests; see the sketch after this list).
  3. Identify which of your followers follows the largest number of your other followers.
  4. How many handles follow you but follow none of your followers?
  5. Repeat this analysis for the people you follow, rather than those who follow you.
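
A minimal sketch for the first two exercises, assuming the api object from above and Tweepy 3.x. 'YOUR_HANDLE' is a placeholder for your own screen name. Twitter only allows 15 follower-ID requests per 15-minute window, which is why the loop sleeps between calls:

import time
import tweepy

# Numeric IDs of your followers (up to 5000 in one call).
my_followers = api.followers_ids(screen_name='YOUR_HANDLE')

# For each follower, fetch *their* follower IDs, pausing to stay
# under the rate limit (15 requests per 15 minutes).
followers_of_followers = {}
for user_id in my_followers:
    try:
        followers_of_followers[user_id] = api.followers_ids(user_id=user_id)
    except tweepy.TweepError:
        followers_of_followers[user_id] = []  # protected or deleted account
    time.sleep(61)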

Congratulations!

You now know how to capture data from Twitter that you can use in your research! Next workshop we'll play with some fun analytical tools. In the meantime, here are a few words of caution about using Twitter data for science.