Community Data Science Course (Spring 2015)/Day 6 Project

From CommunityData


In this project, we will explore a few ways to gather data using the Twitter API. Once we've done that, we will extend this code to create our own datasets of tweets that we might be able to use to ask and answer questions in the final session.

Goals

  • Get set up to build datasets with the Twitter API
  • Have fun collecting different types of tweets using a variety of ways to search
  • Practice reading and extending other people's code
  • Create a few collections of Tweets you can do research with in the final session

Download and test the Twitter project

  1. Right click the following file, click "Save Target as..." or "Save link as...", and save it to your Desktop directory: http://mako.cc/teaching/2015/cdsw-spring/twitter-data-examples.zip
  2. Unpack the zip file as we have in previous projects.

Enter your API information

  • Start your text editor and navigate to the directory that contains the Twitter API code.
  • Open up the file twitter_authentication.py in your text editor.
  • You will see four lines that include four strings that are being assigned to variables in ALL CAPITALS. At the moment, all of the strings say CHANGE_ME.
  • Go find the four keys, tokens, and secrets you created and wrote down when you followed the Twitter authentication setup. Change every string that says CHANGE_ME into a string that includes the key, token, or secret you created. Remember that since these are strings, we need to include quotation marks around them. Also make sure that you match up the right keys and tokens with the right variables.

Once you have done this, your example programs are set up to use the Twitter API!
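After editing, twitter_authentication.py should look something like this. The variable names below are illustrative and may differ slightly from the ones in the file you downloaded, and the values shown are made-up placeholders, not real credentials:

```python
# twitter_authentication.py -- replace each CHANGE_ME with your own
# key, token, or secret. The values below are made-up placeholders;
# your real ones will be long alphanumeric strings from apps.twitter.com.
CONSUMER_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxx"
CONSUMER_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
ACCESS_TOKEN = "xxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
ACCESS_TOKEN_SECRET = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

Note that each value is a string, so it must be wrapped in quotation marks.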

Test the Twitter API code

Start up your terminal and navigate to the directory that contains your Twitter API code.

One of the files is "twitter1.py", which has a ".py" extension indicating that it is a Python script. Type:

python twitter1.py

at the command prompt to execute the twitter1.py Python script. Wait a moment while your computer connects to Twitter. You should see a series of tweets run by your screen. If you don't, let a member of the teaching team know.

Success!

You are done downloading the Twitter API project!


Potential exercises

Who are my followers?

1) Use sample 2 to get your followers.

2) For each of your followers, get *their* followers (investigate time.sleep to throttle your computation)

3) Identify the follower you have that also follows the most of your followers.

4) How many handles follow you but none of your followers?

5) Repeat this for people you follow, rather than people who follow you.
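Once you've fetched the account lists from the API (remember to pause between requests with time.sleep so you don't hit rate limits), step 3 reduces to counting set overlaps. Here is a minimal sketch with made-up sample data; the variable names and screen names are illustrative, not part of the project code:

```python
# my_followers: the set of accounts that follow you
my_followers = {"alice", "bob", "carol"}

# friends_of: for each of your followers, the set of accounts *they*
# follow. In a real run you would fetch each of these from the API,
# calling time.sleep() between requests to throttle your computation.
# These sample values are made up for illustration.
friends_of = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"carol"},
    "carol": {"eve"},
}

# Step 3: which of your followers follows the most of your other followers?
def overlap(name):
    """Count how many of your followers this account follows."""
    return len(friends_of[name] & my_followers)

best = max(my_followers, key=overlap)
print(best, overlap(best))   # alice 2
```

The same set-intersection idea works for step 5 if you swap in the accounts you follow instead of your followers.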


Topics and Trends

1) Use sample 3 to produce a list of 1000 tweets about a topic.

2) Look at those tweets. How does Twitter interpret a two-word query like "data science"?

3) Eliminate retweets [hint: look at the tweet object!]

4) For each original tweet, list the number of times you see it retweeted.

5) Get a list of the URLs that are associated with your topic.
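For steps 3 and 4, the key observation is that each tweet the API returns is a dictionary, and retweets carry a "retweeted_status" key holding the original tweet. A sketch using made-up sample tweets (the real tweet objects have many more fields):

```python
from collections import Counter

# Sample tweet dictionaries, made up for illustration. Real tweets from
# the API have the same structure but many more fields.
tweets = [
    {"id": 1, "text": "data science is fun"},
    {"id": 2, "text": "RT: data science is fun",
     "retweeted_status": {"id": 1}},
    {"id": 3, "text": "learning the twitter api"},
]

# Step 3: keep only original tweets -- those without a "retweeted_status" key
originals = [t for t in tweets if "retweeted_status" not in t]
print(len(originals))   # 2

# Step 4: count how many times each original tweet was retweeted
retweet_counts = Counter(t["retweeted_status"]["id"]
                         for t in tweets if "retweeted_status" in t)
print(retweet_counts[1])   # 1
```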

Geolocation

1) Alter the streaming algorithm to include a "locations" filter. You need to use the order sw_lng, sw_lat, ne_lng, ne_lat for the four coordinates.

2) What are people tweeting about in Times Square today?

2.5) Bonus points: set up a bounding box around TS and around NYC as a whole. Can you find words that are more likely to appear in TS?

3) UW is playing Arizona in football today. Set up a bounding box around the Arizona stadium and around UW. Can you identify tweets about football? Who tweets more about the game?

  • You can use d = api.search(geocode='37.781157,-122.398720,1mi') to do a static geo search.
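For the bonus comparisons, it helps to be able to check whether a tweet's coordinates fall inside a bounding box given in the filter's sw_lng, sw_lat, ne_lng, ne_lat order. A sketch, where the coordinates below only roughly bracket Times Square and are illustrative, not exact:

```python
# A bounding box in the order the streaming "locations" filter expects:
# sw_lng, sw_lat, ne_lng, ne_lat. These numbers are rough, illustrative
# values around Times Square, not exact boundaries.
times_square = [-73.9894, 40.7553, -73.9838, 40.7601]

def in_box(lng, lat, box):
    """Return True if the point (lng, lat) falls inside the sw/ne box."""
    sw_lng, sw_lat, ne_lng, ne_lat = box
    return sw_lng <= lng <= ne_lng and sw_lat <= lat <= ne_lat

# Tweet coordinates are stored as [longitude, latitude]
print(in_box(-73.9857, 40.7580, times_square))    # True
print(in_box(-122.3987, 37.7811, times_square))   # False
```

Defining one box for each location (Times Square vs. NYC, or the Arizona stadium vs. UW) lets you sort incoming tweets into the two groups before comparing their words.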

Congratulations!!!!

You now know how to capture data from Twitter that you can use in your research!!! Next workshop we'll play with some fun analytical tools. In the meantime, here are A_Few_Words_of_Caution_About_Using_Twitter_Data_for_Science