Community Data Science Course (Spring 2019)/Day 5 Notes

From CommunityData
Revision as of 22:40, 30 April 2019 by Guyrt (talk | contribs)

Downloading data from the internet

API (Application Programmer Interface): a structured way for two programs to communicate. Think of it like a contract or a secret handshake.


What to look for when looking at an API:

  1. Where is the documentation?
  2. What kinds of information can I request?
  3. How do I request information from this API?
  4. Are there any rate limits or restrictions on use? For instance, Twitter restricts bulk downloading of tweets, and Zillow forbids storing bulk results. (Why?)
  5. Is there a python package that will help me? For instance, Twitter has a great python package called tweepy that will simplify access.

Example: We're going to spend today looking at OpenStreetMap's API, Nominatim.

Structured data and JSON

  • HTML is the markup language your browser uses to display information.

Do this: Go to https://nominatim.openstreetmap.org/search?q=[bakery]+seattle+wa (you'll want to copy the whole thing) and view source to see the raw HTML. See if you can find the structured data embedded in there somewhere. It's there, but it's often difficult to teach computers to find it.

Sometimes, the only way to get data is to extract it from potentially messy HTML. This is called scraping, and python has a library called BeautifulSoup to help with that.
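As a taste of what scraping looks like, here's a minimal sketch using BeautifulSoup (installed with `pip install beautifulsoup4`). The HTML below is a made-up stand-in, not real search-result markup; the class names are invented for illustration:

```python
from bs4 import BeautifulSoup

# A tiny, made-up HTML page standing in for a real (messier) one.
html = """
<html><body>
  <div class="result"><span class="name">Bakery Nouveau</span></div>
  <div class="result"><span class="name">Macrina Bakery</span></div>
</body></html>
"""

# Parse the HTML, then pull out the text inside each <span class="name">.
soup = BeautifulSoup(html, "html.parser")
names = [span.get_text() for span in soup.find_all("span", class_="name")]
print(names)
```

Real pages are rarely this tidy, which is why APIs that hand you structured data directly are so much nicer to work with.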

Often, data providers make it easier for computers to extract information from their API by providing it in a structured format.

Do this: Go to https://nominatim.openstreetmap.org/search?q=[bakery]+seattle+wa&format=json to see the same query in JSON format.

JSON: JavaScript Object Notation. JSON data looks like python lists and dictionaries, and we'll see that it's easy to turn it into a python variable that is a list or dictionary. Here's a sample:

   [
     {
       "licence":"Data © OpenStreetMap contributors, ODbL 1.0. http:\/\/\/copyright",
       "display_name":"The Confectional, 618, Broadway East, Eastlake, Capitol Hill, Seattle, King County, Washington, 98102, United States of America",
       ...
     }
   ]
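Here's a sketch of turning a JSON string like the one above into a python variable, using python's built-in json module (the values below are abbreviated, not real API output):

```python
import json

# An abbreviated stand-in for the Nominatim output above.
text = '[{"licence": "Data (c) OpenStreetMap contributors, ODbL 1.0.", "display_name": "The Confectional, Seattle"}]'

# json.loads converts a JSON string into python objects:
# a JSON array becomes a list, a JSON object becomes a dictionary.
data = json.loads(text)

print(type(data))               # the outer array becomes a list
print(data[0]["display_name"])  # each object inside becomes a dictionary
```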


Do this: Copy the JSON output from the query above into a JSON parser. What is the structure of this document?

The python requests library

We can use the following program to download JSON data from Nominatim:

   import requests  # this imports a new package called requests
   response = requests.get('https://nominatim.openstreetmap.org/search', {'q': '[bakery] seattle wa', 'format': 'json'})
   print(response.status_code)  # 200 means it worked.
   data = response.json()

Here, we make use of the requests library, which has excellent documentation.

Let's break down each line:

  • response = requests.get(...) This is the most important line! Here, we "get" information from the web server. Note that we pass the URL up to the "?" character as the first argument. Compare the dictionary second argument to the query we did above in our browser. How do they differ? How are they the same?

  • print(response.status_code) the response is a python object that contains the actual contents of the web page as well as some status information. Here, we're getting the status_code, which tells us whether the call succeeded. 200 is "good", and you will sometimes see 404 for "not found" or 500 for "server error".
  • print(response.content) this wasn't above, but try it anyway. response.content contains the raw contents of the webpage as a string.
  • data = response.json() response.json tries to convert the string content to a python list or dictionary if the content was stored in JSON format. (What happens if the content wasn't JSON?)
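To see the shape of the parsed data, and to answer that last question, here's an offline sketch. The entries below are made-up stand-ins for what the bakery query returns, and the field names (display_name, lat, lon) follow the sample output shown earlier:

```python
import json

# Stand-in for the list that response.json() would return for the bakery query.
data = [
    {"display_name": "The Confectional, Seattle", "lat": "47.6255", "lon": "-122.3213"},
    {"display_name": "Macrina Bakery, Seattle", "lat": "47.6142", "lon": "-122.3456"},
]

# Each element is a dictionary describing one match; loop over them.
for place in data:
    print(place["display_name"], place["lat"], place["lon"])

# If the content wasn't JSON, parsing raises an error instead of returning data:
try:
    json.loads("<html>not json</html>")
except json.JSONDecodeError as err:
    print("not JSON:", err)
```

(The lat/lon values above are illustrative, not real coordinates.)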