SERP Tips

Using the JSON data

The .7z files from SERP include .json files.

They're not "pretty printed": instead of being nicely formatted, each file is collapsed into one big long string. Fortunately, there are tools out there to pretty print JSON. To get a sense of the structure, you can paste the .json text into a tool like JSON Formatter and hit the pretty print button to see the file laid out according to the structure its brackets and braces define. This will let you figure out how to navigate the tree down to the data you want.
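
If you'd rather stay at the command line, jq itself can pretty print: the identity filter '.' re-emits the whole document with indentation, and piping into less makes it scrollable. For example, using the same file as in the pipeline below:

 jq '.' 'Sat Mar 28 2020 19-12-13 GMT-0500 (Central Daylight Time).json' | less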

You can use the jq tool to quickly navigate the .json and dig out just what you care about. For example, suppose you are a command-line user and want only the URLs from a Google results JSON, minus the many places where Google links to itself. With jq plus the magic of standard Linux command-line tools, you can:

 cat 'Sat Mar 28 2020 19-12-13 GMT-0500 (Central Daylight Time).json' | jq '.linkElements | .[] | .href' | grep -v google.com | tr -d '"' | sort | uniq > newfile.txt

This will:

  1. send the text of the .json file into jq
  2. navigate the tree to just the 'linkElements' list of links
  3. then iterate over each item in the list
  4. select only the 'href' field (i.e. the URL) from each link in the list
  5. filter out all instances of google.com to remove self-linkage
  6. strip the pesky quote marks that jq prints around each string
  7. sort the list
  8. de-duplicate the list of URLs
  9. save all the output into newfile.txt
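
As a refinement, jq's -r ("raw output") flag prints strings without the surrounding quotes, which makes the tr step unnecessary. Here is a rough variant of the same pipeline, a sketch rather than a recipe, that tallies how many non-Google results each domain contributes (the awk field split assumes ordinary http(s)://host/... URLs):

 jq -r '.linkElements | .[] | .href' 'Sat Mar 28 2020 19-12-13 GMT-0500 (Central Daylight Time).json' | grep -v google.com | awk -F/ '{print $3}' | sort | uniq -c | sort -rn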