SERP Tips

Using the JSON data

The .7z files from SERP include .json files.
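
If you haven't unpacked an archive yet, the standard ''7z'' command-line tool (from the p7zip package on most Linux distributions) will extract it in place; the archive name here is just an example stand-in for your own file:

 7z x 'serp-export.7z'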

They're not "pretty printed" -- instead of being nicely indented, the data is collapsed into one big long string. Fortunately there are tools out there to pretty print JSON files. To get a sense of the structure, paste the .json text into a tool like [https://jsonformatter.org/json-pretty-print the JSON formatter] and hit the pretty print button to see the file laid out in a way that respects its nesting. That makes it much easier to figure out how to navigate the tree down to the data you want.
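
If you would rather stay on the command line, ''jq'' itself will pretty print a file when given the identity filter; the filename below is just one of the example SERP files mentioned further down, so substitute your own:

 jq . 'Sat Mar 28 2020 19-12-13 GMT-0500 (Central Daylight Time).json' | less

Piping through ''less'' just makes the long, indented output scrollable.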

You can use the ''jq'' tool to quickly navigate the .json and dig out just what you care about. For example, if you are a command-line user and want only the URLs from a Google results .json file, minus the many places where Google links to itself, ''jq'' plus the magic of standard Linux command-line tools will do it:

 cat 'Sat Mar 28 2020 19-12-13 GMT-0500 (Central Daylight Time).json' | jq '.linkElements | .[] | .href' | grep -v google.com | tr -d '"' | sort | uniq > newfile.txt

This will:

  1. send the text of the .json file into jq
  2. navigate the tree to just the 'linkElements' list of links
  3. then iterate over each item in the list
  4. select only the 'href' field (i.e. the URL) from each link in the list
  5. filter out all instances of google.com to remove self-linkage
  6. strip out the pesky quote marks
  7. sort the list
  8. de-duplicate the list of URLs
  9. save all the output into newfile.txt
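
As a side note, ''jq'' can take care of some of those steps on its own: its -r flag prints raw strings without the surrounding quote marks, it can read the file directly instead of needing cat, and sort -u combines the sort and de-duplication steps. A rough equivalent of the pipeline above (same example filename, same output file) would look something like:

 jq -r '.linkElements[].href' 'Sat Mar 28 2020 19-12-13 GMT-0500 (Central Daylight Time).json' | grep -v google.com | sort -u > newfile.txt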