Learn to scrape data in Google Docs using ImportFeed, ImportHTML, and ImportXML. Annie Cushing, Senior SEO at SEER Interactive (@AnnieCushing on Twitter), isn't a developer, so she breaks this process down into easy-to-understand steps - and provides a link to a Google Doc where you can follow along.
11. ImportHTML
TWO OPTIONS
• Table
• List
@AnnieCushing 11
12. =ImportHtml(URL, query, index)
URL: "www.domain.com/whatever" OR a cell reference
query: "table" or "list" OR a cell reference
index: if there are multiple lists or tables, which one to pull (3 = 3rd table)
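A hypothetical example of a complete formula (the Wikipedia URL and index here are my assumptions, not from the deck; as the speaker notes later, you often have to adjust the index until the right table shows up):

```
=ImportHtml("http://en.wikipedia.org/wiki/List_of_countries_by_population", "table", 1)
```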
36. Can even be in the middle of the XPath
//div[@class='main']//blockquote[2]
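For instance (hypothetical markup assumed), the predicate can sit on an intermediate step and the path can keep drilling down after it:

```
//div[@class='main']//blockquote[2]/a/@href
```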
37. Other ways to tell “which one” in XPath
STARTS-WITH
38. Other ways to tell “which one” in XPath
CONTAINS
39. Other ways to tell “which one” in XPath
40. Other ways to tell “which one” in XPath
INDEX VALUE
41. Other ways to tell “which one” in XPath
LAST()
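The slides above name the predicates without showing syntax; these are hedged, hypothetical examples of each (the element and attribute names are assumptions):

```
STARTS-WITH: //a[starts-with(@href, 'http')]
CONTAINS:    //a[contains(@class, 'external')]
INDEX VALUE: //table//tr[3]
LAST():      //table//tr[last()]
```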
42. Become a scraping FOOL
• Pull queries from Topsy
• Pull product feeds
• Pull specific elements from a sitemap
• Scrape Twitter followers
• Pull GA metrics
• Scrape HTML tables (e.g., list of countries from Wikipedia)
• Scrape lists (e.g., scrape lists of consumer review sites to create a custom search engine, top sports blogs, etc.)
• Scrape rankings
• Scrape GA codes / Adsense IDs / IPs / IP Country Codes
• Find de-indexed sites
• Scrape directories
• Scrape Yahoo / Google for relevant pages from directory listings
• Scrape titles / h1s / meta descriptions
• Scrape page URLs to find if someone is linking to you
• Scrape Google to find snippets of text on a list of domains (for link networks)
• Scrape Quora
@AnnieCushing @NicoMiceli
43. SEE IMPORT FUNCTIONS IN
THEIR NATURAL HABITAT!
@AnnieCushing http://bit.ly/annies-gdoc
I’m a data wrangler. I collect and drill through data like it’s my job. Because it kind of is. But I found that since coming to SEER, my need for data collection at times surpassed what I could get from tools. So I turned to Gdocs and its ability to scrape.
In order of complexity
I always prefer to chop my Import function arguments into cells. Easier to troubleshoot and modify. And you don’t have to worry about parentheses b/c you don’t need them. When you get your web feet you can start getting tricky w/ the optional arguments.
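Chopped into cells, that might look something like this (the layout is an assumption; note that the cell references don’t need quotation marks):

```
A1: http://www.domain.com/whatever
A2: table
A3: 3
A4: =ImportHtml(A1, A2, A3)
```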
To learn more, check out how to scrape feeds all over the place in Wil’s preso. That wasn’t the original graphic, but you’ll see why it’s fitting by the time you get to the end. Point out URL.
Every once in a while the index is 0-based. Honestly, if there are multiple tables (like Wikipedia pages), I just guess and change the number until it pulls the data I need.
Basically, anything that’s in a table or bulleted list you can scrape. I recently pulled together a CSE of review sites, and I used ImportHTML quite a bit – to scrape both lists and tables.
We’re entering the deep end of the scraping pool.
Okay, so ImportXML uses XPath. And here’s everything you need to know about XPath …
Yeah, I have no idea what that really means, and I suffer from a deplorable lack of curiosity.
I’ll be showing one example of the text node that I actually used when scraping Craigslist once. (Don’t judge.)
If it’s inside brackets, it’s an element.
If it has an = sign inside brackets, that’s an attribute.
@ … attribute. Square brackets: which one? Ryan O and F.
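Putting those pieces together with the path from slide 36: div and blockquote are elements, @class is an attribute, and the square brackets say which one:

```
//div[@class='main']//blockquote[2]
```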
We have this page of content from Barry Schwartz’s blog. Let’s say we want to scrape all of the anchors (the text part of a link). We would write something like this in Google Docs …
This basically means scrape all the anchors!
Now if you want to also scrape the URLs, you add /@href. And why do you need the @ before href? …
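As formulas, the pair might look like this (the URL is a placeholder, not from the deck):

```
=ImportXML("http://www.domain.com/whatever", "//a")
=ImportXML("http://www.domain.com/whatever", "//a/@href")
```

The first pulls the anchor text; adding /@href in the second pulls the URLs instead.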
Don’t believe me? Check it …
Okay, it’s rare that your XPath is going to be that simple. I stole this from Distilled’s Import XML Guide for Google Docs. Point out the link.
When I first started scraping, I’d look at the code and try to figure out the hierarchy judging by the indentation. But sometimes your child nodes can look like this …
And then it gets tricky! Eventually I figured out that I could just use the bar at the bottom b/c it shows the actual hierarchy.
So you could be precise and write out the XPath from the root on down the food chain. This says, “Start at the HTML element, then drill down …
But you’ll look like a dork.
So instead what the cool kids do is just use the double slash and grab the div you want. You just need as much detail as it takes to get that list.
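A sketch of the difference (element names assumed):

```
/html/body/div[@class='main']/ul/li/a   full path from the root - works, but brittle
//div[@class='main']//a                 double slash - start wherever that div appears
```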
You can even use it in the middle of your XPath.
The more complex your scraping requirement is, the more complex your XPath becomes. So one of the other ways to tell “which one” is with the starts-with predicate.
Here I wanted to see if I could scrape all the iPad links, then use that scraped URL as a reference point to scrape the email address on that page. You’ve heard of Will It Blend? I’ve been playing my own game of Will It Scrape?
This is where I used the text node b/c I only wanted links that had iPad in the anchor.
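The iPad filter she describes could look something like this (the exact path is an assumption on my part):

```
//a[contains(text(), 'iPad')]/@href
```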
This is a compiled list from Nico, Ethan, Chris, and Wil. Give Nico a shout out!