Scraping from the Web: An Overview That Does Not Contain Too Much Cussing
A high level overview of how we did scraping at EveryBlock.

Presentation Transcript

  • 1. SCRAPING FROM THE WEB
    An Overview That Does Not Contain Too Much Cussing
    Feihong Hsu, ChiPy, February 14, 2013
  • 2. Organization
    Definition of scraper
    Common types of scrapers
    Components of a scraping system
    Pro tips
  • 3. What I mean when I say scraper
    Any program that retrieves structured data from the web, and then transforms it to conform with a different structure.
    Wait, isn’t that just ETL? (extract, transform, load)
    Well, sort of, but I don’t want to call it that...
  • 4. Notes
    Some people would say that “scraping” only applies to web pages. I would argue that getting data from a CSV or JSON file is qualitatively not all that different. So I lump them all together.
    Why not ETL? Because ETL implies that there are rules and expectations, and these two things don’t exist in the world of open government data. They can change the structure of their dataset without telling you, or even take the dataset down on a whim. A program that pulls down government data is often going to be a bit hacky by necessity, so “scraper” seems like a good term for that.
  • 5. Main types of scrapers
    CSV            PDF
    RSS/Atom       Database dump
    JSON           GIS
    XML            Mixed
    HTML           Crawler
    Web browser
  • 6. CSV
    import csv
    You should usually use csv.DictReader.
    If the column names are all caps, consider making them lowercase.
    Watch out for CSV datasets that don’t have the same number of elements on each row.
  • 7.
    def get_rows(csv_file):
        reader = csv.reader(open(csv_file))
        # Get the column names, lowercased.
        column_names = tuple(k.lower() for k in next(reader))
        for row in reader:
            yield dict(zip(column_names, row))
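Slide 6 recommends csv.DictReader, while the example above uses csv.reader. A minimal sketch of the DictReader version of the same idea (the function name is mine, not from the talk):

    import csv

    def get_rows_dictreader(csv_file):
        with open(csv_file) as f:
            reader = csv.DictReader(f)
            # Lowercase the header before DictReader builds each row dict.
            reader.fieldnames = [name.lower() for name in reader.fieldnames]
            for row in reader:
                yield row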
  • 8. JSON
    import json
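Slide 8 only shows the import; in practice JSON scraping usually amounts to a few lines like the sketch below. The 'results' key is an assumption about the feed's shape, not something from the talk:

    import json

    def get_items(json_file):
        with open(json_file) as f:
            data = json.load(f)
        # Feeds are typically either a bare list of records or a dict
        # with the records under some key ('results' here is a guess).
        records = data if isinstance(data, list) else data.get('results', [])
        for record in records:
            yield record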
  • 9. XML
    import lxml.etree
    Get rid of namespaces in the input document. http://bit.ly/LO5x7H
    A lot of XML datasets have a fairly flat structure. In these cases, convert the elements to dictionaries.
  • 10.
    <root>
      <items>
        <item>
          <id>3930277-ac</id>
          <name>Frodo Samwise</name>
          <age>56</age>
          <occupation>Tolkien scholar</occupation>
          <description>Short, with hairy feet</description>
        </item>
        ...
      </items>
    </root>
  • 11.
    import lxml.etree

    def get_items(xml_string):
        tree = lxml.etree.fromstring(xml_string)
        for el in tree.findall('items/item'):
            children = el.getchildren()
            # Keys are element names.
            keys = (c.tag for c in children)
            # Values are element text contents.
            values = (c.text for c in children)
            yield dict(zip(keys, values))
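Slide 9's advice to get rid of namespaces isn't shown in code; one common approach (sketched here as a guess, not necessarily what the bit.ly link describes) is to rewrite every tag to drop its namespace after parsing:

    import lxml.etree

    def strip_namespaces(root):
        # Turn tags like '{http://example.com/ns}item' into plain 'item'
        # so that findall('items/item') works without a namespace map.
        for el in root.iter():
            if isinstance(el.tag, str) and '}' in el.tag:
                el.tag = el.tag.split('}', 1)[1]
        lxml.etree.cleanup_namespaces(root)
        return root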
  • 12. HTML
    import requests
    import lxml.html
    I generally use XPath, but pyquery seems fine too.
    If the HTML is very funky, use html5lib as the parser.
    Sometimes data can be scraped from a chunk of JavaScript embedded in the page.
  • 13. Notes
    Please don’t use urllib2.
    If you do use html5lib for parsing, remember that you can do so from within lxml itself. http://lxml.de/html5parser.html
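Putting slides 12 and 13 together, a minimal requests + lxml.html + XPath sketch; the URL handling and the XPath expression are invented for illustration:

    import requests
    import lxml.html

    def get_rows(url):
        response = requests.get(url)
        response.raise_for_status()
        # For very broken markup, lxml.html.html5parser.fromstring
        # (slide 13's link) can be swapped in here.
        doc = lxml.html.fromstring(response.text)
        for tr in doc.xpath('//table[@id="results"]//tr[td]'):
            yield [td.text_content().strip() for td in tr.xpath('td')]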
  • 14. Web browser
    If you need a real browser to scrape the data, it’s often not worth it.
    But there are tools out there.
    I wrote PunkyBrowster, but I can’t really recommend it over ghost.py. It seems to have a better API, supports PySide and Qt, and has a more permissive license (MIT).
  • 15. PDF
    Not as hard as it looks.
    There are no Python libraries that handle all kinds of PDF documents in the wild.
    Use the pdftohtml command to convert the PDF to XML.
    When debugging, use pdftohtml to generate HTML that you can inspect in the browser.
    If the text in the PDF is in tabular format, you can group text cells by proximity.
  • 16. Notes
    The “group by proximity” strategy works like this:
    1. Find a text cell that has a very distinct pattern (probably a date cell). This is your “anchor”.
    2. Find all cells that have the same row position as the anchor (possibly off by a few pixels).
    3. Figure out which grouped cells belong to which fields based on column position.
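A rough sketch of the group-by-proximity steps above. It assumes the pdftohtml -xml output has already been parsed into (top, left, text) tuples and that anchor cells have been found by pattern matching; the pixel tolerance is a placeholder value:

    def group_by_proximity(cells, anchors, tolerance=3):
        # cells and anchors are (top, left, text) tuples from the pdftohtml XML.
        for anchor_top, _, _ in anchors:
            # Step 2: everything on (roughly) the same row as the anchor.
            row = [c for c in cells if abs(c[0] - anchor_top) <= tolerance]
            # Step 3: order by column position so cells map onto fields.
            row.sort(key=lambda c: c[1])
            yield [text for _, _, text in row]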
  • 17. RSS/Atom
    import feedparser
    Sometimes feedparser can’t handle custom fields, and you’ll have to fall back to lxml.etree.
    Unfortunately, plenty of RSS feeds are not compliant XML. Either do some custom munging or try html5lib.
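A minimal feedparser sketch for slide 17; title, link, and published are feedparser's normalized keys, and anything custom needs the lxml.etree fallback mentioned above:

    import feedparser

    def get_entries(feed_url):
        feed = feedparser.parse(feed_url)
        # bozo is set when the feed wasn't well-formed XML.
        if feed.bozo:
            print('Feed not well-formed: %s' % feed.bozo_exception)
        for entry in feed.entries:
            yield {
                'title': entry.get('title'),
                'link': entry.get('link'),
                'published': entry.get('published'),
            }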
  • 18. Database dump
    If it’s a Microsoft Access file, use mdbtools to dump the data.
    Sometimes it’s a ZIP file containing CSV files, each of which corresponds to a separate table dump.
    Just load it all into a SQLite database and run queries on it.
  • 19. Notes
    We wrote code that simulated joins using lists of dictionaries. This was painful to write and not so much fun to read. Don’t do this.
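A sketch of slide 18's "load it all into SQLite" suggestion, assuming a ZIP of CSV files where each file is one table; table and column names come straight from the dump, and the typeless columns are fine for ad hoc queries:

    import csv
    import io
    import sqlite3
    import zipfile

    def load_zip_into_sqlite(zip_path, db_path=':memory:'):
        conn = sqlite3.connect(db_path)
        with zipfile.ZipFile(zip_path) as zf:
            for name in zf.namelist():
                if not name.lower().endswith('.csv'):
                    continue
                table = name.rsplit('/', 1)[-1][:-4]
                with zf.open(name) as f:
                    reader = csv.reader(io.TextIOWrapper(f, encoding='utf-8'))
                    columns = ['"%s"' % c.lower() for c in next(reader)]
                    conn.execute('CREATE TABLE "%s" (%s)' % (table, ', '.join(columns)))
                    # Skip rows with the wrong number of elements (see slide 6's warning).
                    conn.executemany(
                        'INSERT INTO "%s" VALUES (%s)' % (table, ', '.join('?' * len(columns))),
                        (row for row in reader if len(row) == len(columns)))
        conn.commit()
        return conn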
  • 20. GIS
    I haven’t worked much with KML or SHP files.
    If an organization provides GIS files for download, they usually offer other options as well. Look for those instead.
  • 21. Mixed
    This is very common.
    For example: an organization offers a CSV download, but you have to scrape their web page to find the link for it.
  • 22. Components of a scraping system
    Downloader
    Cacher
    Raw item retriever
    Existing item detector
    Item transformer
    Status reporter
  • 23. Notes
    Caching is essential when scraping a dataset that involves a large number of HTML pages. Test runs can take hours if you’re making requests over the network. A good caching system pretty-prints the files it downloads so you can more easily inspect them.
    Reporting is essential if you’re managing a group of scrapers. Since you KNOW that at least one of your scrapers will be broken at any time, you might as well know which ones are broken. A good reporting mechanism shows when your scrapers break, as well as when the dataset itself has issues (determined heuristically).
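The Downloader and Cacher pieces can be as small as the sketch below, which keys the cache on a hash of the URL; the directory name and function are invented, and pretty-printing before writing (as the note suggests) is left as a comment:

    import hashlib
    import os
    import requests

    CACHE_DIR = 'cache'

    def fetch(url, force=False):
        # Cache each response body on disk so repeated test runs skip the network.
        if not os.path.isdir(CACHE_DIR):
            os.makedirs(CACHE_DIR)
        path = os.path.join(CACHE_DIR, hashlib.sha1(url.encode('utf-8')).hexdigest())
        if not force and os.path.exists(path):
            with open(path, 'rb') as f:
                return f.read()
        body = requests.get(url).content
        # Pretty-print HTML/XML here before writing if you want readable cache files.
        with open(path, 'wb') as f:
            f.write(body)
        return body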
  • 24. Steps to writing a scraper
    Find the data source
    Find the metadata
    Analysis (verify the primary key)
    Develop
    Test
    Fix (repeat ∞ times)
  • 25. Notes
    The Analysis step should also include noting which fields should be lookup fields (see design pattern slide).
    The Testing step is always done on real data and has three phases: dry run (nothing added or updated), dry run with lookups (only lookups are added), and production run. I run all three phases on my local instance before deploying to production.
  • 26. A very useful tool for HTML scraping
    Firefinder (http://bit.ly/kr0UOY)
    Extension for Firebug
    Allows you to test CSS and XPath expressions on any page, and visually inspect the results.
  • 27. Look, it’s Firefinder!
  • 28. Storing scraped data
    Don’t create tables before you understand how you want to use the data.
    Consider using ZODB (or another nonrelational DB).
    Adrian Holovaty’s talk on how EveryBlock avoided creating new tables for each dataset: http://bit.ly/Yl6VAZ (relevant part starts at 7:10)
  • 29. Design patterns
    If a field contains a finite number of possible values, use a lookup table instead of storing each value.
    Make a scraper superclass that incorporates common scraper logic.
  • 30. Notes
    The scraper superclass will probably have convenience methods for converting dates/times, cleaning HTML, looking for existing items, etc. It should also incorporate the caching and reporting logic.
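One possible shape for that superclass, with hypothetical method names (the talk doesn't show EveryBlock's actual class, and the caching and reporting pieces are omitted here):

    import datetime

    class BaseScraper(object):
        """Subclasses implement get_raw_items() and transform(); the base class
        handles the common update loop plus shared convenience helpers."""

        def update(self):
            for raw in self.get_raw_items():
                item = self.transform(raw)
                if not self.exists(item):
                    self.save(item)

        def parse_date(self, text, fmt='%m/%d/%Y'):
            # Convenience helper for whatever date format a given source uses.
            return datetime.datetime.strptime(text.strip(), fmt).date()

        def get_raw_items(self):
            raise NotImplementedError

        def transform(self, raw):
            raise NotImplementedError

        def exists(self, item):
            raise NotImplementedError

        def save(self, item):
            raise NotImplementedError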
  • 31. Working with government data
    Some data sources are only available at certain times of day.
    Be careful about rate limiting and IP blocking.
    Data scraped from a web page shouldn’t be used for analyzing trends.
    When you’re stuck, give them a phone call.
  • 32. Notes
    If you do manage to find an actual person to talk to you, keep a record of their contact information and do NOT lose it! They are your first line of defense when a dataset you rely on goes down.
  • 33. Pro tips
    When you don’t know what encoding the content is in, use charade, not chardet.
    Remember to clean any HTML you intend to display.
    If the dataset doesn’t allow filtering by date, it’s a lost cause (unless you just care about historical data).
    When your scraper fails, do NOT fix it. If a user complains, consider fixing it.
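For the encoding tip, charade is a chardet fork, and this sketch assumes it exposes the same detect() call; the function and its input are made up for illustration:

    import charade  # pip install charade; chardet itself has the same interface

    def decode_body(raw_bytes):
        # Guess the encoding when the server doesn't declare one (or lies).
        guess = charade.detect(raw_bytes)
        return raw_bytes.decode(guess['encoding'] or 'utf-8', errors='replace')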
  • 34. I am done
    Questions?