Scraping from the Web: An Overview That Does Not Contain Too Much Cussing

Feihong Hsu, Web Developer at EveryBlock
Feb. 15, 2013

  1. SCRAPING FROM THE WEB: An Overview That Does Not Contain Too Much Cussing. Feihong Hsu, ChiPy, February 14, 2013

  2. Organization
     - Definition of scraper
     - Common types of scrapers
     - Components of a scraping system
     - Pro tips

  3. What I mean when I say scraper Any program that retrieves structured data from the web, and then transforms it to conform with a different structure. Wait, isn’t that just ETL? (extract, transform, load) Well, sort of, but I don’t want to call it that...

  4. Notes Some people would say that “scraping” only applies to web pages. I would argue that getting data from a CSV or JSON file is qualitatively not all that different. So I lump them all together. Why not ETL? Because ETL implies that there are rules and expectations, and these two things don’t exist in the world of open government data. They can change the structure of their dataset without telling you, or even take the dataset down on a whim. A program that pulls down government data is often going to be a bit hacky by necessity, so “scraper” seems like a good term for that.

  5. Main types of scrapers
     - CSV
     - PDF
     - RSS/Atom
     - Database dump
     - JSON
     - GIS
     - XML
     - Mixed
     - HTML crawler
     - Web browser

  6. CSV
     import csv
     You should usually use csv.DictReader. If the column names are all caps, consider making them lowercase. Watch out for CSV datasets that don’t have the same number of elements on each row.

  7. def get_rows(csv_file):
         reader = csv.reader(open(csv_file))
         # Get the column names, lowercased.
         column_names = tuple(k.lower() for k in next(reader))
         for row in reader:
             yield dict(zip(column_names, row))
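
A minimal sketch of the csv.DictReader variant suggested on slide 6; the restkey/restval names here are arbitrary, just one way to absorb rows that don't have the expected number of columns:

    import csv

    def read_rows(csv_path):
        with open(csv_path, newline='') as f:
            # restkey/restval keep ragged rows from failing silently or
            # raising KeyErrors later on.
            reader = csv.DictReader(f, restkey='_extra', restval=None)
            # Lowercase the column names once, up front.
            reader.fieldnames = [name.lower() for name in reader.fieldnames]
            for row in reader:
                yield row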

  8. JSON
     import json
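
A minimal sketch of the usual pattern (the URL is hypothetical):

    import json
    import requests

    # Most open-data JSON feeds boil down to this.
    records = json.loads(requests.get('http://example.com/data.json').text)
    for record in records:
        print(record)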

  9. XML
     import lxml.etree
     Get rid of namespaces in the input document: http://bit.ly/LO5x7H
     A lot of XML datasets have a fairly flat structure. In these cases, convert the elements to dictionaries.
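
One common way to strip namespaces with lxml (a sketch; not necessarily the exact snippet behind the link above):

    import lxml.etree

    def strip_namespaces(root):
        # Rewrite every "{namespace}tag" to a bare tag so that simple
        # names work in find()/findall()/xpath() expressions.
        for el in root.iter():
            if isinstance(el.tag, str) and '}' in el.tag:
                el.tag = el.tag.split('}', 1)[1]
        # Drop the now-unused namespace declarations.
        lxml.etree.cleanup_namespaces(root)
        return root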

  10. <root> <items> <item> <id>3930277-ac</id> <name>Frodo Samwise</name> <age>56</age> <occupation>Tolkien scholar</occupation> <description>Short, with hairy feet</description> </item> ... </items> </root>

  11. import lxml.etree

      def get_items(xml_string):
          tree = lxml.etree.fromstring(xml_string)
          for el in tree.findall('items/item'):
              children = el.getchildren()
              # Keys are element names.
              keys = (c.tag for c in children)
              # Values are element text contents.
              values = (c.text for c in children)
              yield dict(zip(keys, values))

  12. HTML
      import requests
      import lxml.html
      I generally use XPath, but pyquery seems fine too. If the HTML is very funky, use html5lib as the parser. Sometimes data can be scraped from a chunk of JavaScript embedded in the page.
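
A minimal sketch of this setup; the URL and the XPath expression are invented for illustration:

    import requests
    import lxml.html

    def scrape_press_releases(url):
        # Fetch the page and parse it into an lxml element tree.
        doc = lxml.html.fromstring(requests.get(url).content)
        # Point the XPath at whatever elements actually hold the data.
        for link in doc.xpath('//div[@class="press-release"]//a'):
            yield {'title': link.text_content().strip(),
                   'url': link.get('href')}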

  13. Notes Please don’t use urllib2. If you do use html5lib for parsing, remember that you can do so from within lxml itself. http://lxml.de/html5parser.html
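
A sketch of the html5lib-via-lxml route (the URL is hypothetical):

    import requests
    from lxml.html import html5parser

    # html5lib does the parsing, but the result is still an lxml tree.
    doc = html5parser.fromstring(requests.get('http://example.com/funky.html').text)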

  14. Web browser If you need a real browser to scrape the data, it’s often not worth it. But there are tools out there. I wrote PunkyBrowster, but I can't really recommend it over ghost.py, which seems to have a better API, supports PySide and Qt, and has a more permissive license (MIT).

  15. PDF Not as hard as it looks. There are no Python libraries that handle all kinds of PDF documents in the wild. Use the pdftohtml command to convert the PDF to XML. When debugging, use pdftohtml to generate HTML that you can inspect in the browser. If the text in the PDF is in tabular format, you can group text cells by proximity.
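
A sketch of the pdftohtml step, assuming the poppler-utils pdftohtml binary is on your PATH:

    import subprocess
    import lxml.etree

    def pdf_to_tree(pdf_path, xml_path):
        # -xml writes one <text> element per text cell, with top/left/
        # width/height attributes you can use for grouping by proximity.
        subprocess.check_call(['pdftohtml', '-xml', pdf_path, xml_path])
        return lxml.etree.parse(xml_path)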

  16. Notes The “group by proximity” strategy works like this: 1. Find a text cell that has a very distinct pattern (probably a date cell). This is your “anchor”. 2. Find all cells that have the same row position as the anchor (possibly off by a few pixels). 3. Figure out which grouped cells belong to which fields based on column position.
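
A sketch of step 2, assuming each cell is a dict built from those top/left attributes:

    def cells_in_row(anchor, cells, tolerance=3):
        # Collect every text cell whose vertical position matches the
        # anchor cell's, give or take a few pixels, then sort the match
        # left-to-right so columns come out in order.
        return sorted(
            (c for c in cells if abs(c['top'] - anchor['top']) <= tolerance),
            key=lambda c: c['left'])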

  17. RSS/Atom
      import feedparser
      Sometimes feedparser can’t handle custom fields, and you’ll have to fall back to lxml.etree. Unfortunately, plenty of RSS feeds are not compliant XML. Either do some custom munging or try html5lib.
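
A minimal feedparser sketch (the feed URL is hypothetical):

    import feedparser

    feed = feedparser.parse('http://example.com/news.rss')
    for entry in feed.entries:
        # Standard fields are attributes; custom fields may be missing,
        # so use .get() and fall back to lxml.etree if they aren't there.
        print(entry.title, entry.link, entry.get('published'))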

  18. Database dump If it’s a Microsoft Access file, use mdbtools to dump the data. Sometimes it’s a ZIP file containing CSV files, each of which corresponds to a separate table dump. Just load it all into a SQLite database and run queries on it.
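
A sketch of the "ZIP of CSVs into SQLite" route; the table and column handling is deliberately simple-minded (everything becomes TEXT):

    import csv
    import io
    import sqlite3
    import zipfile

    def load_zip_of_csvs(zip_path, db_path):
        # Every CSV in the ZIP becomes its own table, so the joins can
        # happen in SQL instead of in Python.
        conn = sqlite3.connect(db_path)
        with zipfile.ZipFile(zip_path) as zf:
            for name in zf.namelist():
                if not name.lower().endswith('.csv'):
                    continue
                table = name.rsplit('/', 1)[-1][:-4]
                reader = csv.reader(
                    io.TextIOWrapper(zf.open(name), encoding='utf-8', newline=''))
                columns = [c.lower() for c in next(reader)]
                conn.execute('CREATE TABLE "%s" (%s)' % (
                    table, ', '.join('"%s" TEXT' % c for c in columns)))
                conn.executemany(
                    'INSERT INTO "%s" VALUES (%s)' % (
                        table, ', '.join('?' * len(columns))),
                    (row for row in reader if len(row) == len(columns)))
        conn.commit()
        return conn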

  19. Notes We wrote code that simulated joins using lists of dictionaries. This was painful to write and not so much fun to read. Don’t do this.

  20. GIS I haven’t worked much with KML or SHP files. If an organization provides GIS files for download, they usually offer other options as well. Look for those instead.

  21. Mixed This is very common. For example: an organization offers a CSV download, but you have to scrape their web page to find the link for it.

  22. Components of a scraping system
      - Downloader
      - Cacher
      - Raw item retriever
      - Existing item detector
      - Item transformer
      - Status reporter

  23. Notes Caching is essential when scraping a dataset that involves a large number of HTML pages. Test runs can take hours if you’re making requests over the network. A good caching system pretty prints the files it downloads so you can more easily inspect them. Reporting is essential if you’re managing a group of scrapers. Since you KNOW that at least one of your scrapers will be broken at any time, you might as well know which ones are broken. A good reporting mechanism shows when your scrapers break, as well as when the dataset itself has issues (determined heuristically).
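
A bare-bones sketch of a caching downloader; the cache directory name and hashing scheme are arbitrary choices, and this version skips the pretty-printing step:

    import hashlib
    import os
    import requests

    CACHE_DIR = 'cache'  # hypothetical location for cached downloads

    def get(url):
        # Key the cache on a hash of the URL so repeated test runs read
        # from disk instead of hammering the source site for hours.
        path = os.path.join(CACHE_DIR, hashlib.sha1(url.encode('utf-8')).hexdigest())
        if os.path.exists(path):
            with open(path, 'rb') as f:
                return f.read()
        content = requests.get(url).content
        if not os.path.isdir(CACHE_DIR):
            os.makedirs(CACHE_DIR)
        with open(path, 'wb') as f:
            f.write(content)
        return content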

  24. Steps to writing a scraper
      - Find the data source
      - Find the metadata
      - Analysis (verify the primary key)
      - Develop
      - Test
      - Fix (repeat ∞ times)

  25. Notes The Analysis step should also include noting which fields should be lookup fields (see design pattern slide). The Testing step is always done on real data and has three phases: dry run (nothing added or updated), dry run with lookups (only lookups are added), and production run. I run all three phases on my local instance before deploying to production.

  26. A very useful tool for HTML scraping: Firefinder (http://bit.ly/kr0UOY), an extension for Firebug that lets you test CSS and XPath expressions on any page and visually inspect the results.

  27. Look, it’s Firefinder!

  28. Storing scraped data: Don’t create tables before you understand how you want to use the data. Consider using ZODB (or another non-relational DB). Adrian Holovaty’s talk on how EveryBlock avoided creating new tables for each dataset: http://bit.ly/Yl6VAZ (relevant part starts at 7:10)

  29. Design patterns If a field contains a finite number of possible values, use a lookup table instead of storing each value. Make a scraper superclass that incorporates common scraper logic.

  30. Notes The scraper superclass will probably have convenience methods for converting dates/times, cleaning HTML, looking for existing items, etc. It should also incorporate the caching and reporting logic.
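
A skeleton of what such a superclass might look like; this is a sketch of the idea, not EveryBlock's actual code:

    class BaseScraper(object):
        # Subclasses implement the source-specific parts; the base class
        # would also hold the caching, reporting, date-conversion and
        # HTML-cleaning helpers mentioned above.

        def get_rows(self):
            # Yield raw dictionaries from the data source.
            raise NotImplementedError

        def clean_row(self, row):
            # Convert dates/times, clean HTML, resolve lookup values, etc.
            raise NotImplementedError

        def exists(self, row):
            # Return True if this item was already saved on a previous run.
            raise NotImplementedError

        def save(self, row):
            raise NotImplementedError

        def run(self):
            for raw in self.get_rows():
                row = self.clean_row(raw)
                if not self.exists(row):
                    self.save(row)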

  31. Working with government data Some data sources are only available at certain times of day. Be careful about rate limiting and IP blocking. Data scraped from a web page shouldn’t be used for analyzing trends. When you’re stuck, give them a phone call.

  32. Notes If you do manage to find an actual person who will talk to you, keep a record of their contact information and do NOT lose it! They are your first line of defense when a dataset you rely on goes down.

  33. Pro tips When you don’t know what encoding the content is in, use charade, not chardet. Remember to clean any HTML you intend to display. If the dataset doesn’t allow filtering by date, it’s a lost cause (unless you just care about historical data). When your scraper fails, do NOT fix it. If a user complains, consider fixing it.
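
A sketch of the charade tip (charade exposes the same API as chardet):

    import charade

    def smart_decode(raw_bytes):
        # Guess the encoding when the server doesn't say (or lies), then
        # fall back to UTF-8 with replacement characters.
        guess = charade.detect(raw_bytes)
        return raw_bytes.decode(guess['encoding'] or 'utf-8', 'replace')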

  34. I am done. Questions?