Web Scraping with PHP Matthew Turland September 16, 2008
Lead Programmer for surgiSYS, LLC
PHP Community member
What is Web Scraping?
A two-stage process.
Stage 1: Retrieval. The client requests a resource (GET /some/resource ...) and the server responds (HTTP/1.1 200 OK ...) with a resource containing the data you want.
Stage 2: Analysis. The raw resource is transformed into usable data.
How is it different from...
Data mining: the two differ in focus.
Consuming web services: the two differ in data formats; web services use structured formats, while web scraping must handle whatever format the target uses.
Potential Applications
What: Data source. When: A web service is unavailable or data access is one-time only.
What: Crawlers and indexers. When: The remote data source offers no capabilities for search or data source integration.
What: Integration testing. When: Applications must be tested by simulating client behavior and ensuring responses are consistent with requirements.
Disadvantages
(slide diagram contrasting change, Δ, with equality, ==)
Legal Concerns
Check the target's legal agreements: TOS, TOU, EUA.
Scrape the original source, not an illegal syndicate of its data.
IANAL!
Know enough HTTP to...
Learn to use and troubleshoot an HTTP client like one of these:
- PEAR::HTTP_Client
- pecl_http
- Zend_Http_Client
- cURL
- Filesystem + Streams
Or roll your own!
Let's GET Started

GET /wiki/Main_Page HTTP/1.1
Host: en.wikipedia.org
(more headers follow...)

The first line is the request line: it contains the method or operation (GET), the URI address for the desired resource (/wiki/Main_Page), and the protocol version in use by the client (HTTP/1.1). Each following line is a header in "header name: header value" form.
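The anatomy above can be sketched in code; buildGetRequest is a hypothetical helper used only for illustration, since real HTTP clients assemble this text for you:

```php
<?php
// Build the raw text of a GET request like the one above.
function buildGetRequest($host, $path)
{
    return "GET {$path} HTTP/1.1\r\n" // request line: method, URI, protocol version
         . "Host: {$host}\r\n"        // header name: header value
         . "\r\n";                    // blank line ends the headers
}

echo buildGetRequest('en.wikipedia.org', '/wiki/Main_Page');
```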
Warning about GET
In principle: "Let's do this by the book." GET is a "safe operation" and should not change the resource it retrieves.
In reality: "'Safe operation'? Whatever." Plenty of applications use GET for operations with side effects anyway.
URI vs URL
1. Uniquely identifies a resource: URI.
2. Indicates how to locate a resource: URL.
3. A URL does both, and is thus human-usable.
More info in RFC 3986 Sections 1.1.3 and 1.2.2.
Query Strings

http://en.wikipedia.org/w/index.php?title=Query_string&action=edit

A question mark separates the resource address (URL) from the query string.
Equal signs separate parameter names and their respective values (here the parameter "title" has the value "Query_string").
Ampersands separate parameter name-value pairs.
URL Encoding
The parameter "first" with value "this is a field" and the parameter "second" with value "was it clear (already)?" encode to the query string:

first=this+is+a+field&second=was+it+clear+%28already%29%3F

This is also called percent encoding.
Handy PHP URL functions: urlencode and urldecode, $_SERVER['QUERY_STRING'], http_build_query($_GET).
More info on URL encoding in RFC 3986 Section 2.1.
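The encoding above can be produced with PHP's built-in functions; a quick sketch:

```php
<?php
// http_build_query() encodes an entire array of parameters at once.
$params = array(
    'first'  => 'this is a field',
    'second' => 'was it clear (already)?',
);
echo http_build_query($params);
// first=this+is+a+field&second=was+it+clear+%28already%29%3F

// urlencode() uses '+' for spaces (HTML form style);
// rawurlencode() uses '%20' (strict RFC 3986 style).
echo urlencode('was it clear (already)?'); // was+it+clear+%28already%29%3F
echo rawurlencode('this is a field');      // this%20is%20a%20field
```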
POST Requests
The two most common HTTP operations:

1. GET
GET /some/resource HTTP/1.1
Header: Value
...
(request body: none)

2. POST
POST /some/resource HTTP/1.1
Header: Value
...
(request body)

POST targets a new resource or an updated resource (e.g. /w/index.php).
POST Request Example

POST /w/index.php?title=Wikipedia:Sandbox HTTP/1.1
Host: en.wikipedia.org
Content-Type: application/x-www-form-urlencoded

[request body ... look familiar? It uses the same format as a query string.]

A blank line separates the request headers and body. application/x-www-form-urlencoded is the content type for data submitted via an HTML form (multipart/form-data for file uploads). Note: most browsers have a query string length limit. Lowest known common denominator: IE7, where strlen(entire URL) <= 2,048 bytes. This limit is not standardized and only applies to query strings, not request bodies.
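A POST like the one above can be sent with PHP's streams support (the Filesystem + Streams option); this is a sketch, and the form field name wpTextbox1 is a made-up example:

```php
<?php
// Encode the form data just like a query string.
$body = http_build_query(array('wpTextbox1' => 'Hello, sandbox!')); // hypothetical field

$context = stream_context_create(array(
    'http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => $body,
    ),
));

// Uncomment to actually send the request:
// $response = file_get_contents(
//     'http://en.wikipedia.org/w/index.php?title=Wikipedia:Sandbox',
//     false,
//     $context
// );
```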
HEAD /wiki/Main_Page HTTP/1.1
Host: en.wikipedia.org

HTTP/1.1 200 OK
Header: Value
...

Same as GET, with two exceptions:
1. The request method is HEAD instead of GET.
2. There is no response body.
Why HEAD vs GET? Sometimes headers are all you want.
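When headers are all you want, PHP's get_headers() will issue the request for you; parsing a raw header block by hand is also simple. The raw response text below is canned so the sketch runs without a network connection:

```php
<?php
// Parse a raw block of response headers into an associative array.
// get_headers('http://en.wikipedia.org/wiki/Main_Page') would fetch real ones.
$raw = "HTTP/1.1 200 OK\r\n"
     . "Server: Apache\r\n"
     . "Content-Type: text/html; charset=UTF-8\r\n";

$lines = explode("\r\n", trim($raw));
$statusLine = array_shift($lines); // "HTTP/1.1 200 OK"

$headers = array();
foreach ($lines as $line) {
    list($name, $value) = explode(': ', $line, 2);
    $headers[$name] = $value;
}

echo $headers['Content-Type']; // text/html; charset=UTF-8
```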
Responses

HTTP/1.0 200 OK
Server: Apache
X-Powered-By: PHP/5.2.5
...
[body]

The status line contains the lowest protocol version required to process the response (HTTP/1.0), the response status code (200), and the response status description (OK). Headers have the same format as in requests, but different headers are used (see RFC 2616 Section 14).
cURL Examples

Fatal error: Allowed memory size of n00b bytes exhausted (tried to allocate 1337 bytes) in /this/slide.php on line 1

Just kidding. Really, the equivalent cURL code for the previous examples is so verbose that it won't fit on one slide, and I don't think it's deserving of multiple slides. See the PHP Manual, Context Options, or my php|architect article for more info.
RFC 2616, "Hypertext Transfer Protocol -- HTTP/1.1"
RFC 3986, "Uniform Resource Identifier (URI): Generic Syntax"
"HTTP: The Definitive Guide" (ISBN 1565925092)
"HTTP Pocket Reference: HyperText Transfer Protocol" (ISBN 1565928628)
"HTTP Developer's Handbook" (ISBN 0672324547) by Chris Shiflett
Ben Ramsey's blog series on HTTP
Analysis
Raw resource to usable data. PHP extensions and features that can help: DOM, XMLReader, SimpleXML, XSL, tidy, PCRE, string functions, JSON, ctype, XML Parser.
tidy is good for correcting markup malformations.
String functions and PCRE can be used for manual cleanup prior to using a parsing extension.
DOM is generally forgiving when parsing malformed markup. It generates warnings that can be suppressed.
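A sketch of DOM's forgiving parser at work on broken markup; libxml_use_internal_errors() is one way to suppress the warnings:

```php
<?php
// DOM will parse this malformed markup (unclosed tags, no doctype)
// and still build a usable tree.
$html = '<html><body><p>First<p>Second</body>';

libxml_use_internal_errors(true); // collect warnings instead of emitting them

$doc = new DOMDocument();
$doc->loadHTML($html);

$paragraphs = $doc->getElementsByTagName('p');
echo $paragraphs->length;               // 2
echo $paragraphs->item(0)->textContent; // First

libxml_clear_errors();
```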
Save a static copy of your target, use a validator on the input (ex: the W3C Markup Validator), fix validation errors manually, and write code to automatically apply fixes.
DOM and SimpleXML are tree-based parsers that store the entire document in memory to provide full access.
XMLReader is a pull-based parser that iterates over nodes in the document and is less memory-intensive.
SAX (available in PHP as the XML Parser extension) is also stream-oriented and memory-friendly, but it is push-based rather than pull-based: the parser drives processing through event-based callbacks instead of letting you pull nodes on demand.
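A minimal XMLReader sketch over an in-memory document; the element names are invented for the example:

```php
<?php
// Pull-parse a small document node by node; only the current node
// is held in memory, unlike the full trees built by DOM and SimpleXML.
$xml = '<items><item>one</item><item>two</item><item>three</item></items>';

$reader = new XMLReader();
$reader->XML($xml);

$values = array();
while ($reader->read()) {
    if ($reader->nodeType === XMLReader::ELEMENT && $reader->name === 'item') {
        $values[] = $reader->readString(); // text content of the current element
    }
}
$reader->close();

// $values === array('one', 'two', 'three')
```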
Nothing "official" for CSS. Find something like CSSTidy .
PCRE can be used for parsing. Last resort, though.
Make as few assumptions (and as many assertions) about the target as possible.
Validation provides additional sanity checks for your application.
PCRE can be used to form pattern-based assertions about extracted data.
ctype can be used to form primitive type-based assertions.
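A sketch of both kinds of assertion applied to scraped values; the date format and the values themselves are arbitrary examples:

```php
<?php
// Validate extracted data before trusting it.
$scrapedDate  = '2008-09-16';
$scrapedCount = '42';

// PCRE: pattern-based assertion (here, an ISO-style date).
assert(preg_match('/^\d{4}-\d{2}-\d{2}$/', $scrapedDate) === 1);

// ctype: primitive type-based assertion (every character is a digit).
assert(ctype_digit($scrapedCount));
```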
XSL can be used to extract data from an XML-compatible document and retrofit it to a format defined by an XSL template.
To my knowledge, this capability is unfortunately unique to XML-compatible data.
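A minimal XSLTProcessor sketch; the source document and the template are invented examples:

```php
<?php
// Extract data from an XML document and retrofit it to a format
// defined by an XSL template.
$xml = new DOMDocument();
$xml->loadXML('<users><user>alice</user><user>bob</user></users>');

$xslSource = <<<XSL
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/users">
    <xsl:for-each select="user"><xsl:value-of select="."/>,</xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
XSL;

$xsl = new DOMDocument();
$xsl->loadXML($xslSource);

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);

echo $proc->transformToXML($xml); // alice,bob,
```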
Use components like template engines to separate formatting of data from retrieval/analysis logic.
Remain in keeping with the DRY principle.
Develop components that can be reused across projects. Ex: DomQuery, Zend_Dom.
Make an effort to minimize application-specific logic. This applies to both retrieval and analysis.
These practices apply to long-term, real-time web scraping applications.
Affirm conditions of behavior and output of the target application.
Use in the application during runtime to avoid Bad Things (tm) happening when the target application changes.
Include in unit tests of the application. You are using unit tests, right?
Write tests on target application output stored in local files that can be run sans internet during development.
If possible/feasible/appropriate, write "live tests" that actively test using assertions on the target application.
Run live tests when the target appears to have changed (because your web scraping application breaks).
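A sketch of such a local-file test; extractTitle and the fixture are invented for the example:

```php
<?php
// A test that runs sans internet: feed a locally saved copy of the
// target page to the extraction code and assert on the result.
function extractTitle($html)
{
    libxml_use_internal_errors(true);
    $doc = new DOMDocument();
    $doc->loadHTML($html);
    $titles = $doc->getElementsByTagName('title');
    return $titles->length > 0 ? trim($titles->item(0)->textContent) : '';
}

// In a real test this would come from file_get_contents('fixtures/main_page.html').
$saved = '<html><head><title>Main Page</title></head><body>...</body></html>';

assert(extractTitle($saved) === 'Main Page');
```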
No heckling... OK, maybe just a little.
I will hang around afterward if you have questions, points for discussion, or just want to say hi. It's cool, I don't bite or have cooties or anything. I have business cards too.
I generally blog about my experiences with web scraping and PHP at http://ishouldbecoding.com. </shameless_plug>