Web Scraper Shibuya.pm tech talk #8

Transcript

  • 1. Practical Web Scraping with Web::Scraper Tatsuhiko Miyagawa [email_address] Six Apart, Ltd. / Shibuya Perl Mongers Shibuya.pm Tech Talks #8
  • 2.
    • Practical Web Scraping
    • with Web::Scraper
  • 3. Web pages are built using text-based mark-up languages ( HTML and XHTML ), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human consumption, and frequently mix content with presentation. Thus, screen scrapers were reborn in the web era to extract machine-friendly data from HTML and other markup. http://en.wikipedia.org/wiki/Screen_scraping
  • 4. Web pages are built using text-based mark-up languages ( HTML and XHTML ), and frequently contain a wealth of useful data in text form. However, most web pages are designed for human consumption, and frequently mix content with presentation. Thus, screen scrapers were reborn in the web era to extract machine-friendly data from HTML and other markup. http://en.wikipedia.org/wiki/Screen_scraping
  • 5.
    • "Screen-scraping
    • is so 1999!"
  • 6.  
  • 7.  
  • 8.
    • RSS is metadata,
    • not a complete
    • HTML replacement
  • 9.
    • Practical Web Scraping
    • with Web::Scraper
  • 10.
    • What's wrong with
    • LWP & Regexp?
  • 11.  
  • 12. <td>Current <strong>UTC</strong> (or GMT/Zulu)-time used: <strong id="ctu">Monday, August 27, 2007 at 12:49:46</strong> <br />
  • 13. <td>Current <strong>UTC</strong> (or GMT/Zulu)-time used: <strong id="ctu">Monday, August 27, 2007 at 12:49:46</strong> <br />
    > perl -MLWP::Simple -le '$c = get("http://timeanddate.com/worldclock/"); $c =~ m@<strong id="ctu">(.*?)</strong>@ and print $1'
    Monday, August 27, 2007 at 12:49:46
  • 14.
    • It works!
  • 15. WWW::MySpace 0.70
  • 16. WWW::Search::Ebay 2.231
  • 17. WWW::Mixi 0.50
  • 18.
    • It works …
  • 19.
    • There are
    • 3 problems
    • (at least)
  • 20.
    • (1)
    • Fragile
    • Easy to break even with slight HTML changes
    • (like newlines, order of attributes etc.)
  • 21.
    • (2)
    • Hard to maintain
    • Regular expression based scrapers are good
    • Only when they're used in write-only scripts
  • 22.
    • (3)
    • Improper
    • HTML & encoding
    • handling
  • 23. <span class="message">I &hearts; Shibuya</span>
    > perl -e '$c =~ m@<span class="message">(.*?)</span>@ and print $1'
    I &hearts; Shibuya
  • 24. <span class="message">I &hearts; Shibuya</span>
    > perl -MHTML::Entities -e '$c =~ m@<span class="message">(.*?)</span>@ and print decode_entities($1)'
    I ♥ Shibuya
  • 25. <span class="message">Perl が大好き! </span>
    > perl -MHTML::Entities -MEncode -e '$c =~ m@<span class="message">(.*?)</span>@ and print decode_entities(decode_utf8($1))'
    Wide character in print at -e line 1.
    Perl が大好き!
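    (A side note not on the original slide: the "Wide character" warning comes from printing decoded text to a byte-oriented STDOUT; assuming a UTF-8 terminal, it goes away once the output layer is declared, e.g. with perl's -CS switch.)
    > perl -CS -MHTML::Entities -MEncode -e '$c =~ m@<span class="message">(.*?)</span>@ and print decode_entities(decode_utf8($1))'
    Perl が大好き!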
  • 26.
    • The "right" way
    • of screen-scraping
  • 27.
    • (1), (2)
    • Maintainable
    • Less fragile
  • 28.
    • Use XPath
    • and CSS Selectors
  • 29.
    • XPath
    • HTML::TreeBuilder::XPath
    • XML::LibXML
  • 30. XPath
    <td>Current <strong>UTC</strong> (or GMT/Zulu)-time used: <strong id="ctu">Monday, August 27, 2007 at 12:49:46</strong> <br />
    • use HTML::TreeBuilder::XPath;
    • my $tree = HTML::TreeBuilder::XPath->new_from_content($content);
    • print $tree->findnodes('//strong[@id="ctu"]')->shift->as_text;
    • # Monday, August 27, 2007 at 12:49:46
  • 31.
    • CSS Selectors
    • "XPath for HTML coders"
    • "XPath for people who hate XML"
  • 32. CSS Selectors
    • body { font-size: 12px; }
    • div.article { padding: 1em }
    • span#count { color: #fff }
  • 33.
    • XPath:
    • //strong[@id="ctu"]
    • CSS Selector:
    • strong#ctu
  • 34. CSS Selectors
    <td>Current <strong>UTC</strong> (or GMT/Zulu)-time used: <strong id="ctu">Monday, August 27, 2007 at 12:49:46</strong> <br />
    • use HTML::TreeBuilder::XPath;
    • use HTML::Selector::XPath qw(selector_to_xpath);
    • my $tree = HTML::TreeBuilder::XPath->new_from_content($content);
    • my $xpath = selector_to_xpath "strong#ctu";
    • print $tree->findnodes($xpath)->shift->as_text;
    • # Monday, August 27, 2007 at 12:49:46
  • 35. Complete Script
    • #!/usr/bin/perl
    • use strict;
    • use warnings;
    • use Encode;
    • use LWP::UserAgent;
    • use HTTP::Response::Encoding;
    • use HTML::TreeBuilder::XPath;
    • use HTML::Selector::XPath qw(selector_to_xpath);
    • my $ua = LWP::UserAgent->new;
    • my $res = $ua->get("http://www.timeanddate.com/worldclock/");
    • if ($res->is_error) { die "HTTP GET error: ", $res->status_line; }
    • my $content = decode $res->encoding, $res->content;
    • my $tree = HTML::TreeBuilder::XPath->new_from_content($content);
    • my $xpath = selector_to_xpath("strong#ctu");
    • my $node = $tree->findnodes($xpath)->shift;
    • print $node->as_text;
  • 36.
    • Robust,
    • Maintainable,
    • and
    • Sane character handling
  • 37. Example (before)
    <td>Current <strong>UTC</strong> (or GMT/Zulu)-time used: <strong id="ctu">Monday, August 27, 2007 at 12:49:46</strong> <br />
    > perl -MLWP::Simple -le '$c = get("http://timeanddate.com/worldclock/"); $c =~ m@<strong id="ctu">(.*?)</strong>@ and print $1'
    Monday, August 27, 2007 at 12:49:46
  • 38. Example (after)
    • #!/usr/bin/perl
    • use strict;
    • use warnings;
    • use Encode;
    • use LWP::UserAgent;
    • use HTTP::Response::Encoding;
    • use HTML::TreeBuilder::XPath;
    • use HTML::Selector::XPath qw(selector_to_xpath);
    • my $ua = LWP::UserAgent->new;
    • my $res = $ua->get("http://www.timeanddate.com/worldclock/");
    • if ($res->is_error) { die "HTTP GET error: ", $res->status_line; }
    • my $content = decode $res->encoding, $res->content;
    • my $tree = HTML::TreeBuilder::XPath->new_from_content($content);
    • my $xpath = selector_to_xpath("strong#ctu");
    • my $node = $tree->findnodes($xpath)->shift;
    • print $node->as_text;
  • 39.
    • but …
    • long and boring
  • 40.
    • Practical Web Scraping
    • with Web::Scraper
  • 41.
    • Web scraping toolkit
    • inspired by scrapi.rb
    • DSL-ish
  • 42. Example (before)
    • #!/usr/bin/perl
    • use strict;
    • use warnings;
    • use Encode;
    • use LWP::UserAgent;
    • use HTTP::Response::Encoding;
    • use HTML::TreeBuilder::XPath;
    • use HTML::Selector::XPath qw(selector_to_xpath);
    • my $ua = LWP::UserAgent->new;
    • my $res = $ua->get("http://www.timeanddate.com/worldclock/");
    • if ($res->is_error) { die "HTTP GET error: ", $res->status_line; }
    • my $content = decode $res->encoding, $res->content;
    • my $tree = HTML::TreeBuilder::XPath->new_from_content($content);
    • my $xpath = selector_to_xpath("strong#ctu");
    • my $node = $tree->findnodes($xpath)->shift;
    • print $node->as_text;
  • 43. Example (after)
    • #!/usr/bin/perl
    • use strict;
    • use warnings;
    • use Web::Scraper;
    • use URI;
    • my $s = scraper {
    • process "strong#ctu", time => 'TEXT';
    • result 'time';
    • };
    • my $uri = URI->new("http://timeanddate.com/worldclock/");
    • print $s->scrape($uri);
  • 44. Basics
    • use Web::Scraper;
    • my $s = scraper {
    • # DSL goes here
    • };
    • my $res = $s->scrape($uri);
  • 45. process
    • process $selector,
    • $key => $what,
    • … ;
  • 46.
    • $selector:
    • CSS Selector
    • or
    • XPath (start with /)
  • 47.
    • $key:
    • key for the result hash
    • append "[]" for looping
  • 48.
    • $what:
    • '@attr'
    • 'TEXT'
    • 'RAW'
    • Web::Scraper
    • sub { … }
    • Hash reference
  • 49. <ul class="sites"> <li><a href="http://vienna.openguides.org/">OpenGuides</a></li> <li><a href="http://vienna.yapceurope.org/">YAPC::Europe</a></li> </ul>
  • 50.
    • process "ul.sites > li > a",
    • 'urls[]' => '@href';
    • # { urls => [ … ] }
    <ul class="sites"> <li><a href="http://vienna.openguides.org/">OpenGuides</a></li> <li><a href="http://vienna.yapceurope.org/">YAPC::Europe</a></li> </ul>
  • 51.
    • process '//ul[@class="sites"]/li/a',
    • 'names[]' => 'TEXT';
    • # { names => [ 'OpenGuides', … ] }
    <ul class="sites"> <li><a href="http://vienna.openguides.org/">OpenGuides</a></li> <li><a href="http://vienna.yapceurope.org/">YAPC::Europe</a></li> </ul>
  • 52.
    • process "ul.sites > li",
    • 'sites[]' => scraper {
    • process 'a',
    • link => '@href', name => 'TEXT';
    • };
    • # { sites => [ { link => …, name => … },
    • # { link => …, name => … } ] };
    <ul class="sites"> <li><a href="http://vienna.openguides.org/">OpenGuides</a></li> <li><a href="http://vienna.yapceurope.org/">YAPC::Europe</a></li> </ul>
  • 53.
    • process "ul.sites > li > a",
    • 'sites[]' => sub {
    • # $_ is HTML::Element
    • +{ link => $_->attr('href'), name => $_->as_text };
    • };
    • # { sites => [ { link => …, name => … },
    • # { link => …, name => … } ] };
    <ul class="sites"> <li><a href="http://vienna.openguides.org/">OpenGuides</a></li> <li><a href="http://vienna.yapceurope.org/">YAPC::Europe</a></li> </ul>
  • 54.
    • process "ul.sites > li > a",
    • 'sites[]' => {
    • link => '@href', name => 'TEXT'
    • };
    • # { sites => [ { link => …, name => … },
    • # { link => …, name => … } ] };
    <ul class="sites"> <li><a href="http://vienna.openguides.org/">OpenGuides</a></li> <li><a href="http://vienna.yapceurope.org/">YAPC::Europe</a></li> </ul>
  • 55. result
    • result;
    • # get stash as hashref (default)
    • result @keys;
    • # get stash as hashref containing @keys
    • result $key;
    • # get value of stash $key;
    my $s = scraper { process …; process …; result 'foo', 'bar'; };
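    (A minimal sketch, not from the original slides, of how result narrows what scrape() returns; the URL is just a placeholder.)
    use Web::Scraper;
    use URI;
    my $s = scraper {
        process "title", title     => 'TEXT';
        process "a",     'links[]' => '@href';
        result 'title';   # scrape() now returns only the title string
    };
    print $s->scrape(URI->new("http://example.com/"));
    # without the result line, scrape() would return the whole stash:
    # { title => ..., links => [ ... ] }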
  • 56.
    • Live Demo
  • 57.
    • Tools
  • 58.
    • > cpan Web::Scraper
    • comes with 'scraper' CLI
  • 59.
    • > scraper http://example.com/
    • scraper> process "a", "links[]" => '@href';
    • scraper> d
    • $VAR1 = {
    • links => [
    • 'http://example.org/',
    • 'http://example.net/',
    • ],
    • };
    • scraper> y
    • ---
    • links:
    • - http://example.org/
    • - http://example.net/
  • 60.
    • > scraper /path/to/foo.html
    • > GET http://example.com/ | scraper
  • 61.
    • Recent Updates
  • 62.
    • 0.13
    • 'c' and 'c all'
    • WARN in scraper
  • 63.
    • 0.14
    • automatic absolute URI for link elements (a@href, img@src)
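    (A minimal sketch, not from the original slides; example.com and the relative link are placeholders.)
    use Web::Scraper;
    use URI;
    # Suppose the fetched page contains: <a href="/about">About</a>
    my $s = scraper {
        process "a", 'links[]' => '@href';
    };
    my $res = $s->scrape(URI->new("http://example.com/"));
    # With 0.14+, each entry in $res->{links} is resolved against the page URI,
    # e.g. http://example.com/about, rather than the raw "/about" string.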
  • 64.
    • 0.14 (cont.)
    • 'RAW' and 'HTML'
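    (A minimal sketch, not from the original slides, contrasting 'TEXT' with the new 'HTML'/'RAW' targets; the markup is made up.)
    use Web::Scraper;
    my $html = '<div class="entry"><p>I <b>love</b> Perl</p></div>';
    my $s = scraper {
        process "div.entry",
            text => 'TEXT',   # "I love Perl" (tags stripped)
            html => 'HTML';   # inner markup, roughly "<p>I <b>love</b> Perl</p>"
    };
    my $res = $s->scrape($html);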
  • 65.
    • 0.15
    • $Web::Scraper::UserAgent
    • $scraper->user_agent
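    (A minimal sketch, not from the original slides, of the 0.15 hooks named above; the agent string and URL are placeholders.)
    use Web::Scraper;
    use LWP::UserAgent;
    use URI;
    my $s = scraper {
        process "title", title => 'TEXT';
    };
    # per-scraper:
    $s->user_agent( LWP::UserAgent->new(agent => "MyScraper/0.1") );
    # or process-wide:
    # $Web::Scraper::UserAgent = LWP::UserAgent->new(agent => "MyScraper/0.1");
    my $res = $s->scrape(URI->new("http://example.com/"));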
  • 66.
    • 0.19
    • support encoding detection w/ META tags
  • 67.
    • TODO
  • 68.
    • Web::Scraper
    • Needs documentation
  • 69.
    • More examples
    • to put in eg/ directory
  • 70.
    • Alternative API
    • inspired by scRUBYt!
  • 71.
    • OO Backend API
    • if you don't like the DSL
  • 72.
    • integrate with
    • WWW::Mechanize
    • and Test::WWW::Declare
  • 73.
    • XPath Auto-suggestion
    • off of DOM + element
    • DOM + XPath => Element
    • DOM + Element => XPath?
    • (Template::Extract?)
  • 74.
    • generic XML support
    • (e.g. RSS/Atom feeds)
  • 75.
    • extensible text filter
    • date, geo, hCards (microformats)
    <span class="entry-date">October 1st, 2007 17:13:31 +0900</span>
    process ".entry-date", date => 'TEXT :rfc822';
  • 76.
    • Summary
  • 77.
    • Web::Scraper
    • inspired by scrapi
  • 78.
    • easy, fun, maintainable
    • & less fragile
  • 79.
    • CSS selector
    • XPath
  • 80.
    • Questions?
  • 81.
    • Thank you
    • http://search.cpan.org/dist/Web-Scraper
    • http://www.slideshare.net/miyagawa/webscraper
