Web Scraping With Python


Presented at the Data Wranglers DC December meetup: http://www.meetup.com/Data-Wranglers-DC/events/151563622/

There's a lot of data sitting on websites just waiting to be combined with data you have sitting on your servers. During this talk, Robert Dempsey will show you how to create a dataset using Python by scraping websites for the data you want.

  • Story – Palamee using the computer. How many of you have children? Don’t worry – I won’t subject you to this ad.
  • Questions:
    1. Raise your hand if any part of data wrangling is a part of your job.
    2. Of those who raised your hand, what percentage of your time, on average, would you say you spend doing data wrangling tasks?
    3. For those who aren’t doing this day-to-day: why did you join this group? What do you want to get out of it?
    4. Look around you – these are the people that are going to help you get from where you are to where you want to be.
    5. That is the purpose of this group – to bring like-minded individuals together so that we can all improve our craft and our lives.
  • Introductions. We’re going to do this a bit differently. For the next 5 minutes, I’d like you to introduce yourself to the person on your left and to the person on your right.
  • We’re a community. And part of that community lives on LinkedIn. Please join the community, start discussions, share resources, ask questions. As with every community, there are some rules >>
  • Group Rules
  • A huge thank you to our venue sponsor, Logikcull. Logikcull.com helps businesses and law firms significantly reduce the cost of litigation by automating eDiscovery and making it drop-dead easy to find both what you want and what you don’t want in just a few clicks.
  • Here’s how to get on the Internet, which you’ll definitely want to do in order to download Python packages and code.
  • Our topic tonight: web scraping with Python. What is web scraping? >>
  • Web scraping is using a computer to extract information from websites. Reasons to scrape:
    • Lead lists
    • Better understand existing clients
    • Better understand potential clients (Gallup integration with lead forms)
    • Augment data I already have
    You can either build a web scraper, or you can buy one.
  • When to buy: you need something simple and fast.FMiner is one of those solutions. It’s one of the few I’ve found that runs on Mac and Windows. I’ve used it before and it’s pretty cool.A few others that I can’t vouch for but that got good reviews are >>
  • WebSundew
  • Visual Web Ripper
  • Screen-Scraper. There are many commercial options available, but what if you want to build your own? >>
  • When to build:
    • You need something truly custom
    • Web pages are using crappy markup, making them harder to fully automate
    If you want to get hardcore and geeky >>
  • XPath is used to navigate through elements and attributes in an XML document. Basically it’s the path to different elements on a web page. We’ll see this later on. A few browser extensions to help you:
    • Chrome: XPath Helper – Adam Sadovsky
    • Firefox: XPath Finder
    There are a few ways you can build your own scraper >>
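To make the "path to different elements" idea concrete, here is a minimal sketch using only Python's standard library: `xml.etree.ElementTree` supports a useful subset of XPath. The HTML snippet and class names are made up for illustration.

```python
# XPath-style lookup with the standard library. ElementTree implements a
# limited XPath subset, which is enough to show how path expressions work.
import xml.etree.ElementTree as ET

html = """
<html>
  <body>
    <div class="company">
      <h1>Acme Corp</h1>
      <span class="industry">Manufacturing</span>
    </div>
  </body>
</html>
"""

root = ET.fromstring(html)
# A path expression locates elements much like a file path locates files:
# ".//div[@class='company']/h1" means "any div with class 'company', then its h1".
name = root.find(".//div[@class='company']/h1").text
industry = root.find(".//span[@class='industry']").text
print(name, industry)  # Acme Corp Manufacturing
```

Note that real web pages are rarely well-formed XML, which is why the talk uses a forgiving HTML parser later on; the path idea carries over unchanged.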
  • My two favorite programming languages are Python and Ruby. Both are relatively easy to learn, and there are numerous examples of doing just about everything in both languages. When using Python:
    • Our method
    • Scrapy
    If you would rather use Ruby >>
  • Like with Python, when using Ruby, you can either build it yourself or use a framework someone created.Depending on what you need to do though, there is a third alternative – browser extensions.
  • The best one I’ve found is for Chrome and is simply called Scraper. This is great if you want to pull data from a website that’s stored in a table. If you’re interested in simply pulling an entire website or a single page for later offline processing, there are two very good options for you >>
  • SiteSucker: a little utility for pulling down entire websites. Wget: a command-line utility on Mac and Linux that allows you to retrieve files using HTTP, HTTPS, and FTP. Before we get into the how-to, let’s look at a few ways websites will try to stop you from scraping them >>
  • There are a number of ways to block scrapers; here are the ones I’ve encountered most. So that none of this happens to you, let’s look at some rules of the road >>
  • Emulate a human user. Put timers into your code so you don't get blocked – we'll see an example of this in the code. Declare a known browser when scraping.
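Both habits above can be sketched with the standard library alone: a random pause between requests, and a request that declares a known browser. The URL, the user-agent string, and the function names here are illustrative, not from the talk's code.

```python
# "Polite" scraping sketch: random delays between requests plus a declared
# browser user agent, so traffic doesn't look like a burst from a script.
import random
import time
import urllib.request

# An example desktop-browser user-agent string (illustrative).
USER_AGENT = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) "
              "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0 Safari/537.36")

def polite_request(url):
    """Build a request that identifies itself as a normal browser."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})

def polite_pause(low=2.0, high=7.0):
    """Sleep a random number of seconds between requests."""
    time.sleep(random.uniform(low, high))

req = polite_request("http://example.com/page1")
# urllib stores header names capitalized, hence "User-agent" here.
print(req.get_header("User-agent"))
```

In a real scraping loop you would call `polite_pause()` between each `urlopen`, with `low`/`high` tuned to how aggressively the target site blocks traffic.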
  • Use a proxy server. Mac: NetShade. Windows: WinGate.
  • Don’t hammer away at a website until it’s a mess.
  • Observe the terms of service. Whether or not you explicitly agreed to one, you have. With that groundwork laid, let’s get to the fun!
  • A note on pseudocode: I suggest first writing the steps you want your code to take before writing any code. This makes it much easier to create your solution.
    • An opener allows us to provide the website with a full-blown user agent string.
    • ARPC company url: http://www.linkedin.com/company/45881
    Let’s look at the code! >>
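As a minimal sketch of what "build an opener" means: `urllib.request.build_opener` produces an object whose `addheaders` are attached to every request it makes. The talk's actual code is in the linked GitHub repo; this stands alone and keeps the network call commented out.

```python
# Opener sketch: every request made through this opener carries a
# full browser user-agent string instead of urllib's default.
import urllib.request

opener = urllib.request.build_opener()
opener.addheaders = [("User-Agent",
                      "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) "
                      "AppleWebKit/537.36 (KHTML, like Gecko)")]

# The company URL from the slides.
url = "http://www.linkedin.com/company/45881"
# html = opener.open(url).read()  # uncomment to actually fetch the page
```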
  • Any questions?
  • Let’s have a good time. We’ve got some beverages for you. Please stay, ask any questions you have, and enjoy yourself.And remember >>
  • Don’t let this be you!

    1. Web Scraping With Python – Robert Dempsey
    2. Important Disclaimer
       • There is a lot of data provided freely on the Internet.
       • Not all data is free, and not all site owners allow you to scrape data from their sites.
       • ALWAYS check the terms of service for a website BEFORE scraping it.
       • Be responsible, and stay within legal limits at all times.
    3. Data Wranglers LinkedIn Group – where the discussions happen.
    4. Group Rules
       • If you have a question – ask it.
       • Be polite and courteous to others.
       • Turn your cell phone to vibrate when you come to the meeting.
       • You know more than you think. At some point, I’d like you to share with us something you’ve learned so we can all benefit from it.
    5. Twitter Hashtag: #dwdc
    6. Connecting to the Internet
       • Wireless Network: Logik_guest
       • Password: logik1234
    7. www.fminer.com
    8. www.websundew.com
    9. www.visualwebripper.com
    10. screen-scraper.com
    11. XPath
        • XPath Helper – Adam Sadovsky (Chrome)
        • XPath Finder (Firefox)
    12. DIY Scraper – Python
        • Our method: BeautifulSoup4 + Python libraries
        • Scrapy – an application framework (you still have to code): http://scrapy.org
    13. DIY Scraper – Ruby
        • Bare metal: Nokogiri + Mechanize
        • Frameworks:
          • Upton: https://github.com/propublica/upton
          • Wombat: https://github.com/felipecsl/wombat
    14. Browser Extensions For Scraping
        • Scraper: https://chrome.google.com/webstore/detail/scraper/mbigbapnjcgaffohmbkdlecaccepngjd
    15. Grabbing The Full Monty
        • SiteSucker: sitesucker.us
        • Wget: http://www.gnu.org/s/wget/
    16. The Ways Websites Try To Block Us
        • CSS sprites
        • Honeypots
        • IP blocking
        • Captcha
        • Login
        • Ad popups
    17. Proxy Servers
        • NetShade: http://raynersoftware.com/netshade/
        • WinGate: http://www.wingate.com/
    18. Installs
        • Continuum.io Anaconda: http://continuum.io/downloads
        • BeautifulSoup: http://www.crummy.com/software/BeautifulSoup/
          • pip install beautifulsoup4
          • easy_install beautifulsoup4
        • unicodecsv: pip install unicodecsv
    19. General Steps
        • Find the webpage(s) you want
        • Get the path to the data using XPath or the CSS selectors
        • Write the code
        • Test
        • Scrape
        • Export to CSV
        • Enjoy your data!
    20. #1: Scraping the Inc. 5000 with Scraper
        1. Ensure you’ve installed the extension
        2. Log in to Google Docs (this is where the data goes)
        3. Open the URL: http://www.inc.com/inc5000/list
        4. Highlight the first line
        5. Right-click and select “Scrape Similar”
        6. Verify the data in the window that pops up
        7. Click the “Export to Google Docs…” button
        8. Voila!
    21. Notes On Scraper
        • Only works with data in a tabular format
        • Only exports to Google Docs
        • Works on one page at a time
        • Suggestion: keep the scraping window open, go to the next page, click “Scrape” again.
    22. #2: Using Python to Scrape Pages
        • BeautifulSoup: a toolkit for dissecting a document and extracting what you need.
          • Automatically converts incoming documents to Unicode and outgoing documents to UTF-8.
          • Sits on top of popular Python parsers like lxml and html5lib.
        • Examples: http://www.crummy.com/software/BeautifulSoup/bs4/doc/
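A tiny BeautifulSoup 4 example of "dissecting a document and extracting what you need" (requires `pip install beautifulsoup4`); the HTML snippet is invented, not LinkedIn's real markup:

```python
# Parse a small HTML fragment with BeautifulSoup 4 and pull out two pieces
# of text. "html.parser" is Python's built-in parser; lxml or html5lib can
# be swapped in if installed.
from bs4 import BeautifulSoup

html = """
<div class="basic-info">
  <h4>Specialties</h4>
  <p>Data engineering, analytics, dashboards</p>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
heading = soup.find("h4").get_text()
specialties = soup.find("p").get_text()
print(heading, "->", specialties)
```

The same `find`/`get_text` calls work unchanged on a full page fetched with an opener; you just pass the downloaded HTML to `BeautifulSoup` instead of a literal string.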
    23. Scraping LinkedIn Company Pages – Pseudocode
        1. Import your libraries
        2. Take a LinkedIn URL as input
        3. Build an opener
        4. Create the soup using BS4
        5. Extract the company description and specialties
        6. Clean up the rest of the data
        7. Extract the website, type, founded, industry, and company size if they exist; otherwise set them to “N/A”
        8. Output to CSV
        9. Sleep some random number of seconds & milliseconds
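Steps 7–9 of the pseudocode can be sketched with the standard library alone; the talk's full implementation is in the linked repo. The `scraped` dict stands in for values already pulled out of the soup, and the field list comes straight from step 7.

```python
# Sketch of the last pseudocode steps: default missing fields to "N/A",
# write a CSV, then sleep a random interval. The scraped dict is a stand-in
# for data extracted from the page, not real LinkedIn output.
import csv
import random
import time

FIELDS = ["website", "type", "founded", "industry", "company size"]

def to_row(scraped):
    """Step 7: use each field if it exists, otherwise set it to 'N/A'."""
    return [scraped.get(field, "N/A") for field in FIELDS]

def write_rows(path, rows):
    """Step 8: output to CSV with a header row."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(FIELDS)
        writer.writerows(rows)

scraped = {"website": "http://example.com", "industry": "Software"}
row = to_row(scraped)
print(row)  # ['http://example.com', 'N/A', 'N/A', 'Software', 'N/A']
write_rows("companies.csv", [row])

# Step 9: sleep a random number of seconds (kept short for the sketch).
time.sleep(random.uniform(0.0, 0.1))
```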
    24. Get The Code: https://github.com/rdempsey/dwdc
    25. Contacting Rob
        • robertonrails@gmail.com
        • Twitter: rdempsey
        • LinkedIn: robertwdempsey