Why would you even need to do this? Why not just get data through a browser?
Some use cases
• Reason 1: It just takes too dam* long to manually search/get data on a web interface
• Reason 2: Workflow integration
• Reason 3: Your work is reproducible and transparent if done from R instead of clicking buttons on the web
Scraping E.g. 1: XML
We can process the XML ourselves using a bunch of lines of code…
…OR just use a package someone already created - rfishbase
And you get this nice plot
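A hedged sketch of the package route (rfishbase has been rewritten since these slides, so species() here comes from the current release and the species name is just an example):
install.packages("rfishbase") # if not installed already
library(rfishbase)
dat <- species("Oreochromis niloticus") # one call fetches the FishBase record, no XML wrangling
head(dat)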
Practice…XML and JSON formats
Data from the USA National Phenology Network
install.packages(c("RCurl", "XML", "RJSONIO")) # if not installed already
require(RCurl); require(XML); require(RJSONIO)
XML Format
# Build the request URL (observations for one species at two stations in 2009)
xmlurl <- 'http://www-dev.usanpn.org/npn_portal/observations/getObservationsForSpeciesIndividualAtLocation.xml?year=2009&station_ids[0]=4881&station_ids[1]=4882&species_id=3'
xmlout <- getURLContent(xmlurl, curl = getCurlHandle()) # fetch the raw XML as text
xmlTreeParse(xmlout)[[1]][[1]] # parse it and peek at the first node
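If you would rather work with a plain R list than walk the XML tree by hand, the XML package's xmlToList() is one option (a sketch; the node names inside the NPN response are not reproduced here):
xmldoc <- xmlParse(xmlout)   # parse the raw text into an XML document
xmllist <- xmlToList(xmldoc) # convert the node tree to a nested list
length(xmllist)              # how many top-level nodes came back?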
JSON Format
# Same query, but asking the API for JSON instead of XML
jsonurl <- 'http://www-dev.usanpn.org/npn_portal/observations/getObservationsForSpeciesIndividualAtLocation.json?year=2009&station_ids[0]=4881&station_ids[1]=4882&species_id=3'
jsonout <- getURLContent(jsonurl, curl = getCurlHandle()) # fetch the raw JSON as text
fromJSON(jsonout) # parse the JSON string into an R list
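The parsed JSON comes back as a nested R list; it is worth inspecting before reshaping (a sketch: the element names depend on the NPN response, so "date" below is a hypothetical field name):
jsonlist <- fromJSON(jsonout)
str(jsonlist, max.level = 2) # see the overall structure
# e.g. pull one field out of every record (only if each record actually has it)
# dates <- sapply(jsonlist, function(x) x[["date"]])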
Practice…scraping HTML
install.packages(c("XML", "RCurl")) # if not already installed
require(XML); require(RCurl)
# Let's look at the raw HTML first
rawhtml <- getURLContent('http://www.ism.ws/ISMReport/content.cfm?ItemNumber=10752')
rawhtml
# Scrape the data tables from the website
rawPMI <- readHTMLTable('http://www.ism.ws/ISMReport/content.cfm?ItemNumber=10752')
rawPMI
PMI <- data.frame(rawPMI[[1]]) # keep the first table as a data frame
names(PMI)[1] <- 'Year'        # give the first column a sensible name
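A short follow-up sketch (the layout of the ISM table is an assumption, so inspect PMI before converting anything):
str(PMI) # readHTMLTable typically returns columns as factors/characters
PMI$Year <- as.numeric(as.character(PMI$Year)) # convert only after checking the column really holds years
head(PMI)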
APIs (application programming interfaces)
• Many data sources have APIs, largely for talking to other web interfaces
– We can use their API from R
• An API consists of a set of methods to search, retrieve, or submit data to a data source/repository
• One can write R code to interface with an API
– Keep in mind some APIs require authentication keys
API Documentation
• API docs for the Integrated Taxonomic Information System (ITIS): http://www.itis.gov/ws_description.html
• Example request:
http://www.itis.gov/ITISWebService/services/ITISService/searchByScientificName?srchKey=Tardigrada
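A hedged sketch of calling that ITIS method from R with the same RCurl/XML tools used above (it only fetches and parses the response; the node names inside it are not shown here):
itisurl <- 'http://www.itis.gov/ITISWebService/services/ITISService/searchByScientificName?srchKey=Tardigrada'
itisout <- getURLContent(itisurl, curl = getCurlHandle()) # fetch the raw XML response
itisdoc <- xmlParse(itisout)                              # parse it
xmlRoot(itisdoc)                                          # inspect the root node to see what came back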
rOpenSci suite of R packages
• There are many packages on CRAN for specific data sources on the web – search CRAN to find these
• rOpenSci is developing packages for as many open-source data sources as possible
– Please use them and give feedback…
– Covering data, literature, and metadata
• http://ropensci.org/ , code at GitHub
Why even think about doing this?
• Again, workflow integration
• It’s just easier to call program X from R if you are going to run many analyses with said program
E.g. 1: Phylometa
…using the files in the Dropbox
Also, get Phylometa here: http://lajeunesse.myweb.usf.edu/publications.html
• On a Mac: this doesn’t work, because Phylometa is a Windows .exe
– But system() can often be used to run other external programs
• On Windows:
system('"new_phyloMeta_1.2b.exe" Aerts2006JEcol_tree.txt Aerts2006JEcol_data.txt', intern = TRUE)
NOTE: intern = TRUE returns the output to the R console
Should give you something like this
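A small follow-on sketch: because intern = TRUE returns a character vector, you can keep Phylometa’s output in an object and reuse it (file names as in the slide; adjust the paths to wherever you saved them):
out <- system('"new_phyloMeta_1.2b.exe" Aerts2006JEcol_tree.txt Aerts2006JEcol_data.txt', intern = TRUE)
head(out)                               # first few lines of the printed output
writeLines(out, 'phylometa_output.txt') # or write it to a file for later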
Resources
• rOpenSci (development of R packages for all open source data and literature)
• CRAN packages (search for a data source)
• Tutorials/websites:
– http://www.programmingr.com/content/webscraping-using-readlines-and-rcurl
• Non-R based, but cool: http://ecologicaldata.org/