What is scrapy_model?
It is a helper for creating scrapers with Scrapy Selectors, letting you select elements by CSS or by
XPath and structure your scraper via Models (just like an ORM model); it is also pluggable to an ORM model.
Import BaseFetcherModel and CSSField or XPathField (you can use both):

from scrapy_model import BaseFetcherModel, CSSField, XPathField
Go to a webpage you want to scrape and use Chrome DevTools or Firebug to figure out the CSS paths. Then,
supposing you want to get the following fragment from some page:
<span id="person">Bruno Rocha <a href="http://brunorocha.org">website</a></span>
class MyFetcher(BaseFetcherModel):
    name = CSSField('span#person')
    website = CSSField('span#person a')
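The idea behind the Model declaration can be sketched in plain Python, independent of scrapy_model itself. The sketch below is illustrative only (the class names mirror the library's, but this is not its actual code): a field object simply holds a query, and the base class collects every declared field into a name-to-query mapping.

```python
# Illustrative sketch (NOT scrapy_model's real implementation): how a model
# class body can collect Field declarations into a name -> query mapping.

class CSSField:
    """Stand-in for the real field: it just holds a CSS query string."""
    def __init__(self, query):
        self.query = query

class BaseFetcherModel:
    @classmethod
    def fields(cls):
        # Collect every CSSField declared on the class body.
        return {name: attr.query
                for name, attr in vars(cls).items()
                if isinstance(attr, CSSField)}

class PersonFetcher(BaseFetcherModel):
    name = CSSField('span#person')
    website = CSSField('span#person a')

print(PersonFetcher.fields())
# {'name': 'span#person', 'website': 'span#person a'}
```

This declaration style is what lets the library iterate over all fields generically when fetching a page.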
Multiple queries in a single field
You can use multiple queries for a single field
name = XPathField(
    ['//*[@id="main"]/div[2]/ul',     # these queries are only illustrative
     '//*[@id="content"]/div[3]/ul']
)
In that case, parsing will try the first query and return the result if it finds a match; otherwise it will try the
subsequent queries until one of them finds something, or it will return an empty selector.
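The fallback behaviour described above can be sketched in plain Python, with a stand-in query runner instead of a real Scrapy selector (the `first_match` helper and the fake `page` mapping are my own illustration, not library API):

```python
# Illustrative sketch of the fallback behaviour: try each query in order
# and return the first non-empty match, else an empty result.

def first_match(queries, run_query):
    """run_query(query) -> list of matches (empty list when nothing found)."""
    for query in queries:
        result = run_query(query)
        if result:          # a match was found, stop here
            return result
    return []               # no query matched: the "empty selector" case

# Fake query runner standing in for a Scrapy selector:
page = {'//good/query': ['Bruno Rocha']}
print(first_match(['//bad/query', '//good/query'],
                  lambda q: page.get(q, [])))
# ['Bruno Rocha']
```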
Finding the best match by a query validator
If you want to run multiple queries and also validate the best match, you can pass a validator function, which
takes the scrapy selector and should return a boolean.
For example, imagine the "name" field defined above, and you want to validate each query to ensure it has a 'li' with
the text "Schblaums" in it.
def check_name_validator(selector):
    for li in selector.css('li'):              # takes each <li> inside the selector
        li_text = li.css('::text').extract()   # extract only the text
        if "Schblaums" in li_text:             # check if "Schblaums" is there
            return True                        # this selector is valid!
    return False  # invalid query, take the next one or the default value
name = XPathField(
    ['//*[@id="main"]/div[2]/ul',     # illustrative queries
     '//*[@id="content"]/div[3]/ul'],
    validator=check_name_validator    # the validator function defined above
)
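The "best match" selection can also be sketched without Scrapy: run every query, keep the first result the validator accepts, and fall back to an empty result. The `best_match` helper, the fake `page` mapping, and the validator below are my own illustration, not library code:

```python
# Illustrative sketch: run each query in order and return the first
# result the validator accepts; fall back to an empty result otherwise.

def best_match(queries, run_query, validator):
    for query in queries:
        result = run_query(query)
        if result and validator(result):
            return result
    return []

def has_schblaums(items):
    # Mirrors the validator in the text: accept only lists containing "Schblaums".
    return "Schblaums" in items

page = {
    '//first/ul': ['Eggs', 'Spam'],        # matches, but fails validation
    '//second/ul': ['Schblaums', 'Spam'],  # matches and passes validation
}
print(best_match(['//first/ul', '//second/ul'],
                 lambda q: page.get(q, []), has_schblaums))
# ['Schblaums', 'Spam']
```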
Every method named parse_<field> will run after all the fields are fetched, receiving the selector fetched for that field.
def parse_name(self, selector):
    # here selector is the scrapy selector for 'span#person'
    name = selector.css('::text').extract()
    return name

def parse_website(self, selector):
    # here selector is the scrapy selector for 'span#person a'
    website_url = selector.css('::attr(href)').extract()
    return website_url
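The parse_<field> hook mechanism can be sketched with plain getattr dispatch. This is only a guess at the general pattern, not scrapy_model's actual code; the class names and the `parse_all` method are mine:

```python
# Illustrative sketch of the parse_<field> hook: after fetching, look for
# a method named parse_<field> and let it post-process the raw value.

class Fetcher:
    def __init__(self, raw):
        self.raw = raw       # field name -> raw fetched value
        self.data = {}

    def parse_all(self):
        for field, value in self.raw.items():
            hook = getattr(self, 'parse_%s' % field, None)
            # If a parse_<field> method exists, its return value wins.
            self.data[field] = hook(value) if hook else value

class DemoFetcher(Fetcher):
    def parse_name(self, value):
        return value.strip().title()   # post-process only the name field

f = DemoFetcher({'name': '  bruno rocha ',
                 'website': 'http://brunorocha.org'})
f.parse_all()
print(f.data)
# {'name': 'Bruno Rocha', 'website': 'http://brunorocha.org'}
```

Fields without a matching parse_<field> method (like website here) keep their raw value.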
Once the model is defined, you need to run the scraper:

fetcher = MyFetcher(url='http://.....')  # optionally you can use cached_fetch=True to cache requests on redis
fetcher.parse()
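The cached_fetch idea can be sketched with a simple in-memory memo keyed by URL; the real library caches on Redis, and the `fetch` function, `_cache` dict, and `calls` list below are my own stand-ins:

```python
# Illustrative sketch of cached_fetch: memoize page bodies by URL so
# repeated fetches skip the "network" (the real library uses Redis).

_cache = {}
calls = []   # record of simulated network requests

def fetch(url, cached_fetch=False):
    if cached_fetch and url in _cache:
        return _cache[url]           # cache hit: no request made
    calls.append(url)                # pretend this is the real HTTP request
    body = '<html>body of %s</html>' % url
    if cached_fetch:
        _cache[url] = body
    return body

fetch('http://example.com', cached_fetch=True)
fetch('http://example.com', cached_fetch=True)
print(len(calls))
# 1  -> the second call was served from the cache
```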