Seminar on Crawlers

  1. WEB CRAWLERS
     Siddharth Shankar
  2. Resource finding
     Finding info on the web:
     - Surfing
     - Searching
     - Crawling
     Uses of crawling:
     - Find stuff
     - Gather stuff
     - Check stuff
  3. Crawling and Crawlers
  4. WEB CRAWLERS
     - Also known as web spiders and web robots.
     - Less used names: ants, bots, and worms.
     - A program or automated script which browses the World Wide Web in a methodical, automated manner.
     - The process or program used by search engines to download pages from the web for later processing by a search engine that will index the downloaded pages to provide fast searches.
  5. WHY CRAWLERS?
     - The Internet holds a vast expanse of information.
     - Finding relevant information requires an efficient mechanism.
     - Web crawlers provide that mechanism to the search engine.

  6. How does a web crawler work?
     It starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in each page and adds them to the list of URLs still to visit, called the crawl frontier.
     URLs from the frontier are recursively visited according to a set of policies.
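     A minimal sketch of this seed-and-frontier loop in Python, using only the standard library; the function and class names and the page limit are illustrative, not from the deck:

        from collections import deque
        from html.parser import HTMLParser
        from urllib.parse import urljoin
        from urllib.request import urlopen

        class LinkExtractor(HTMLParser):
            """Collects href values from <a> tags."""
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(value)

        def crawl(seeds, max_pages=10):
            frontier = deque(seeds)       # URLs still to visit (the crawl frontier)
            visited = set()               # URLs already fetched
            while frontier and len(visited) < max_pages:
                url = frontier.popleft()  # breadth-first order
                if url in visited:
                    continue
                visited.add(url)
                try:
                    html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
                except Exception:
                    continue              # skip unreachable or broken pages
                parser = LinkExtractor()
                parser.feed(html)
                for link in parser.links:
                    absolute = urljoin(url, link)   # resolve relative links
                    if absolute.startswith("http"):
                        frontier.append(absolute)
            return visited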
  7. How does a web crawler work?
  8. Prerequisites of a Crawling System
     - Flexibility: the system should be suitable for various scenarios.
     - High performance (scalability): the system needs to scale from a minimum of one thousand pages per second up to millions of pages.
     - Fault tolerance: the system must process invalid HTML code, deal with unexpected web server behavior, and handle stopped processes or interruptions in network services.

  9. Prerequisites of a Crawling System (contd.)
     - Maintainability and configurability: an appropriate interface is necessary for monitoring the crawling process, including:
       - Download speed
       - Statistics on the pages
       - Amount of data stored
 10. Crawling Strategies
     - Breadth-first crawling: launched from an initial set of pages by following hypertext links leading to pages directly connected with this initial set.
     - Repetitive crawling: once pages have been crawled, some systems repeat the process periodically so that indexes are kept up to date.
     - Targeted crawling: specialized search engines use heuristics in the crawling process in order to target a certain type of page.

 11. Crawling Strategies (contd.)
     - Random walks and sampling: random walks on web graphs via sampling are used to estimate the size of documents online.
     - Deep Web crawling: a lot of the data accessible via the Web is currently contained in databases and may only be downloaded through appropriate requests or forms. The Deep Web is the name given to the part of the Web containing this category of data.
 12. Crawling Policies
     - Selection policy: states which pages to download.
     - Re-visit policy: states when to check for changes to the pages.
     - Politeness policy: states how to avoid overloading websites.
     - Parallelization policy: states how to coordinate distributed web crawlers.
 13. Selection Policy
     - Search engines cover only a fraction of the Internet.
     - This makes it necessary to download the most relevant pages, hence a good selection policy is very important.
     - Common selection policies:
       - Restricting followed links
       - Path-ascending crawling
       - Focused crawling
       - Crawling the Deep Web
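     One way to restrict followed links is to filter URLs before they enter the frontier. A small sketch; the suffix list and allowed domain are illustrative assumptions:

        from urllib.parse import urlsplit

        SKIP_SUFFIXES = (".jpg", ".png", ".gif", ".pdf", ".zip", ".mp4")
        ALLOWED_DOMAIN = "example.com"   # e.g., for a site-local crawl

        def should_follow(url: str) -> bool:
            """Keep only links that look like HTML pages on the allowed domain."""
            parts = urlsplit(url)
            if parts.scheme not in ("http", "https"):
                return False
            if parts.path.lower().endswith(SKIP_SUFFIXES):
                return False     # likely binary content rather than a page
            return parts.netloc.endswith(ALLOWED_DOMAIN)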
 14. Re-Visit Policy
     - The Web is dynamic, and crawling takes a long time.
     - Cost factors play an important role in crawling.
     - Freshness and age are the commonly used cost functions.
     - The objective of the crawler is a high average freshness and a low average age of web pages.
     - Two re-visit policies:
       - Uniform policy
       - Proportional policy
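     The slide names freshness and age without defining them; the commonly used definitions (due to Cho and Garcia-Molina) for a page p in the local collection at time t are:

        F_p(t) = 1 if the local copy of p equals the live copy at time t, else 0
        A_p(t) = 0 if the local copy of p is up to date at time t, else t minus the last modification time of the live page

     Under the uniform policy all pages are re-visited with the same frequency; under the proportional policy, pages that change more often are re-visited more often.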
 15. Politeness Policy
     - Crawlers can have a crippling impact on the overall performance of a site.
     - The costs of using web crawlers include:
       - Network resources
       - Server overload
       - Server/router crashes
       - Network and server disruption
     - A partial solution to these problems is the robots exclusion protocol.
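     The robots exclusion protocol can be honored with Python's standard-library parser; the user-agent string and URLs below are illustrative:

        from urllib.robotparser import RobotFileParser

        robots = RobotFileParser()
        robots.set_url("https://example.com/robots.txt")
        robots.read()    # fetch and parse the site's robots.txt

        if robots.can_fetch("MyCrawler/1.0", "https://example.com/some/page.html"):
            pass         # allowed: polite to download this page
        delay = robots.crawl_delay("MyCrawler/1.0")  # per-site delay, if declared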
 16. Parallelization Policy
     - The crawler runs multiple processes in parallel.
     - The goals are:
       - To maximize the download rate
       - To minimize the overhead from parallelization
       - To avoid repeated downloads of the same page
     - The crawling system requires a policy for assigning the new URLs discovered during the crawling process.
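     A sketch of these goals with a thread pool; the worker count is an illustrative assumption. A shared set of seen URLs prevents repeated downloads of the same page:

        from concurrent.futures import ThreadPoolExecutor
        from threading import Lock
        from urllib.request import urlopen

        seen = set()
        seen_lock = Lock()

        def fetch(url):
            with seen_lock:
                if url in seen:      # avoid downloading the same page twice
                    return None
                seen.add(url)
            try:
                return urlopen(url, timeout=5).read()
            except Exception:
                return None

        def fetch_all(urls):
            # keeping several downloads in flight maximizes the download rate
            with ThreadPoolExecutor(max_workers=8) as pool:
                return list(pool.map(fetch, urls))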
 17. DISTRIBUTED WEB CRAWLING
     - A distributed computing technique whereby search engines employ many computers to index the Internet via web crawling.
     - The idea is to spread the required computation and bandwidth across many computers and networks.
     - Types of distributed web crawling:
       1. Dynamic assignment
       2. Static assignment

 18. DYNAMIC ASSIGNMENT
     - A central server assigns new URLs to the different crawlers dynamically, which allows it to balance the load of each crawler.
     - Configurations of crawling architectures with dynamic assignment:
       - A small crawler configuration, with a central DNS resolver, central queues per website, and distributed downloaders.
       - A large crawler configuration, in which the DNS resolver and the queues are also distributed.

 19. STATIC ASSIGNMENT
     - Here a fixed rule, stated from the beginning of the crawl, defines how to assign new URLs to the crawlers.
     - A hashing function can be used to transform URLs into a number that corresponds to the index of the corresponding crawling process.
     - To reduce the overhead of exchanging URLs between crawling processes when links cross from one website to another, the exchange should be done in batches.
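     A sketch of such a hashing rule; hashing the host name rather than the full URL keeps each website on a single crawler, and the pool size is an illustrative assumption:

        import hashlib
        from urllib.parse import urlsplit

        NUM_CRAWLERS = 4    # assumed size of the crawler pool

        def assign_crawler(url: str) -> int:
            """Map a URL to the index of the crawling process responsible for it."""
            host = urlsplit(url).netloc.lower()
            digest = hashlib.md5(host.encode()).digest()
            return int.from_bytes(digest[:4], "big") % NUM_CRAWLERS

        # all pages of a site land on the same crawler
        assert assign_crawler("http://example.com/a") == assign_crawler("http://example.com/b")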
 20. FOCUSED CRAWLING
     - Focused crawling was first introduced by Chakrabarti.
     - A focused crawler ideally downloads only web pages that are relevant to a particular topic and avoids downloading all others.
     - It assumes that some labeled examples of relevant and not-relevant pages are available.
 21. STRATEGIES OF FOCUSED CRAWLING
     - A focused crawler predicts the probability that a link leads to a relevant page before actually downloading the page. A possible predictor is the anchor text of links.
     - In another approach, the relevance of a page is determined after downloading its content. Relevant pages are sent to content indexing and their contained URLs are added to the crawl frontier; pages that fall below a relevance threshold are discarded.
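     The second strategy can be sketched as a score-and-threshold check on downloaded text; the keyword set and threshold here are illustrative assumptions, not from the deck:

        TOPIC_KEYWORDS = {"crawler", "spider", "indexing", "search"}
        THRESHOLD = 0.02    # minimum fraction of on-topic words

        def relevance(text: str) -> float:
            """Fraction of words in the page that match the topic vocabulary."""
            words = text.lower().split()
            if not words:
                return 0.0
            hits = sum(1 for w in words if w.strip(".,;:!?") in TOPIC_KEYWORDS)
            return hits / len(words)

        def should_expand(text: str) -> bool:
            """Index the page and enqueue its links only if it clears the threshold."""
            return relevance(text) >= THRESHOLD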
 22. EXAMPLES
     - Yahoo! Slurp: the Yahoo! Search crawler.
     - Msnbot: Microsoft's Bing web crawler.
     - Googlebot: Google's web crawler.
     - WebCrawler: used to build the first publicly available full-text index of a subset of the Web.
     - World Wide Web Worm: used to build a simple index of document titles and URLs.
     - WebFountain: a distributed, modular crawler written in C++.
     - Slug: a semantic web crawler.
 23. CONCLUSION
     - Web crawlers are an important aspect of search engines.
     - High-performance web crawling processes are basic components of various web services.
     - It is not a trivial matter to set up such systems:
       1. The data manipulated by these crawlers cover a wide area.
       2. It is crucial to preserve a good balance between random-access memory and disk accesses.
 24. THANK YOU
