AJAX Crawl
Published at: ICDE09 (Technology)
Transcript

  • 1. AJAX Crawl: Making AJAX Applications Searchable. Cristian Duda et al., ICDE09
  • 2. Outline  Introduction  Modeling AJAX  AJAX Crawling  Architecture of A Search Engine  Experimental Results & Conclusions
  • 3. What is AJAX?  Asynchronous JavaScript and XML
  • 4. What is AJAX?  Asynchronous JavaScript and XML  AJAX applications  Google Mail, Yahoo! Mail, Google Maps.  The URL does not change as the application state changes.
  • 5. Why Do Search Engines Ignore AJAX Content?  No caching/pre-crawl  Events cannot be cached.  Duplicate states  Several events can lead to the same state.  Very granular events  They lead to a large set of very similar states.  Infinite event invocation.
  • 6. Now  Introduction  Modeling AJAX  Event Model  AJAX Page Model  AJAX Web Sites Model  AJAX Crawling  ….
  • 7. Event Model  When JavaScript is used, the application reacts to user events: click, doubleClick, mouseover, etc.  Event structure in JavaScript:
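The slide's figure of the JavaScript event structure is missing from the transcript. A minimal sketch of the idea, with illustrative names of our own (the paper does not show this exact code): an event bundles a source element, an event type, and a handler whose invocation mutates application state.

```javascript
// Illustrative sketch, not code from the paper: a JavaScript event as the
// crawler sees it -- a source element, an event type, and a handler function.
const state = { comments: [] };

// Hypothetical handler, like one attached via onclick="showComments(2)".
function showComments(page) {
  state.comments = ["comment page " + page];  // mutates the DOM in a real app
}

// The event structure: which element, which event type, which handler + args.
const event = { source: "#nextPageLink", type: "click",
                handler: showComments, args: [2] };

// Triggering the event programmatically, as a crawler would:
event.handler(...event.args);
console.log(state.comments[0]);  // "comment page 2"
```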
  • 8. AJAX Page Model  An AJAX application =  a simple page identified by a URL +  a series of states, events, and transitions  Page model: a view of all states in a page (e.g., all comment pages).  In particular, it is an automaton, a Transition Graph, which contains all application entities (states, events, transitions).
  • 9. Model of An AJAX Web Page
  • 10. Model of An AJAX Web Page  Nodes: application states. An application state is represented as a DOM tree.  Edges: transitions between states. A transition is triggered by an event activated on the source element and applied to one or more target elements, whose properties change through an action.  (Figure: Annotation for the Transition Graph of an AJAX application.)
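The Transition Graph can be sketched as a plain data structure; the field names below are ours, not the paper's:

```javascript
// A minimal sketch of the Transition Graph: nodes are application states
// (DOM snapshots), edges are event-triggered transitions between them.
const transitionGraph = {
  states: new Map(),   // stateId -> serialized DOM tree
  transitions: [],     // { from, to, event, source, action }
};

transitionGraph.states.set("s0", "<div id='c'>comments page 1</div>");
transitionGraph.states.set("s1", "<div id='c'>comments page 2</div>");
transitionGraph.transitions.push({
  from: "s0", to: "s1",
  event: "click", source: "#nextPageLink",  // event activated on source element
  action: "replace contents of #c",          // property change on the target
});

console.log(transitionGraph.states.size);  // 2
```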
  • 11. AJAX Web Sites Model
  • 12. Now  Introduction  Modeling AJAX  AJAX Crawling  Architecture of A Search Engine  Experimental Results & Conclusions
  • 13. Crawling Algorithm  Build the model of AJAX Web Site.  Focus on how to build the AJAX Page Model. (i.e., for YouTube, indexing all comment pages of a video).
  • 14. A Basic AJAX Crawling Algorithm  First Step: Read the initial DOM of the document at a given URI (line 2).
  • 15. A Basic AJAX Crawling Algorithm  Next Step: This step is AJAX-specific and consists of running the onLoad event of the body tag in the HTML document (line 3). All JavaScript-enabled browsers invoke this function first when a page loads.
  • 16. A Basic AJAX Crawling Algorithm  Crawling starts after this initial state has been constructed (line 5). The algorithm performs a breadth-first crawl, i.e., it triggers every event in the page and invokes the corresponding JavaScript function.
  • 17. A Basic AJAX Crawling Algorithm Whenever the DOM changes, a new state is created (line 11) and the corresponding transition is added to the application model (line 16).
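The algorithm listing the slides reference by line number is not in the transcript. The steps above can be sketched as a breadth-first traversal over a mock application (our simplified reconstruction, with the app modeled as a table of states, each listing the events it offers and the states they lead to):

```javascript
// Hypothetical 3-state AJAX page: each state exposes events and their targets.
const app = {
  s0: { dom: "page 1", events: [["click#next", "s1"]] },
  s1: { dom: "page 2", events: [["click#next", "s2"], ["click#prev", "s0"]] },
  s2: { dom: "page 3", events: [["click#prev", "s1"]] },
};

function crawl(app, initial) {
  const model = { states: new Set([initial]), transitions: [] };
  const queue = [initial];                 // breadth-first frontier
  while (queue.length > 0) {
    const s = queue.shift();
    for (const [ev, target] of app[s].events) {  // trigger every event
      model.transitions.push({ from: s, event: ev, to: target });
      if (!model.states.has(target)) {           // changed DOM => new state
        model.states.add(target);                // record it ...
        queue.push(target);                      // ... and crawl it later
      }
    }
  }
  return model;
}

const model = crawl(app, "s0");
console.log(model.states.size);  // 3
```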
  • 18. Problem With The Basic Algorithm  The network time needed to fetch pages.  In AJAX crawling, multiple individual events per page each lead to fetching content over the network.  The traditional remedy, pre-caching the Web and crawling locally, relies on two pages being identifiable as identical by their URL alone, which does not hold for AJAX.
  • 19. A Heuristic Crawling Policy For AJAX Applications  We observe that AJAX pages typically have a stable structure (e.g., a menu present in all states) and a dynamic part.  The heuristic exploits this to identify that two states are the same without fetching their content.
  • 20. JavaScript Invocation Graph  The heuristic we use is based on the runtime analysis of the JavaScript invocation graph.
  • 21. JavaScript Invocation Graph  (Figure: Events and functionality in the JavaScript invocation graph.)  Nodes: JavaScript functions.  The functionality of an AJAX page is expressed through events.
  • 22. JavaScript Invocation Graph  (Figure: Functions in the JavaScript invocation graph of a YouTube page.)  Functions in the JavaScript code can be invoked either directly by event triggers (event invocations) or indirectly by other functions (local invocations).
  • 23. The dependencies in the code are listed below:
  • 24. JavaScript Invocation Graph  Hot Node: a function that fetches content from the server.  Hot Call: a call to a Hot Node.  On the YouTube page, a single function fetches content from the server, i.e., getURLXMLResponseAndFillDiv.
  • 25. In AJAX, the same function can be invoked to fetch the same content from the server from different comment pages. Our approach detects this situation and avoids invoking the same function twice.
  • 26. How to solve it?  We solve the problem of caching in AJAX applications and detecting duplicate states by identifying and reusing the results of server calls.  Just as in traditional I/O analysis in databases, we aim to minimize the number of the most expensive operations, i.e., the Hot Calls: invocations that generate AJAX calls to the server.
  • 27. Optimized Crawling Algorithm
  • 28. Optimized Crawling Algorithm  Step 1: Identifying Hot Nodes.  The crawler tags the Hot Nodes, i.e., the functions that directly contain AJAX calls (line 34).
  • 29. Optimized Crawling Algorithm  Step 2: Building the Hot Node Cache.  The crawler builds a table containing all Hot Node invocations, the actual parameters used in each call, and the results returned by the server (lines 34-53). This step uses the current runtime stack trace.
  • 30. Optimized Crawling Algorithm  Step 3: Intercepting Hot Node Calls.  The crawler adopts the following policy: 1. Intercept all invocations of Hot Nodes (functions) and their actual parameters (line 34). 2. Look up each function call in the Hot Node Cache (lines 37-39). 3. If a match is found (a Hot Node with the same parameters), do not invoke the AJAX call; reuse the existing content instead (line 41).
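The three steps amount to memoizing the content-fetching function by its actual parameters. A sketch, using the getURLXMLResponseAndFillDiv name the slides mention as the Hot Node; the fetch stub and cache structure are our own illustrations:

```javascript
// A sketch of the Hot Node Cache: repeated AJAX calls with the same
// arguments reuse the cached server response instead of hitting the network.
let serverCalls = 0;
function fetchFromServer(url) {  // stands in for the real XMLHttpRequest
  serverCalls += 1;
  return "response for " + url;
}

const hotNodeCache = new Map();  // key: function name + parameters -> result

function getURLXMLResponseAndFillDiv(url) {  // the Hot Node, intercepted
  const key = "getURLXMLResponseAndFillDiv(" + url + ")";
  if (hotNodeCache.has(key)) {   // same Hot Node, same parameters:
    return hotNodeCache.get(key);  // skip the AJAX call, reuse the content
  }
  const result = fetchFromServer(url);
  hotNodeCache.set(key, result);
  return result;
}

getURLXMLResponseAndFillDiv("/comments?page=2");
getURLXMLResponseAndFillDiv("/comments?page=2");  // served from the cache
console.log(serverCalls);  // 1
```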
  • 31. Simplifying Assumptions  Snapshot Isolation  An application does not change during crawling.  No Forms  We do not deal with AJAX parts that require the user to input data in forms, such as Google Suggest.  No Update Events  We avoid triggering update events, such as Delete buttons.  No Image-based Retrieval
  • 32. Now  Introduction  Modeling AJAX  AJAX Crawling  Architecture of A Search Engine  Experimental Results
  • 33. The Components  AJAX Crawler  Indexing  Query Processing  Result Aggregation
  • 34. Indexing  Starts from the model of the AJAX site and builds the physical inverted file.  As opposed to the traditional approach, a result is a URI plus a state.
  • 35. Processing Simple Keyword Queries  Each query returns the URI and the state(s) which contain the keywords.  Ranked by the score.
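A sketch of such an inverted file, with postings keyed by (URI, state) pairs rather than plain URLs. The scoring here is a stand-in (raw term count); the paper does not specify its ranking function:

```javascript
// Inverted file: keyword -> Map of "uri#state" -> score (term count here).
const invertedFile = new Map();

function indexState(uri, state, text) {
  for (const word of text.toLowerCase().split(/\s+/)) {
    if (!invertedFile.has(word)) invertedFile.set(word, new Map());
    const postings = invertedFile.get(word);
    const key = uri + "#" + state;
    postings.set(key, (postings.get(key) || 0) + 1);
  }
}

// Index two states of the same (hypothetical) video page.
indexState("http://example.com/v?id=1", "s0", "great song");
indexState("http://example.com/v?id=1", "s1", "great singer great voice");

// A keyword query returns (URI, state) results ranked by score.
function query(keyword) {
  const postings = invertedFile.get(keyword.toLowerCase()) || new Map();
  return [...postings.entries()]
    .map(([key, score]) => ({ key, score }))
    .sort((a, b) => b.score - a.score);
}

const hits = query("great");
console.log(hits[0].key);  // the state where "great" scores highest
```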
  • 36. Processing Conjunctions  Query: Morcheeba singer
  • 37. Processing Conjunctions  Conjunctions are computed as a merge between the individual posting lists of the corresponding keywords.  Two entries match if their URIs are compatible and their states are identical.
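The merge can be sketched as follows; the posting lists are illustrative, not real YouTube data:

```javascript
// Posting lists for two keywords; each entry is a (URI, state) pair.
const postings = {
  morcheeba: [{ uri: "u1", state: "s0" }, { uri: "u1", state: "s2" }],
  singer:    [{ uri: "u1", state: "s2" }, { uri: "u2", state: "s0" }],
};

// Conjunction: an entry survives only if both URI and state appear in the
// other keyword's posting list as well.
function conjunction(listA, listB) {
  const seen = new Set(listA.map(p => p.uri + "#" + p.state));
  return listB.filter(p => seen.has(p.uri + "#" + p.state));
}

const result = conjunction(postings.morcheeba, postings.singer);
console.log(result.length);  // 1 -- only (u1, s2) contains both keywords
```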
  • 38. Parallelization  Crawling AJAX faces the difficulty that dynamic Web content cannot truly be cached, so network connections must continuously be created.
  • 39. Parallelization A precrawler is used to build the traditional, linked-based Web site structure. The total list of URLs of AJAX Web pages is then partitioned and supplied to a set of parallel crawlers.
  • 40. Parallelization Each crawler applies the crawling algorithm and builds for each crawled page the AJAX Model.
  • 41. Parallelization More indexes are then built from the disjunct sets of AJAX Models.
  • 42. Parallelization Query processing is then performed by query shipping, computing the results from each Index, and then performing a merge of the individual results from each index, returning the final list to the client.
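The parallel pipeline above (partition the precrawled URL list, build one index per crawler, ship queries to every index, merge the partial results) can be sketched as follows; all names and the round-robin partitioning choice are illustrative:

```javascript
// Precrawler output: the total list of AJAX page URLs.
const urls = ["u1", "u2", "u3", "u4"];
const numCrawlers = 2;

// Partition the URL list round-robin among the parallel crawlers.
const partitions = urls.reduce((parts, url, i) => {
  parts[i % numCrawlers].push(url);
  return parts;
}, Array.from({ length: numCrawlers }, () => []));

// Each crawler builds an index over its disjoint partition (mocked here).
const indexes = partitions.map(part =>
  part.map(uri => ({ uri, state: "s0", score: 1 })));

// Query shipping: evaluate against every index, then merge and rank.
function search(indexes, match) {
  return indexes
    .flatMap(index => index.filter(match))  // partial results per index
    .sort((a, b) => b.score - a.score);     // merged final list
}

console.log(search(indexes, r => r.uri !== "u3").length);  // 3
```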
  • 43. Now  Introduction  Modeling AJAX  AJAX Crawling  Architecture of A Search Engine  Experimental Results & Conclusions
  • 44. Experimental Setup  YouTube Datasets  Algorithms:  Traditional Crawling  AJAX Non-Cached  AJAX Cached  AJAX Parallel YouTube Statistics
  • 45. Crawling Time  Network time is predominant.  This underlines the importance of applying the Hot Node optimization.
  • 46. Total Crawling Time & Network Time
  • 47. Crawling Time  The Hot Node heuristic is effective.  It yields a factor of 1.29 improvement in crawling time over the Non-Cached approach.
  • 48. Number of AJAX Events Resulting In Network Requests
  • 49. Crawling Time  Parallelization is effective.  The running time decreases by almost 25% compared to the non-parallel AJAX version.
  • 50. Query Processing Time  YouTube queries.  Query processing times on YouTube.
  • 51. Recall  For each query we compared the number of videos returned by the traditional approach with the total number of videos returned by the AJAX Crawl approach, when comment pages are also taken into account.
  • 52. Discussion  Combine with existing search engines.  Focus on a specific user’s interaction with the server.  Support more AJAX applications, such as forms.  Irrelevant events: this paper focuses on the most important events (click, doubleClick, mouseover).
  • 53. Questions? Comments?
