
Web Archives and the dream of the Personal Search Engine


Keynote at the 4th Alexandria Workshop organised by Avishek Anand and Wolfgang Nejdl, L3S, Hannover (Germany). I argue that Web Archives should act as a pivot while revisiting the idea of decentralised search.


  1. 1. Web Archives and the dream of the Personal Search Engine Arjen P. de Vries Hannover, October 19th, 2017
  2. 2. Library of the “Muntmuseum” in Utrecht (Erik van Hannen)
3. 3. Why Search Remains Difficult to Get Right  Heterogeneous data sources - WWW, Wikipedia, news, e-mail, patents, Twitter, personal information, …  Varying result types - “Documents”, tweets, courses, people, experts, gene expressions, temperatures, …  Multiple dimensions of relevance - Topicality, recency, reading level, … Actual information needs often require a mix within and across dimensions. E.g., “recent news and patents from our top competitors”
4. 4.  System’s internal information representation - Linguistic annotations - Named entities, sentiment, dependencies, … - Knowledge resources - Wikipedia, Freebase, ICD9, IPTC, … - Links to related documents - Citations, URLs  Anchors that describe the URI - Anchor text  Queries that lead to clicks on the URI - Session, user, dwell-time, …  Tweets that mention the URI - Time, location, user, …  Other social media that describe the URI - User, rating - Tag, organisation of ‘folksonomy’ + UNCERTAINTY ALL OVER!
5. 5. Learning to Rank (LTOR)  IR as a machine learning problem  Learn the matching function from observations - E.g., pairwise – a clicked document ranked below a non-clicked retrieved document should trigger a swap of their positions
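The pairwise idea on the slide above can be sketched in a few lines. This is an illustrative sketch, not the speaker's code or any particular LTR library: it only extracts the "should-swap" training pairs from one ranking plus its clicks.

```python
# Sketch of the pairwise learning-to-rank signal: a clicked document
# ranked below a non-clicked one yields a pair whose order should swap.
# Function and document names are invented for illustration.

def pairwise_updates(ranking, clicked):
    """Yield (lower, upper) pairs where a clicked document sits below
    a non-clicked document in the ranking."""
    for i, doc in enumerate(ranking):
        if doc in clicked:
            for other in ranking[:i]:
                if other not in clicked:
                    yield (doc, other)

ranking = ["d1", "d2", "d3", "d4"]
swaps = list(pairwise_updates(ranking, clicked={"d3"}))
# d3 was clicked but ranked below d1 and d2, so both pairs are training signal.
```

A pairwise learner (e.g. RankNet-style) would turn each such pair into a gradient step on the matching function.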
  6. 6. Detect and classify NEs Rank search results Predict query intent Search suggestions
  7. 7. Spelling correction Predict query intent Rank Verticals Search suggestions
8. 8. Robert Johnson (1911-1938): “Early this morning when you knocked upon my door / And I said, ‘Hello, Satan, I believe it’s time to go.’”
  9. 9. WWW  The Web has become ever more centralized + Cloud services – good value-for-money/value-for-effort  Mobile makes things only worse “There is an app for that”
  10. 10. Decentralize Web Search? See also
  11. 11. Without the log data, web search isn’t as good  This also hinders retrieval experiments in academia! - Reproducibility vs. Representativeness of research results? Samar, T., Bellogín, A. & de Vries, A.P. Inf Retrieval J (2016) 19: 230. doi: 10.1007/s10791-015-9276-9
  12. 12. Disclaimer:
  13. 13. Personal search engine!
  14. 14.
16. 16. Realistic?  Clueweb 2012: 80TB  Recent CommonCrawl (August 2017): 3.28B pages, 280TB  Average web page takes up 320 KB - Large sample collected with Googlebot, May 26th, 2010 - Reported 4.2B pages (would require ~1.3 Petabyte)  De Kunder & Van den Bosch estimate an upper bound of ~50B pages - Also considering continuing growth (claimed in unpublished work) - Andrew Trotman, Jinglan Zhang, Future Web Growth and its Consequences for Web Search Architectures.
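The petabyte figure on this slide follows directly from the two reported numbers; a quick back-of-the-envelope check, using decimal units and the values from the slide:

```python
pages = 4.2e9           # pages reported by the 2010 Googlebot sample
avg_page_bytes = 320e3  # average page size, ~320 KB
petabytes = pages * avg_page_bytes / 1e15
print(round(petabytes, 2))  # 1.34, i.e. the slide's "~1.3 Petabyte"
```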
  17. 17. Realistic?  Who actually needs all of the Web if their search engine is truly personal?  E.g., I do not read more than 4 or 5 languages…  And, I do not want to see or read anything related to qualifiers for the world cup
  18. 18. Two Problems  How to get the web data on the personal search engine?  How to replace the lack of usage data from many?
19. 19. Getting the Data  Idea: - Organize the web crawl in topically related bundles - Apply bittorrent-like decentralization to share & update bundles  Use techniques inspired by query obfuscation to hide the real user’s interests when downloading bundles See also WebRTC based in-browser implementations:  WebTorrent:  CacheP2P: shares 16TB research data, including Clueweb 2009 and 2012 anchor text And, IPFS: “A peer-to-peer hypermedia protocol to make the web faster, safer, and more open.”
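The query-obfuscation idea above, requesting decoy bundles alongside the bundle of real interest so that peers serving the download cannot single out the user's topic, can be sketched as follows. Bundle names and function names are invented for illustration:

```python
import random

# Sketch: hide the user's real interest by mixing the wanted bundle
# into a shuffled request containing randomly chosen decoy bundles.

def obfuscated_request(real_bundle, all_bundles, n_decoys=3, rng=random):
    decoys = rng.sample([b for b in all_bundles if b != real_bundle], n_decoys)
    request = decoys + [real_bundle]
    rng.shuffle(request)  # the real bundle's position reveals nothing
    return request

bundles = ["sports", "politics", "music", "cooking", "science"]
req = obfuscated_request("science", bundles, n_decoys=2)
```

A peer observing the request sees three equally plausible topics; stronger guarantees would need decoys drawn from a realistic interest distribution, which this sketch does not attempt.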
  20. 20. Web Archives to the Rescue?  Web Archives already store the data that the personal search engine would need - Just not (yet) organized in topical bundles
21. 21. Rescue the Web archives?!  Q: a business model for archiving?  Q: enrich the (rarely used) web archives with usage data?  Q: crowd-sourced seed-lists for crawling? See also a different direction to rescue the Web Archive, by Hugo Huurdeman
  22. 22. IPFS  “The Permanent Web” - Smart mix of bittorrent for peer-2-peer filesharing and git for versioning - Each file and all of the blocks within it are given a unique fingerprint called a cryptographic hash - This hash is used to lookup files  IPFS = the Inter-Planetary File System  Decentralized file sharing, but no decentralized search
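The content-addressing scheme described on the IPFS slide can be illustrated with a toy store, where the lookup key is a cryptographic hash of the block itself rather than a location. This uses plain SHA-256 via `hashlib`; real IPFS uses a multihash/CID format, so this is a simplified sketch:

```python
import hashlib

# Toy content-addressed store in the spirit of IPFS: blocks are keyed
# by their own cryptographic hash, not by where they live.
store = {}

def put(block: bytes) -> str:
    key = hashlib.sha256(block).hexdigest()
    store[key] = block
    return key

def get(key: str) -> bytes:
    block = store[key]
    # Integrity verification comes for free: re-hashing must reproduce the key.
    assert hashlib.sha256(block).hexdigest() == key
    return block

cid = put(b"hello, permanent web")
```

Because the key is derived from the content, any peer can serve the block and the requester can verify it, which is what makes decentralized file sharing work; decentralized *search* over such a store remains the open problem the slide points at.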
23. 23. “… communication and media limitations, due to the distance between Earth and Mars, resulting in time delays: they will have to request the movies or news broadcasts they want to see in advance. […] Easy Internet access will be limited to their preferred sites that are constantly updated on the local Mars web server. Other websites will take between 6 and 45 minutes to appear on their screen - first 3-22 minutes for your click to reach Earth, and then another 3-22 minutes for the website data to reach Mars.”
  24. 24. “Searching from Mars”  Tradeoff between “effort” (waiting for responses from Earth) and “data transfer” (pre-fetching or caching data on Mars).  Related work: - Jimmy Lin, Charles L. A. Clarke, and Gaurav Baruah. Searching from Mars. Internet Computing, 20(1):77-82, 2016. - Charles L.A. Clarke, Gordon V. Cormack, Jimmy Lin, and Adam Roegiest. Total Recall: Blue Sky on Mars. ICTIR '16. - Charles L. A. Clarke, Gordon V. Cormack, Jimmy Lin, Adam Roegiest. Ten Blue Links on Mars.
  25. 25. Pre-fetching & Caching  Hide latencies of getting the data from the live web - Pre-fetch pages linked from initial query results page - Pre-fetch additional related pages - Pre-fetches expanded with those from query suggestions  Cache web data to avoid accessing the live web
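The pre-fetch-and-cache loop on the slide above can be sketched as follows, assuming hypothetical `fetch_page` and `linked_urls` helpers that stand in for real crawling and HTML parsing code:

```python
# Sketch: hide Earth-Mars latency by pre-fetching pages linked from a
# fetched page into a local cache, so later clicks are served locally.

cache = {}

def fetch_page(url):
    # Hypothetical stand-in for a (slow) fetch from the live web.
    return f"<html>content of {url}</html>"

def linked_urls(page):
    # Hypothetical stand-in for link extraction from the fetched HTML.
    return []

def get(url):
    if url not in cache:
        cache[url] = fetch_page(url)
        # Eagerly pre-fetch linked pages, trading data transfer now
        # for lower waiting time later.
        for link in linked_urls(cache[url]):
            cache.setdefault(link, fetch_page(link))
    return cache[url]

first = get("https://example.org/results")
```

Expanding the pre-fetch set with pages for query suggestions, as the slide proposes, would simply add more seeds to the same loop.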
  26. 26. Analogy  Web Archive ~ Earth  Personal search engine (@ people’s homes) ~ Mars
  27. 27. Two Problems  How to get the web data on the personal search engine?  How to replace the lack of usage data from many?
  28. 28. Alternatives for Log Data?  Social annotations - E.g., shortened urls - Still requires access to an API conveying the query representation - E.g., anchor text - E.g., “twanchor text” – tweets providing context to a URL
29. 29. “SearsiaSuggest”  Searsia (federated search engine created by Djoerd Hiemstra) uses anchor text instead of query logs for its autocompletions - “… for queries of 2 words or more (the average query length in the test data is 2.6), anchor text autocompletions perform better than query log autocompletions” - No more tracking of users!
  30. 30. Anchor Text & Timestamps  Anchor text exhibits characteristics similar to user query and document title [Eiron & McCurley, Jin et al.]  Anchor text with timestamps can be used to capture & trace entity evolution [Kanhabua and Nejdl]  Anchor text with timestamps lets us reconstruct (past) topic popularity [Samar et al.] Again, the Web Archive to the rescue!
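Reconstructing past topic popularity from timestamped anchor text, as on the slide above, amounts to counting anchor-text occurrences per (term, period). A minimal sketch with invented data:

```python
from collections import Counter

# (anchor text, crawl year) pairs; the data is invented for illustration.
anchors = [
    ("world cup", 2010), ("world cup", 2010), ("world cup", 2011),
    ("royal wedding", 2011), ("royal wedding", 2011),
]

popularity = Counter(anchors)
# A per-year count traces how interest in a topic evolved over time.
trend = [popularity[("world cup", y)] for y in (2010, 2011, 2012)]
```

On a real web archive the pairs would come from external links with crawl timestamps, and the counts could then be compared against ground truth such as Google Trends, as the next slide describes.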
  31. 31. Recover Past Trends  “Ground-truth” from WikiStats, Google Trends and the KB online newspaper archive’s query log  Anchor Text combined with timestamps can be used to find past popular topics - The % of coverage varies across the sources of past trends - Anchor Text popularity correlates with the % of coverage  Crawl strategy: KB vs. CommonCrawl - Breadth-first (CommonCrawl) covers more topics globally and from the NL domain
  32. 32. Investigate Bias  Our study, on the Dutch Web Archive: - Anchor Text from external links - Create query sets with a timestamp per query (2009 – 2012) - De-duplicated for year of crawl (Most sites crawled once a year, but a subset more frequently.)  Retrievability study - Number of sites crawled in a year does not influence the retrievability of documents from that year - Difficulty to retrieve a document from a certain timeframe does depend on the subset size L. Azzopardi and V. Vinay. Retrievability: An evaluation measure for higher order information access tasks. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM ’08, pages 561–570, New York, NY, USA, 2008. ACM.
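The retrievability measure cited on this slide sums, over a set of queries, an access function of the rank at which a document is retrieved; in its simplest cumulative form the function is 1 if the document appears within a rank cutoff c, else 0. A minimal sketch of that form, with invented rankings:

```python
# Sketch of retrievability r(d) = sum over queries q of f(rank of d, c),
# using the cumulative form: f = 1 if d is in the top-c results, else 0.

def retrievability(doc, rankings, c=2):
    """Count the queries for which `doc` appears in the top-c results."""
    return sum(1 for ranked in rankings if doc in ranked[:c])

rankings = [
    ["d1", "d2", "d3"],   # results for query 1
    ["d2", "d1", "d3"],   # results for query 2
    ["d3", "d2", "d1"],   # results for query 3
]
r_d2 = retrievability("d2", rankings)  # in the top-2 for every query
```

A document with low retrievability across a large query sample is effectively invisible to the engine, which is the bias the slide's study measures per crawl timeframe.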
  33. 33.
34. 34. PhD defense: October 30th, 2017
35. 35. Trade log data! IR-809: (2011) Feild, H., Allan, J. and Glatt, J., "CrowdLogging: Distributed, private, and anonymous search logging," Proceedings of the International Conference on Research and Development in Information Retrieval (SIGIR'11), pp. 375-384. We describe an approach for distributed search log collection, storage, and mining, with the dual goals of preserving privacy and making the mined information broadly available. [..] The approach works with any search behavior artifact that can be extracted from a search log, including queries, query reformulations, and query-click pairs.
  36. 36. Open challenges  How to select the part of your log data you are willing to share?  How to estimate the value of this log data?
  37. 37. Share Log Segments by Topic?  Represent searchers’ previous search history in the form of concise human-readable topical profiles - Classifier trained on ODP applied to clicked pages Carsten Eickhoff, Kevyn Collins-Thompson, Paul Bennett, and Susan Dumais. Designing human-readable user profiles for search evaluation (ECIR’13)
  38. 38. Share Log Segments by Topic?  Linking a year of query logs to Wikipedia categories helped distill the segments corresponding to events like marriage, a first-born, and expat life (taxes) - Jiyin He, Marc Bron: Measuring Demonstrated Potential Domain Knowledge with Knowledge Graphs. KG4IR@SIGIR 2017: 13-18
  39. 39. But wait… … do we REALLY need all that query log info?
  40. 40. Personal search engine  Safely gain access to rich personal data including email, browsing history, documents read and contents of the user’s home directory  Can high quality evidence about an individual’s recurring long-term interests replace the shallow information of many?
  41. 41. Better Search – “Deep Personalization”  “Even more broadly than trying to get people the right content based on their context, we as a community need to be thinking about how to support people through the entire search experience.” Jaime Teevan on “Slow Search”  Search as a dialogue My first journal paper: De Vries, Van der Veer and Blanken: Let’s talk about it: dialogues with multimedia databases (1998)
  42. 42. “Deep Personalization”  How could the indexer know about the wide variety of sources and their schema information...  Or, How to build 1000+ search engines?!
  43. 43. Create LOD representation
44. 44. Engineer the Search Engine / Model the Search Engine
  45. 45. “Search by Strategy” • “No idealized one-shot search engine” • Hand over control to the user (or, most likely, the search intermediary)
  46. 46. “Search by Strategy” • “No idealized one-shot search engine” • Hand over control to the user (or, most likely, the search intermediary) • Search (and link) strategies can be shared!
  47. 47. Note: Enhances Reproducibility of IR Research!
  48. 48. Web Archives to Lead the Revolution!  Two main opportunities: - Free us from the mass surveillance that is now the default business model of the internet - Improve Web Archive and Web Archive search  Long run: realize truly personal search engines?
49. 49. Blueprint of the Personal Search Engine  Decentralize search  Web Archives to the rescue - Super-peers in a P2P network of personal search engines  “Deep personalization” - Exploit the rich source data that can be processed safely locally  A sharing economy: - Data markets to trade log data and improve – mutually – your search results