From federated to aggregated search
SIGIR 2010 Tutorial, with Fernando Diaz & Milad Shokouhi


From federated to aggregated search Presentation Transcript

  • 1. From federated to aggregated search. Fernando Diaz, Mounia Lalmas and Milad Shokouhi
  • 2. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 3. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 4. Introduction
    • What is federated search?
    • What is aggregated search?
      • Motivations
      • Challenges
      • Relationships
  • 5. A classical example of federated search: www.theeuropeanlibrary.org (one query sent to multiple collections to be searched)
  • 6. A classical example of federated search: www.theeuropeanlibrary.org (merged list of results)
  • 7. Motivation for federated search
    • Search a number of independent collections, with a focus on hidden-web collections
      • Collections that are not easily crawlable (and often should not be)
    • Access to up-to-date information and data
    • Parallel search over several collections
    • Effective tool for enterprise and digital library environments
  • 8. Challenges for federated search
    • How to represent collections, so that we know what documents each contains?
    • How to select the collection(s) to be searched for relevant documents?
    • How to merge results retrieved from several collections, to return one list of results to the users?
      • Cooperative environment
      • Uncooperative environment
  • 9. From federated search to aggregated search
    • “ Federated search on the web”
      • Peer-to-peer network connects distributed peers (usually for file sharing), where each peer can be both server and client
      • Metasearch engine combines the results of different search engines into a single result list
      • Vertical search – also known as aggregated search – adds the top-ranked results from relevant verticals (e.g. images, videos, maps) to typical web search results
  • 10. A classical example of aggregated search (homepage aggregating news, Wikipedia, real-time results, video, Twitter, and structured data)
  • 11. Motivation for aggregated search
    • Increasingly, different types of information are available, sought, and relevant
      • e.g. news, image, wiki, video, audio, blog, map, tweet
    • Search engines allow access to these through so-called verticals
    • Two “ways” to search
      • Users can directly search the verticals
      • Or rely on so-called aggregated search
    Google universal search, 2007: [ … ] search across all its content sources, compare and rank all the information in real time, and deliver a single, integrated set of search results [ … ] will incorporate information from a variety of previously separate sources – including videos, images, news, maps, books, and websites – into a single set of results. http://www.google.com/intl/en/press/pressrel/universalsearch_20070516.html
  • 12. Motivation for aggregated search (Arguello et al, 09): 25K editorially classified queries
  • 13. Motivation for aggregated search
  • 14. Motivation for aggregated search
  • 15. Challenges in aggregated search
    • Extremely heterogeneous collections
    • What is/are the vertical intent(s)?
        • Handling ambiguous (query | vertical) intent
        • Handling non-stationary intent (e.g. news, local)
    • How many results to return from each vertical, and where to position them on the result page?
        • Slotting results
        • Users mostly look at the 1st result page
    • Page optimization and its evaluation
  • 16. Ambiguous, non-stationary intent (example: queries “travel”, “mollusk”, “paul” vs. verticals Wikipedia, News, Image)
  • 17. Recap – Introduction
    • Heterogeneity: low (federated search) vs. high (aggregated search)
    • Scale (documents, users): small vs. large
    • User feedback: little vs. a lot
  • 18. Terminology
    • federated search, distributed information retrieval, data fusion, aggregated search, universal search, peer-to-peer network
    • resource, vertical, database, collection, source, server, domain, genre
    • merging, blending, fusion, aggregation, slotted, tiled
  • 19. Problem definition Present the “querier” with a summary of search results from one or more resources.
  • 20. General architecture (diagram: the user issues a raw query to a search interface/portal/broker, which dispatches a query to each source/server/vertical)
  • 21. Peer-to-peer network (diagram: peers connected to a directory server)
  • 22. Peer to Peer (P2P) networks
    • Broker-based
      • Single centralized broker with document lists shared from peers (e.g. Napster, original version)
    • Decentralized
      • Each peer acts as both client and server (e.g. Gnutella v0.4)
    • Structure-based
      • Use distributed hash tables (DHT) (e.g. Chord (Stoica et al, 03))
    • Hierarchical
      • Use local directory services for routing and merging (e.g. Swapper.NET)
  • 23. Federated search (diagram: a broker holds summaries Sum A–E of collections A–E, forwards the query to the collections, and merges their results)
  • 24. Federated search
    • Also known as distributed information retrieval (DIR) system
    • Provides one portal for searching information from multiple sources
      • corporate intranets, fee-based databases, library catalogues, internet resources, user-specific digital storage
    • Funnelback, Westlaw, FedStats, Cheshire, etc (see also http://federatedsearchblog.com/ )
  • 25. http://funnelback.com/pdfs/brochures/enterprise.pdf
  • 26. Metasearch (diagram: the user's raw query goes to the metasearch engine, which queries several web search engines)
  • 27. Metasearch
    • A search engine that queries several different search engines and either combines their results (blended) or displays the results separately (non-blended)
    • Does not crawl the web but relies on data gathered by other search engines
    • Dogpile, Metacrawler, Search.com, etc
      • ( see http://www.cryer.co.uk/resources/searchengines/meta.htm )
  • 28. Aggregated search (diagram: the query “Angelina Jolie” is sent to the web text index and to several vertical indexes, and the results are aggregated)
  • 29. Aggregated search
    • Specific to a web search engine
    • “Increasingly”, more than one type of information is relevant to an information need
      • mostly web page + image, map, blog, etc
    • These types of information are indexed and ranked using dedicated approaches (verticals)
    • Presenting the results from verticals in an aggregated way is believed to be more useful
    • All major search engines perform some level of aggregated search
  • 30. Data fusion (diagram: one document collection, e.g. GOV2; different document representations, e.g. anchor-only and title-only; different retrieval models, e.g. BM25, KL, InQuery; the rankings are merged into one ranked list of results) (e.g. Voorhees et al, 95)
  • 31. Data fusion
    • Search one collection
    • Document can be indexed in different ways
      • Title index, abstract index, etc (poly-representation)
      • Weighting scheme
    • Different retrieval models
    • Rankings generated by different retrieval models (or different document representations) are merged to produce the final ranking
    • Has often been shown to improve retrieval performance (TREC)
  • 32. Terminology - Resource
    • Source
    • Server
    • Database
    • Collection (federated search)
    • Server
    • Vertical (aggregated search)
    • Domain
    • Genre
  • 33. Terminology - Aggregation
    • Merging
    • Blending
    • Fusion
    • Slotted
    • Tiled
  • 34. Aggregated search (tiled) http://au.alpha.yahoo.com/
  • 35. Aggregated search (tiled) Naver.com
  • 36. Aggregated search (slotted)
  • 37. Others
    • Clustering
    • Faceted search
    • Multi-document summarization
    • Document generation
    • Entity search
      • (see special issue – in press – on “Current research in focused retrieval and result aggregation”, Journal of Information Retrieval (Trotman et al, 10))
  • 38. Yippy – Clustering search engine from Vivisimo clusty.com
  • 39. Faceted search
  • 40. Multi-document summarization http://newsblaster.cs.columbia.edu/
  • 41. “Fictitious” document generation (Paris et al, 10)
  • 42. Entity search http://sandbox.yahoo.com/Correlator
  • 43. Recap
    • Shown the relations between federated, aggregated search, and others
    • Exposed the various terminologies used
    • In the rest of the tutorial, we concentrate on federated search and aggregated search
    • Focus is on “effective search”
  • 44. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 45. Architecture: what are the general components of federated and aggregated search systems?
  • 46. Federated search architecture
  • 47. Aggregated search architecture
    • Pre-retrieval aggregation: decide verticals before seeing results
    • Post-retrieval aggregation: decide verticals after seeing results
    • Pre-web aggregation: decide verticals before seeing web results
    • Post-web aggregation: decide verticals after seeing web results
  • 48. Post-retrieval, pre-web
  • 49. Pre and post-retrieval, pre-web
  • 50. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 51. Resource representation: how to represent resources, so that we know what documents each contains.
  • 52. Resource representation in federated search (Also known as resource summary/description)
  • 53. Resource representation
    • Cooperative environments
      • Comprehensive term statistics
      • Collection size information
    • Uncooperative environments
      • Query-based sampling
      • Collection size estimation
  • 54. Resource representation (cooperative environments)
    • STARTS Protocol (Gravano et al, 97)
      • Source metadata
      • Rich query language
  • 55.
    • Different types of term statistics
      • (Callan et al, 95; Gravano et al, 94a,b,99; Meng et al, 01; Yuwono and Lee, 97; Xu and Callan, 98; Zobel, 97 )
    • Anchor-text
      • HARP (Hawking and Thomas, 05)
    Resource representation (cooperative environments)
  • 56. Resource representation (uncooperative environments)
    • Query-based sampling (Callan and Connell, 01) – see the sketch below
      • Select a query, probe the collection
      • Download the top n documents
      • Select the next query, repeat
    (diagram: a query selector issues probe queries and accumulates sampled documents)
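A minimal sketch of the sampling loop just described, in the spirit of (Callan and Connell, 01); `search_collection(query, n)` is a hypothetical stand-in for the uncooperative collection's probe API:

```python
import random

def query_based_sampling(search_collection, seed_terms, n_docs=4, max_docs=300):
    """Sketch of query-based sampling (Callan and Connell, 01).

    search_collection(query, n) is a hypothetical probe API returning the
    top-n results as (doc_id, text) pairs. Follow-up probe terms are drawn
    from the sample itself (a "learned resource description" strategy).
    """
    sample, seen = [], set()
    vocabulary = list(seed_terms)
    while len(sample) < max_docs and vocabulary:
        query = random.choice(vocabulary)                 # select a probe query
        for doc_id, text in search_collection(query, n_docs):
            if doc_id not in seen:                        # download unseen top-n docs
                seen.add(doc_id)
                sample.append((doc_id, text))
                vocabulary.extend(text.split())           # grow the probe vocabulary
    return sample
```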
  • 57.
    • Query selector
      • (Callan and Connell, 01)
        • Other resource description (ord)
        • Learned resource description (lrd)
          • Average tf, random, df, ctf
      • Query logs
        • (Craswell, 00; Shokouhi et al, 07d)
      • Focused probing
        • (Ipeirotis and Gravano, 02)
    Resource representation (uncooperative environments)
  • 58.
    • Adaptive sampling
      • (Shokouhi et al, 06a)
        • Rate of visiting new vocabulary
      • (Baillie et al, 06a)
        • Rate of sample quality improvement (reference query log)
      • (Caverlee et al, 06)
        • Proportional document ratio ( PD )
        • Proportional vocabulary ratio ( PV )
        • Vocabulary growth (VG)
    Resource representation (uncooperative environments)
  • 59.
    • Improving incomplete samples
      • Shrinkage (Ipeirotis, 04; Ipeirotis and Gravano, 04) : topically related collections should share similar terms
      • Q-pilot (Sugiura and Etzioni, 00) :
      • sampled documents + backlinks + front page
    Resource representation (uncooperative environments)
  • 60.
    • Capture-recapture (Liu et al, 01)
    Resource representation (Collection size estimation) (diagram: Sample A = capture, Sample B = recapture)
  • 61. Resource representation (Collection size estimation)
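The capture-recapture idea reduces to the classic Petersen estimator (the fish-population method of Schumacher and Eschmeyer cited in the bibliography): if two independent random samples of sizes |A| and |B| share m documents, the collection size is estimated as N ≈ |A|·|B| / m. A minimal sketch:

```python
def estimate_collection_size(sample_a, sample_b):
    """Petersen-style capture-recapture estimate (Liu et al, 01):
    N ≈ |A| * |B| / |A ∩ B| for two independent random samples of doc ids."""
    overlap = len(set(sample_a) & set(sample_b))
    if overlap == 0:
        raise ValueError("no recaptured documents; take larger samples")
    return len(sample_a) * len(sample_b) / overlap

# e.g. two samples of 200 documents sharing 4 documents -> N ≈ 10,000
```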
  • 62.
    • Multiple queries sampler (Thomas and Hawking, 07)
    • Random-walk sampler and pool-based sampler (Bar-Yossef and Gurevich, 06)
    • Collection overlap estimation (Shokouhi and Zobel, 07)
    Resource representation (Collection size estimation)
  • 63. Resource representation (Updating summaries)
    • (Ipeirotis et al, 05)
    • (Shokouhi et al, 07a)
  • 64. Resource representation in aggregated search
    • Vertical content
      • samples or access to vertical API
      • represents content supply
    • Vertical query logs
      • samples or access to historic vertical searches
      • represents content demand
  • 65. Vertical content includes text NEWS
  • 66. Vertical content includes structure SPORTS
  • 67. Vertical content includes images IMAGES
  • 68. Issues with vertical content
    • Dynamics
      • some verticals become stale fast
    • Heterogeneous content
      • heterogeneous ranking algorithms
    • Non-free text APIs
      • affects query-based sampling
  • 69. Addressing content dynamics
    • sample most recently indexed documents
      • (Diaz 09)
    • assumes users more likely to be interested in recent content
    • in practice, only need a fraction of the corpus to perform well
    (Konig et al, 09)
  • 70. Addressing heterogeneous content
    • use text available with documents (e.g. captions)
    • manually map to surrogates (e.g. wikipedia pages)
    (Arguello et al, 09): performance of two different methods of dealing with heterogeneous content
  • 71. Vertical query logs
    • Queries issued directly to a vertical represent explicit vertical intent
    • This is similar to having a large body of labeled queries
  • 72. Issues with vertical query logs
    • Dynamics
      • some verticals require temporally-sensitive sampling
      • for example, we do not want to sample news query logs for a whole year
    • Non-free text APIs
      • affects query modeling
  • 73. Hybrid approaches
    • Should only sample documents likely to be useful for vertical selection/merging
      • e.g. a document which is never requested is not useful for representing a vertical
    • Suggests log-biased sampling
      • (Shokouhi et al, 06; Arguello et al, 09)
  • 74. Recap – Resource representation
    • Representation completeness: low (federated search) vs. low-high (aggregated search)
    • Representation generation: sampling/shared dictionaries vs. sampling, API
    • Freshness: important vs. critical
  • 75. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 76. Resource selection: how to select the resource(s) to be searched for relevant documents.
  • 77. Resource selection for federated search (diagram: the broker, holding summaries Sum A–E, forwards the query only to the selected collections)
  • 78.
    • “Big-document” bag-of-words summaries
        • CORI (Callan et al, 95)
        • GlOSS (Gravano et al, 94b)
        • CVV (Yuwono and Lee, 97)
    Resource selection (Lexicon-based methods) (diagram: the broker samples collections A–C)
  • 79. Resource selection (Lexicon-based methods)
    • CORI
    • GlOSS
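The CORI and GlOSS formulas on this slide did not survive extraction. For reference, a commonly cited statement of the CORI collection-scoring function (Callan et al, 95), in our reconstruction, is:

```latex
T = \frac{df}{df + 50 + 150 \cdot cw/\overline{cw}}, \qquad
I = \frac{\log\!\left(\frac{|C| + 0.5}{cf}\right)}{\log(|C| + 1.0)}, \qquad
p(t \mid C_i) = b + (1 - b) \cdot T \cdot I
```

where df is the document frequency of query term t in collection C_i, cf the number of collections containing t, |C| the number of collections, cw the word count of C_i (with mean cw̄ across collections), and b a default belief, typically 0.4.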
  • 80.
    • Sample documents with retained boundaries
        • ReDDE (Si and Callan, 03a)
        • CRCS (Shokouhi, 07a)
        • SUSHI (Thomas and Shokouhi, 09)
    Resource selection (Document-surrogate methods) (diagram: the broker samples collections A–C)
  • 81. Resource selection (Document-surrogate methods)
    • ReDDE
    • ReDDE assumes that the top-ranked sampled documents are relevant
    • ReDDE estimates collection sizes by sample-resample
    • Assuming that all collections have the same size, we have: yellow > blue > red
    • CRCS is inspired by ReDDE but assigns a different probability of relevance based on document position: red > yellow, blue
    (diagram: the broker ranks all sampled documents for the query; a sketch follows below)
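A sketch of the ReDDE vote-counting logic described above; the centralized ranking of sampled documents and the per-collection sample/size statistics are inputs the broker is assumed to have:

```python
def redde_scores(central_ranking, sample_size, est_size, top_n=100):
    """Simplified ReDDE (Si and Callan, 03a): each sampled document in the
    top-n of a centralized ranking of all sampled documents votes for its
    source collection, weighted by the unseen documents it represents
    (estimated collection size N_c over sample size n_c).

    central_ranking: list of (collection_id, doc_id), best first;
    sample_size / est_size: per-collection dicts of n_c and N_c."""
    scores = {}
    for collection_id, _ in central_ranking[:top_n]:
        vote = est_size[collection_id] / sample_size[collection_id]
        scores[collection_id] = scores.get(collection_id, 0.0) + vote
    # CRCS (Shokouhi, 07a) instead decays each vote with its rank position
    return scores
```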
  • 82.
    • SUSHI
    Resource selection (Document-surrogate methods)
  • 83.
    • SUSHI
    Resource selection (Document-surrogate methods)
  • 84.
    • SUSHI
    Resource selection (Document-surrogate methods)
    • Different regression functions for each collection and query
    • Scores are comparable (estimated over the same index)
  • 85.
    • Utility maximization techniques
      • Model the search effectiveness
      • DTF (Nottelmann and Fuhr, 03), UUM (Si and Callan, 04a), RUM (Si and Callan, 05b)
    • Classification-based methods
      • Classify collections/queries for better selection
      • Classification-aware server selection (Ipeirotis and Gravano, 08), classification-based resource selection (Arguello et al, 09a), learning from past queries (Cetintas et al, 09)
    Resource selection (Supervised methods)
  • 86. Resource selection in aggregated Search
    • Content-based predictors
      • derived from (sampled) vertical content
    • Query string-based predictors
      • derived from query text, independent of any resource associated with a vertical
    • Query log-based predictors
      • derived from previous requests issued by users to the vertical portal
  • 87. Content-based predictors
    • Distributed information retrieval (DIR) predictors
    • Simple result set predictors
      • numresults, score distributions, etc
      • (Diaz, 09; Konig et al, 09)
    • Complex result set predictors
      • Clarity (Cronen-Townsend et al, 02)
      • Autocorrelation (Diaz, 07)
      • Many, many more (Hauff, 10)
  • 88. Issues with content-based predictors
    • DIR (usually) assumes homogeneous content types
    • performance predictors (usually) assume text corpora
    • assumes ranking function consistency
      • between verticals
      • between vertical selector machine and vertical ranker machine
    • verticals have different dynamics (e.g. news vs. image)
  • 89. String-based predictors
    • Dictionary lookups
      • terms correlated with a vertical (e.g., movie titles)
    • Regular expressions
      • patterns correlated with explicit vertical requests (e.g., obama news)
    • Named entities
      • automatically-detected entity types (e.g., geographic entities)
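A toy illustration of the dictionary and regular-expression predictors above; the term list and pattern are invented examples, not from the tutorial:

```python
import re

MOVIE_TITLES = {"avatar", "inception"}            # hypothetical curated dictionary
NEWS_PATTERN = re.compile(r"\b(news|headlines|latest)\b", re.I)

def string_based_features(query):
    """Binary string-based predictors for vertical selection."""
    tokens = set(query.lower().split())
    return {
        "movies_dictionary_hit": bool(tokens & MOVIE_TITLES),
        "explicit_news_request": bool(NEWS_PATTERN.search(query)),
    }

# string_based_features("obama news")
# -> {'movies_dictionary_hit': False, 'explicit_news_request': True}
```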
  • 90. String-based predictors
    • Issues
      • curating lists and expressions (manual or automatic)
      • terms included in dictionary manually vetted for relevance
        • high precision/low recall
  • 91. Log-based predictors
    • Classification approaches
      • (Beitzel et al, 07; Li et al, 08)
    • Language model approaches
      • (Arguello et al, 09)
    • Issues
      • verticals with structured queries (e.g. local)
      • query logs with dynamics (e.g. news)
      • (Diaz, 09)
  • 92. Comparing predictor performance (Arguello et al, 09)
  • 93. Predictor cost
    • Pre-retrieval predictors
      • computed without sending the query to the vertical
      • no network cost
    • Post-retrieval predictors
      • computed on the results from the vertical
      • requires vertical support of web scale query traffic
      • incurs network latency
      • can be mitigated with vertical content caches
  • 94. Combining predictors
    • Use predictors as features for a machine-learned model
    • Training data
      • editorial data
      • behavioral data (e.g. clicks)
      • other vertical data
    (Diaz, 09; Arguello et al, 09; Konig et al, 09)
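A minimal sketch of predictor combination with a machine-learned model, here a log-linear (logistic regression) model in the spirit of (Diaz, 09; Arguello et al, 09); the feature values and labels are schematic, and scikit-learn is assumed to be available:

```python
from sklearn.linear_model import LogisticRegression

# each training example: a feature vector f(query, vertical) and a {+,-} label
X = [
    [0.8, 1.0, 0.2],   # e.g. [content_score, string_match, log_based_prior]
    [0.1, 0.0, 0.6],
    [0.9, 1.0, 0.9],
    [0.2, 0.0, 0.1],
]
y = [1, 0, 1, 0]       # editorial or click-derived relevance labels

model = LogisticRegression().fit(X, y)
# P(vertical relevant | query) for a new feature vector
p_vertical = model.predict_proba([[0.7, 1.0, 0.5]])[0, 1]
```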
  • 95. Editorial data
    • Data: <query,vertical,{+,-}>
    • Features: predictors based on f(query,vertical)
    • Models:
      • log-linear (Arguello et al, 09)
      • boosted decision trees (Arguello et al, 10)
  • 96. Combining predictors (Arguello et al, 09)
  • 97. Click data
    • Data: <query,vertical,{click,skip}>, <query,vertical,click-through rate>
    • Features: predictors based on f(query,vertical)
    • Models:
      • log-linear (Diaz, 09)
      • boosted decision trees (Konig et al, 09)
  • 98. Gathering click data
    • Exploration bucket:
      • show suboptimal presentations in order to gather positive (and negative) click/skip data
    • Cold start problem:
      • without a basic model, the best exploration is random
    • Random exploration results in poor user experience
  • 99. Gathering click data
    • Solutions
      • reduce impact to small fraction of traffic/users
      • train a basic high-precision non-click model (perhaps with editorial data)
    • Other issues
      • Presentation bias: different verticals have different click-through rates a priori
      • Position bias: different presentation positions have different click-through rates a priori
  • 100. Click precision and recall (Konig et al, 09): ability to predict queries using thresholded click-through rate to infer relevance
  • 101. Non-target data (diagram: some verticals have training data; the target vertical has none)
  • 102. Non-target data
    • Data: <query,source vertical,{+,-}>
    • Features: predictors based on f(query,target vertical)
    • Models:
      • generic model+adaptation
      • (Arguello et al, 10)
  • 103. Non-target data
      • (Arguello et al, 10)
  • 104. Generic model
    • Objective
      • train a single model that performs well for all source verticals
    • Assumption
      • if it performs well across all source verticals, it will perform well on the target vertical
      • (Arguello et al, 10)
  • 105. Non-target data
      • (Arguello et al, 10)
    adapted model
  • 106. Adapted model
    • Objective
      • learn non-generic relationship between features and the target vertical
    • Assumption
      • can bootstrap from labels generated by the generic model
      • (Arguello et al, 10)
  • 107. Non-target query classification
      • (Arguello et al, 10)
    average precision on target query classification; red (blue) indicates statistically significant improvements (degradations) compared to the single predictor
  • 108. Training set characteristics
    • What is the cost of generating training data?
      • how much money?
      • how much time?
      • how many negative impressions as a result of exploration?
    • Are targets normalized?
      • can we compare classifier output?
  • 109. Training set cost summary
  • 110. Online adaptation
    • Production vertical selection systems receive a variety of feedback signals
      • clicks, skips
      • reformulations
    • A machine-learned system can adjust predictions based on real time user feedback
      • very important for dynamic verticals
      • (Diaz, 09; Diaz and Arguello, 09)
  • 111. Online adaptation
    • Passive feedback: adjust prediction/parameters in response to feedback
      • allows recovery from false positives
      • difficult to recover from false negatives
    • Active feedback/explore-exploit: opportunistically present suboptimal verticals for feedback
      • allows recovery from both errors
      • incurs exploration cost
      • (Diaz, 09; Diaz and Arguello, 09)
  • 112. Online adaptation
    • Issues
      • setting learning rate for dynamic intent verticals
      • normalizing feedback signal across verticals
      • resolving feedback and training signal (click≠relevance)
      • (Diaz, 09; Diaz and Arguello, 09)
  • 113. Recap – Resource selection
  • 114. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 115. Result presentation: how to return results retrieved from several resources to users.
  • 116.
    • Same source (the web), different overlapping indexes
    • Document scores may not be available
    • Title, snippet, position and timestamps
      • D-WISE (Yuwono and Lee, 96)
      • Inquirus (Glover et al, 99)
      • SavvySearch (Dreilinger and Howe, 97)
    Result merging (Metasearch engines)
  • 117.
    • Same corpus
    • Different retrieval models
    • Document scores/positions available
      • Unsupervised techniques
        • CombSUM, CombMNZ (Fox and Shaw, 93, 94)
        • Borda fuse (Aslam and Montague, 01)
      • Supervised techniques
        • Bayes-fuse, weighted Borda fuse (Aslam and Montague, 01)
        • Segment-based fusion (Lillis et al, 06, 08; Shokouhi, 07b)
    Result merging (Data fusion)
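A sketch of the unsupervised fusion rules named above (Fox and Shaw, 93, 94); per-run score normalization is assumed to have happened already:

```python
def comb_sum(runs):
    """CombSUM: sum each document's (normalized) scores across runs.
    runs: list of {doc_id: score} dictionaries, one per ranking."""
    fused = {}
    for run in runs:
        for doc, score in run.items():
            fused[doc] = fused.get(doc, 0.0) + score
    return fused

def comb_mnz(runs):
    """CombMNZ: CombSUM multiplied by the number of runs retrieving the doc."""
    fused = comb_sum(runs)
    hits = {doc: sum(doc in run for run in runs) for doc in fused}
    return {doc: fused[doc] * hits[doc] for doc in fused}
```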
  • 118. Result merging in federated search (diagram: the broker forwards the query to the selected collections and merges their results for the user)
  • 119.
    • CORI (Callan et al, 95)
      • Normalized collection score + normalized document score (see the reconstruction below)
    Result merging
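The formula on this slide did not survive extraction. The commonly cited CORI merging heuristic, in our reconstruction, min-max normalizes the collection score C and the document score D, and combines them as:

```latex
C' = \frac{C - C_{\min}}{C_{\max} - C_{\min}}, \qquad
D' = \frac{D - D_{\min}}{D_{\max} - D_{\min}}, \qquad
D'' = \frac{D' + 0.4 \, D' \, C'}{1.4}
```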
  • 120. Result merging
    • SSL (Si and Callan, 03b)
    (diagram: documents from the selected resources are ranked by the broker)
  • 121. Result merging (figure: linear regression mapping source-specific scores to broker scores)
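A sketch of the SSL regression step (Si and Callan, 03b): documents that occur both in a collection's result list and in the broker's sampled index give (source score, centralized score) training pairs; a per-collection linear mapping is fit and applied to the rest of the list. numpy is assumed:

```python
import numpy as np

def ssl_merge_scores(paired, source_scores):
    """paired: [(source_score, centralized_score)] for overlap documents;
    source_scores: {doc_id: source_score} for the full result list.
    Returns broker-side scores comparable across collections."""
    x, y = zip(*paired)
    slope, intercept = np.polyfit(x, y, deg=1)   # fit y ≈ slope*x + intercept
    return {doc: slope * s + intercept for doc, s in source_scores.items()}
```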
  • 122.
    • Multi-lingual result merging
      • SSL with logistic regression (Si and Callan, 05a; Si et al, 08)
    • Personalized metasearch
      • (Thomas, 08)
    • Merging overlapped collections
      • COSCO (Hernandez and Kambhampati, 05): exact duplicates
      • GHV (Bernstein et al, 06; Shokouhi et al, 07b): exact/near duplicates
    Result merging - Miscellaneous scenarios
  • 123. Slotted vs tiled result presentation (figure: image results on top, in the middle, at the bottom, at top-right, on the left, at bottom-right; 3 verticals × 3 positions × 3 degrees of vertical intent) (Sushmita et al, 10)
  • 124.
    • Designers of aggregated search interfaces should account for the aggregation style
    • For both styles, vertical intent is key for deciding on the position and type of “vertical” results
    • Slotted → accurate estimation of the best position of “vertical” results
    • Tiled → accurate selection of the type of “vertical” results
    Slotted vs tiled
  • 125. Recap – Result presentation
    • Content type: homogeneous (text documents) in federated search vs. heterogeneous in aggregated search
    • Document scores: depends on environment vs. heterogeneous
    • Oracle: centralized index vs. none
  • 126. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 127. Evaluation: how to measure the effectiveness of federated and aggregated search systems.
  • 128.
    • CTF ratio (Callan and Connell, 01)
    • Spearman rank correlation coefficient (SRCC) (Callan and Connell, 01)
    • Kullback-Leibler divergence (KL) (Baillie et al, 06b; Ipeirotis et al, 05), topical KL (Baillie et al, 09)
    • Predictive likelihood (Baillie et al, 06a)
    Resource representation (summaries) evaluation – Federated search
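A sketch of the KL-divergence measure listed above, comparing the term distribution of the sampled summary against the full collection (Baillie et al, 06b); the epsilon smoothing here is our simplification:

```python
import math

def kl_divergence(collection_tf, sample_tf, epsilon=1e-9):
    """KL(collection || sample) over the joint vocabulary; lower means the
    sampled summary better approximates the collection.
    collection_tf / sample_tf: {term: count}."""
    vocab = set(collection_tf) | set(sample_tf)
    n_c = sum(collection_tf.values()) or 1
    n_s = sum(sample_tf.values()) or 1
    kl = 0.0
    for t in vocab:
        p = collection_tf.get(t, 0) / n_c + epsilon   # collection language model
        q = sample_tf.get(t, 0) / n_s + epsilon       # summary language model
        kl += p * math.log(p / q)
    return kl
```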
  • 129. Resource selection evaluation – Federated search
  • 130. Result merging evaluation – Federated search
    • Oracle
      • Correct merging (centralized index ranking) (Hawking and Thistlewaite, 99)
      • Perfect merging (ordered by relevance labels) (Hawking and Thistlewaite, 99)
    • Metrics
      • Precision
      • Correct matches (Chakravarthy and Haase, 95)
  • 131. Vertical Selection Evaluation – Aggregated search
    • Majority of publications focus on single vertical selection
      • vertical accuracy, precision, recall
    • Evaluation data
      • editorial data
      • behavioral data
    single vertical selection
  • 132. Editorial data
    • Guidelines
      • judge relevance based on vertical results (implicit judging of retrieval/content quality)
      • judge relevance based on vertical description (assumes idealized retrieval/content quality)
    • Evaluation metric derived from binary or graded relevance judgments
      • (Arguello etal, 09; Arguello et al, 10)
  • 133. Behavioral data
    • Infer relevance from behavioral data (e.g. click data)
    • Evaluation metric
      • regression error on predicted CTR
      • infer binary or graded relevance
      • (Diaz, 09; Konig et al, 09)
  • 134. Test collections (a la TREC) (Zhou & Lalmas, 10)
    • Statistics on topics: number of topics 150; average relevant docs per topic 110.3; average relevant verticals per topic 1.75; “General Web” topics 29.3%; topics with two vertical intents 66.7%; topics with more than two vertical intents 4.0%
    • Quantity per media: text 2125 GB, 86,186,315 documents; image 41.1 GB, 670,439 documents; video 445.5 GB, 1,253 documents*; total 2611.6 GB, 86,858,007 documents
    • * On average, each video clip (document) contains more than 100 events/shots
  • 135. Test collections (a la TREC): existing test collections (ImageCLEF photo retrieval track, TREC web track, INEX ad-hoc track, TREC blog track, …) act as simulated verticals (Image, General Web, Reference (Encyclopedia), Blog, Shopping, …); each topic carries per-vertical relevance judgments (R/N) over the documents of each vertical
  • 136. Recap – Evaluation
    • Editorial data: document relevance judgments (federated search) vs. query labels (aggregated search)
    • Behavioral data: none vs. critical
  • 137. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 138. Open problems in federated search
    • Beyond big document
      • Classification-based server selection (Arguello et al, 09a)
      • Topic modeling
    • Query expansion
      • Previous techniques had little success (Ogilvie and Callan, 01; Shokouhi et al, 09)
    • Evaluating federated search
      • Confounding factors
    • Federated search in other contexts
      • Blog Search (Elsas et al, 08; Seo and Croft, 08)
    • Effective merging
      • Supervised techniques
  • 139. Open problems in aggregated search
    • Evaluation metrics
      • slotted presentation
      • tiled presentation
      • metrics based on behavioral signals
    • Models for multiple verticals
    • Minimizing the cost for new verticals, markets
  • 140. Outline
    • Introduction and Terminology
    • Architecture
    • Resource Representation
    • Resource Selection
    • Result Presentation
    • Evaluation
    • Open Problems
    • Bibliography
  • 141. Bibliography
      • J. Arguello, F. Diaz, J. Callan, and J.-F. Crespo, Sources of evidence for vertical selection. In SIGIR 2009 (2009).
      • J. Arguello, J. Callan, and F. Diaz. Classification-based resource selection. In Proceedings of the ACM CIKM, Pages 1277--1286, Hong Kong, China, 2009a.
      • J. Arguello, F. Diaz, J.-F. Paiement, Vertical Selection in the Presence of Unlabeled Verticals. In SIGIR 2010 (2010).
      • J. Aslam and M. Montague. Models for metasearch, ACM SIGIR, pp 276--284, New Orleans, LA, 2001.
      • M. Baillie, L. Azzopardi, and F. Crestani. Adaptive query-based sampling of distributed collections, In Proceedings of SPIRE, Pages 316--328, Glasgow, UK, 2006a.
      • M. Baillie, L. Azzopardi, and F. Crestani. Towards better measures: evaluation of estimated resource description quality for distributed IR. In X. Jia, editor, Proceedings of the First International Conference on Scalable Information systems, page 41, Hong Kong, 2006b.
      • M. Baillie, M. Carman, and F. Crestani. A topic-based measure of resource description quality for distributed information retrieval. In Proceedings of ECIR, pages 485--496, Toulouse, France, 2009.
  • 142. Bibliography
      • Z. Bar-Yossef and M. Gurevich. Random sampling from a search engine's index. Proceedings of WWW, pages 367--376, Edinburgh, UK, 2006.
      • S. M. Beitzel, E. C. Jensen, D. D. Lewis, A. Chowdhury, O. and Frieder, Automatic classification of web queries using very large unlabeled query logs. ACM Trans. Inf. Syst. 25, 2 (2007), 9.
      • Y. Bernstein, M. Shokouhi, and J. Zobel. Compact features for detection of near-duplicates in distributed retrieval. Proceedings of SPIRE, Pages 110--121, Glasgow, UK, 2006.
      • J. Callan and M. Connell. Query-based sampling of text databases. ACM Transactions on Information Systems, 19(2):97--130, 2001.
      • J. Callan, Z. Lu, and B. Croft. Searching distributed collections with inference networks. In Proceedings of ACM SIGIR, pages 21--28. Seattle, WA, 1995
      • J. Caverlee, L. Liu, and J. Bae. Distributed query sampling: a quality-conscious approach. In Proceedings of ACM SIGIR, pages 340--347. Seattle, WA, 2006.
      • S. Cetintas, L. Si, and H. Yuan. Learning from past queries for resource selection, ACM CIKM, pp 1867--1870, Hong Kong, China, 2009.
  • 143.
      • B.T. Bartell, G.W. Cottrell, and R.K. Belew. Automatic Combination of Multiple Ranked Retrieval Systems, ACM SIGIR, pp 173-181, 1994.
      • C. Baumgarten. A Probabilistic Solution to the Selection and Fusion Problem in Distributed Information Retrieval, ACM SIGIR, pp 246-253, 1999.
      • N. Craswell. Methods for Distributed Information Retrieval. PhD thesis, Australian National University, 2000.
      • S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Predicting query performance. ACM SIGIR, pp 299–306, 2002.
      • A. Chakravarthy and K. Haase. NetSerf: using semantic knowledge to find internet information archives, ACM SIGIR, pp 4-11, Seattle, WA, 1995.
      • F. Diaz. Performance prediction using spatial autocorrelation. ACM SIGIR, pp. 583–590, 2007.
      • F. Diaz. Integration of news content into web results. ACM International Conference on Web Search and Data Mining, 2009.
      • F. Diaz and J. Arguello. Adaptation of offline vertical selection predictions in the presence of user feedback, ACM SIGIR, 2009.
      • D. Dreilinger and A. Howe. Experiences with selecting search engines using metasearch. ACM Transaction on Information Systems, 15(3):195-222, 1997.
      • J. Elsas, J. Arguello, J. Callan, and J. Carbonell. Retrieval and feedback models for blog feed search, ACM SIGIR, pp 347-354, Singapore, 2008.
    Bibliography
  • 144.
      • E. Glover, S. Lawrence, W. Birmingham, and C. Giles. Architecture of a metasearch engine that supports user information needs, ACM CIKM, pp 210—216,1999.
      • L. Gravano, H. García-Molina, and A. Tomasic. Precision and recall of GlOSS estimators for database discovery. Third International conference on Parallel and Distributed Information Systems, pp 103--106, Austin, TX, 1994a.
      • L. Gravano, H. García-Molina, and A. Tomasic. The effectiveness of GlOSS for the text database discovery problem. ACM SIGMOD, pp 126--137, Minneapolis, MN, 1994b.
      • L. Gravano, C. Chang, H. García-Molina, and A. Paepcke. STARTS: Stanford proposal for internet metasearching, ACM SIGMOD, pp 207--218, Tucson, AZ, 1997.
      • L. Gravano, H. García-Molina, and A. Tomasic. GlOSS: text-source discovery over the internet, ACM Transactions on Database Systems, 24(2):229--264, 1999.
      • E. Fox and J. Shaw. Combination of multiple searches. Second Text REtrieval Conference, pp 243-252, Gaithersburg, MD, 1993.
      • E. Fox and J. Shaw. Combination of multiple searches, Third Text REtrieval Conference, pp 105-108, Gaithersburg, MD, 1994.
      • J. French and A. Powell. Metrics for evaluating database selection techniques, World Wide Web, 3(3):153--163, 2000.
      • C. Hauff. Predicting the Effectiveness of Queries and Retrieval Systems, PhD thesis, University of Twente, 2010.
    Bibliography
  • 145.
      • D. Hawking and P. Thomas. Server selection methods in hybrid portal search, ACM SIGIR, pp 75-82, Salvador, Brazil, 2005.
      • D. Hawking and P. Thistlewaite. Methods for information server selection, ACM Transactions on Information Systems, 17(1):40-76, 1999.
      • T. Hernandez and S. Kambhampati. Improving text collection selection with coverage and overlap statistics. WWW, pp 1128-1129, Chiba, Japan, 2005.
      • P. Ipeirotis and L. Gravano. When one sample is not enough: improving text database selection using shrinkage. ACM SIGMOD, pp 767-778, Paris, France, 2004.
      • P. Ipeirotis and L. Gravano. Distributed search over the hidden web: Hierarchical database sampling and selection. VLDB, pages 394-405, Hong Kong, China, 2002.
      • P. Ipeirotis and L. Gravano. Classification-aware hidden-web text database selection. ACM Transactions on Information Systems, 26(2):1-66, 2008.
      • P. Ipeirotis, A. Ntoulas, J. Cho, and L. Gravano. Modeling and managing content changes in text databases, 21st International Conference on Data Engineering, pp 606-617, Tokyo, Japan, 2005.
      • A. C. König, M. Gamon, and Q. Wu. Click-through prediction for news queries, ACM SIGIR, 2009.
    Bibliography
  • 146.
      • X. Li, Y.-Y. Wang, and A. Acero. Learning query intent from regularized click graphs, ACM SIGIR, pp 339--346, 2008.
      • D. Lillis, F. Toolan, R. Collier, and J. Dunnion. ProbFuse: a probabilistic approach to data fusion, ACM SIGIR, pp 139-146, Seattle, WA, 2006.
      • K. Liu, C. Yu, and W. Meng. Discovering the representative of a search engine. ACM CIKM, pp 652-654, McLean, VA, 2002.
      • N. Liu, J. Yan, W. Fan, Q. Yang, and Z. Chen. Identifying Vertical Search Intention of Query through Social Tagging Propagation, WWW, Madrid, 2009.
      • W. Meng, Z. Wu, C. Yu, and Z. Li. A highly scalable and effective method for metasearch, ACM Transactions on Information Systems, 19(3):310-335, 2001.
      • W. Meng, C. Yu, and K. Liu. Building efficient and effective metasearch engines. ACM Computing Surveys, 34(1):48-89, 2002.
      • V. Murdock, and M. Lalmas. Workshop on aggregated search, SIGIR Forum 42(2): 80-83, 2008.
      • H. Nottelmann and N. Fuhr. Combining CORI and the decision-theoretic approach for advanced resource selection, ECIR, pp 138--153, Sunderland, UK, 2004.
      • P. Ogilvie and J. Callan. The effectiveness of query expansion for distributed information retrieval, ACM CIKM, pp 183--190, Atlanta, GA, 2001.
      • C. Paris, S. Wan and P. Thomas. Focused and aggregated search: a perspective from natural language generation, Journal of Information Retrieval, Special Issue, 2010.
    Bibliography
  • 147.
    • S. Park. Analysis of characteristics and trends of Web queries submitted to NAVER, a major Korean search engine, Library & Information Science Research 31(2): 126-133, 2009.
    • F. Schumacher and R. Eschmeyer. The estimation of fish populations in lakes and ponds, Journal of the Tennessee Academy of Science, 18:228-249, 1943.
    • M. Shokouhi. Central-rank-based collection selection in uncooperative distributed information retrieval, ECIR, pp 160-172, Rome, Italy, 2007a.
    • J. Seo and B. Croft. Blog site search using resource selection, ACM CIKM, pp 1053-1062, Napa Valley, CA, 2008.
    • M. Shokouhi. Segmentation of search engine results for effective data-fusion, ECIR, pp 185-197, Rome, Italy, 2007b.
    • M. Shokouhi and J. Zobel. Robust result merging using sample-based score estimates, ACM Transactions on Information Systems, 27(3): 1-29, 2009.
    • M. Shokouhi and J. Zobel. Federated text retrieval from uncooperative overlapped collections, ACM SIGIR, pp 495-502. Amsterdam, Netherlands, 2007.
    • M. Shokouhi, F. Scholer, and J. Zobel. Sample sizes for query probing in uncooperative distributed information retrieval, Eighth Asia Pacific Web Conference, pp 63--75, Harbin, China, 2006a.
    Bibliography
  • 148.
    • M. Shokouhi, J. Zobel, F. Scholer, and S. Tahaghoghi. Capturing collection size for distributed non-cooperative retrieval, ACM SIGIR, pp 316-323, Seattle, WA, 2006b.
    • M. Shokouhi, J. Zobel, S. Tahaghoghi, and F. Scholer. Using query logs to establish vocabularies in distributed information retrieval, Information Processing and Management, 43(1):169-180, 2007d.
    • M. Shokouhi, P. Thomas, and L. Azzopardi. Effective query expansion for federated search, ACM SIGIR, pp 427-434, Singapore, 2009.
    • L. Si and J. Callan. Unified utility maximization framework for resource selection, ACM CIKM, pages 32-41, Washington, DC, 2004a.
    • L. Si and J. Callan. CLEF2005: multilingual retrieval by combining multiple multilingual ranked lists. Sixth Workshop of the Cross-Language Evaluation Forum, Vienna, Austria, 2005a. http://www.cs.purdue.edu/homes/lsi/publications.htm
    • L. Si, J. Callan, S. Cetintas, and H. Yuan. An effective and efficient results merging strategy for multilingual information retrieval in federated search environments, Information Retrieval, 11(1):1--24, 2008.
    • L. Si and J. Callan. Relevant document distribution estimation method for resource selection, ACM SIGIR, pp 298-305, Toronto, Canada, 2003a.
    • L. Si and J. Callan. Modeling search engine effectiveness for federated search, ACM SIGIR, pp 83-90, Salvador, Brazil, 2005b.
    • L. Si and J. Callan. A semisupervised learning method to merge search engine results, ACM Transactions on Information Systems, 21(4):457-491, 2003b.
    Bibliography
  • 149.
    • A. Sugiura and O. Etzioni. Query routing for web search engines: architectures and experiments, WWW, Pages 417-429, Amsterdam, Netherlands, 2000.
    • S. Sushmita, H. Joho and M. Lalmas. A Task-Based Evaluation of an Aggregated Search Interface, SPIRE, Saariselkä, Finland, 2009.
    • S. Sushmita, H. Joho, M. Lalmas, and R. Villa. Factors Affecting Click-Through Behavior in Aggregated Search Interfaces, ACM CIKM, Toronto, Canada, 2010.
    • S. Sushmita, B. Piwowarski, and M. Lalmas. Dynamics of Genre and Domain Intents, Technical Report, University of Glasgow 2010.
    • S. Sushmita, H. Joho, M. Lalmas and J.M. Jose. Understanding domain “relevance” in web search, WWW 2009 Workshop on Web Search Result Summarization and Presentation, Madrid, Spain, 2009.
    • P. Thomas and D. Hawking. Evaluating sampling methods for uncooperative collections, ACM SIGIR, pp 503-510, Amsterdam, Netherlands, 2007.
    • P. Thomas. Server characterisation and selection for personal metasearch, PhD thesis, Australian National University, 2008.
    • P. Thomas and M. Shokouhi. SUSHI: scoring scaled samples for server selection, ACM SIGIR, pp 419-426, Singapore, Singapore, 2009.
    • A. Trotman, S. Geva, J. Kamps, M. Lalmas and V. Murdock (eds). Current research in focused retrieval and result aggregation, Special Issue in the Journal of Information Retrieval, Springer, 2010.
    Bibliography
  • 150.
    • T. Tsikrika and M. Lalmas. Merging Techniques for Performing Data Fusion on the Web, ACM CIKM, pp 181-189, Atlanta, Georgia, 2001.
    • E. Voorhees, N. Gupta, and B. Johnson-Laird. Learning Collection Fusion Strategies, ACM SIGIR, pp 172-179, 1995.
    • B. Yuwono and D. Lee. WISE: A world wide web resource database system. IEEE Transactions on Knowledge and Data Engineering, 8(4):548--554, 1996.
    • B. Yuwono and D. Lee. Server ranking for distributed text retrieval systems on the internet. Fifth International Conference on Database Systems for Advanced Applications, 6, pp 41-50, Melbourne, Australia, 1997.
    • J. Xu and J. Callan. Effective retrieval with distributed collections, ACM SIGIR, pp 112-120, Melbourne, Australia, 1998.
    • A. Zhou and M. Lalmas. Building a Test Collection for Aggregated Search, Technical Report, University of Glasgow 2010.
    • J. Zobel. Collection selection via lexicon inspection, Australian Document Computing Symposium, pp 74--80, Melbourne, Australia, 1997.
    Bibliography