Presentation Transcript

  • Comparing Offline and Online Statistics Estimation for Text Retrieval from Overlapped Collections
    MS Thesis Defense: Bhaumik Chokshi
    Committee Members: Prof. Subbarao Kambhampati (Chair), Prof. Yi Chen, Prof. Hasan Davulcu
  • My MS Work
    • Collection Selection : ROSCO
    • Query Processing over Incomplete Autonomous Databases: QPIAD
    • Handling Query Imprecision and Data Incompleteness: QUIC
  • Multi Source Information Retrieval
    • In the multi-source information retrieval problem, searching every information source is not efficient; the retrieval system must choose a single collection or a subset of collections to call to answer a given query.
  • Overlapping Collections
    • Many real world collections have significant overlap.
      • For example, multiple bibliography collections (e.g., ACMDL, IEEE, DBLP etc.) may store some of the same papers and multiple news archives (e.g., New York Times, Washington Post etc.) may store very similar news stories.
    [Figure: Venn diagram of overlapping collections such as CSB, IEEE, ACM, DBLP, and Science]
    • Collection selection over such collections must therefore estimate:
      • How likely it is that a given collection has documents relevant to the query.
      • Whether a collection will provide novel results given the collections already selected.
  • Related Work
    • Most collection selection approaches do not consider overlap
      • Existing systems like CORI and ReDDE try to create a representative for each collection based on term and document frequency information.
      • ReDDE uses collection samples to estimate the relevance of each collection; the same samples can be used to estimate overlap among collections.
    • 16.6% of the documents in runs submitted to the TREC 2004 terabyte track were redundant. [Bernstein and Zobel, 2005]
    • Using coverage and overlap statistics in context of relational data sources. [Nie and Kambhampati, 2004]
      • Overlap among tuples can be identified much more straightforwardly than overlap among text documents.
  • Challenges Involved
    • Need for query specific overlap
      • Two collections may have low overlap as a whole but can have high overlap for a particular set of queries.
    • Overlap assessment offline vs. online
      • Offline approach can store statistics for general keywords and map incoming query to these keywords to obtain relevance and overlap statistics.
      • Online approach can use the samples to estimate relevance and overlap statistics.
    • Efficiently determine true overlap between collections
      • True overlap between collections can be measured by result-to-result comparison of the different collections, which is expensive to do for every query.
  • Context of this work
    • COSCO takes overlap into account while determining collection order.
    • But it does so offline.
      • Samples built for the collections can instead be used to estimate overlap statistics online, which can give a better estimate since it is specific to the query at hand.
    • COSCO estimates overlap using bag similarity over result-set documents.
      • True overlap between collections can be obtained by result-to-result comparison.
    • COSCO was not evaluated on TREC data.
  • Contributions
    • ROSCO, an online approach which estimates overlap statistics from the samples of the collections.
    • Comparison of offline (COSCO) and online (ROSCO) approaches for statistics estimation for text retrieval from overlapping collections.
  • Outline
    • COSCO and ROSCO Architecture
    • ROSCO Approach
    • Empirical Evaluation
    • Other Contributions
    • Conclusion
  • COSCO Architecture
  • ROSCO Architecture
  • Outline
    • COSCO and ROSCO Architecture
    • ROSCO Approach
    • Empirical Evaluation
    • Other Contributions
    • Conclusion
  • ROSCO (Offline Component): Collection representation through query-based sampling
    [Diagram: training queries are issued to each collection (C1, C2, ...); the documents returned form per-collection samples (S1, S2, ...), which are merged into a union of samples. A sketch follows below.]
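A minimal sketch of the query-based sampling step described above. It assumes each collection exposes a keyword search interface (the `search_collection` callable here is hypothetical) and that probe terms are drawn from previously sampled documents; the slide does not specify the term-selection strategy or the number of documents kept per probe.

```python
import random

def query_based_sample(search_collection, seed_terms, num_queries=40, docs_per_query=4):
    """Build a sample of a collection by issuing probe queries and
    keeping the top documents returned for each query."""
    sample = {}                      # doc_id -> document text
    vocabulary = list(seed_terms)
    for _ in range(num_queries):
        term = random.choice(vocabulary)                 # pick a probe term
        for doc_id, text in search_collection(term)[:docs_per_query]:
            if doc_id not in sample:
                sample[doc_id] = text
                # grow the probe vocabulary from the sampled documents
                vocabulary.extend(text.lower().split())
    return sample
```

The per-collection samples S1, S2, ... built this way are then merged into the union of samples used by the online component.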
  • ROSCO (Offline Component): Collection Size Estimation
    [Diagram: the same random queries are issued to each collection C_i and to its sample S_i; the number of documents returned from C_i is compared with the number returned from S_i to produce size estimates. A sketch follows below.]
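The size-estimation step compares, for the same random probe queries, how many documents the collection and its sample return. Below is a minimal sketch assuming a sample-resample style estimate |C_i| ≈ |S_i| · (hits in C_i) / (hits in S_i), averaged over the probe queries; the averaging and the exact estimator are assumptions, since the slide only names the inputs and outputs.

```python
def estimate_collection_size(collection_hits, sample_hits, sample_size):
    """Estimate |C_i| from paired hit counts of random probe queries
    issued to the collection C_i and to its sample S_i.

    collection_hits[j] / sample_hits[j]: number of documents returned
    for probe query j from C_i and from S_i, respectively.
    """
    estimates = []
    for c_hits, s_hits in zip(collection_hits, sample_hits):
        if s_hits > 0:
            # fraction of the sample matching the query, scaled up to the collection
            estimates.append(sample_size * c_hits / s_hits)
    return sum(estimates) / len(estimates) if estimates else None
```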
  • ROSCO (Offline Component): Grainy Hash Vector (GHV)
    [Diagram: each sampled document is hashed into a GHV consisting of n entries of w bits each. A sketch follows below.]
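A minimal sketch of a grainy-hash-vector style fingerprint matching the configuration used later in the experiments (n = 32 entries of w = 2 bits each, 64 bits in total). The word-chunking scheme and the MD5 hash are assumptions; only the overall structure, n small hashes compared position by position with an allowed-mismatch threshold, follows the slides.

```python
import hashlib

def grainy_hash_vector(text, n=32, w=2):
    """Represent a document as n hash entries of w bits each (here 32 x 2 = 64 bits)."""
    words = text.lower().split()
    chunk_size = max(1, len(words) // n)
    ghv = []
    for i in range(n):
        chunk = " ".join(words[i * chunk_size:(i + 1) * chunk_size])
        digest = hashlib.md5(chunk.encode("utf-8")).digest()
        ghv.append(digest[0] % (1 << w))     # keep only w bits of the chunk hash
    return ghv

def is_duplicate(ghv_a, ghv_b, allowed_mismatches=0):
    """Treat two documents as (near-)duplicates if their GHVs disagree
    in at most `allowed_mismatches` positions (0 = exact duplicates)."""
    mismatches = sum(1 for a, b in zip(ghv_a, ghv_b) if a != b)
    return mismatches <= allowed_mismatches
```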
  • ROSCO (Online Component): Assessing Relevance
    [Diagram: the user query is run against the union of samples; using the per-collection samples S1, S2, ... and the size estimates, the top-k relevant documents are determined for each collection. A sketch follows below.]
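A minimal sketch of grouping a union-of-samples ranking back into per-collection top-k lists. The ranking function over the union of samples (e.g., a standard tf-idf ranker) is outside the sketch, and `source_of`, the mapping from a sampled document to its source collection, is hypothetical bookkeeping maintained while sampling.

```python
def top_k_per_collection(ranked_sample_docs, source_of, k=10):
    """Group a relevance-ranked list of sampled documents by source
    collection and keep each collection's k highest-ranked documents.

    ranked_sample_docs: sample doc ids, most relevant first
    source_of[doc_id]:  id of the collection the sampled document came from
    """
    top_k = {}
    for doc_id in ranked_sample_docs:
        bucket = top_k.setdefault(source_of[doc_id], [])
        if len(bucket) < k:
            bucket.append(doc_id)
    return top_k
```

The size estimates come into play in the next step, where sample-level counts are scaled up to full collections.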
  • ROSCO (Online Component): Assessing Overlap and Combining with Relevance
    [Diagram: using the size estimates, the GHVs of each collection's top-k documents, and the GHVs of documents from the collections selected so far, the number of relevant new documents is estimated for each collection; the collection with the maximum number of new relevant documents is selected next. A sketch follows below.]
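A minimal sketch of the selection step, under two assumptions the slide does not spell out: each of a collection's top-k sampled documents is taken to stand in for |C_i| / |S_i| documents of the full collection, and a document counts as "new" only if its GHV does not (near-)duplicate the GHV of any document from the collections already selected.

```python
def select_next_collection(top_k_ghvs, size_estimates, sample_sizes,
                           selected_ghvs, allowed_mismatches=0):
    """Pick the collection expected to contribute the most new relevant documents.

    top_k_ghvs[c]: GHVs of collection c's top-k relevant sampled documents
    selected_ghvs: GHVs of documents from the collections selected so far
    """
    best, best_score = None, -1.0
    for c, ghvs in top_k_ghvs.items():
        new_docs = 0
        for ghv in ghvs:
            # "new" means the GHV does not (near-)duplicate any already-seen GHV
            duplicate = any(
                sum(a != b for a, b in zip(ghv, seen)) <= allowed_mismatches
                for seen in selected_ghvs
            )
            if not duplicate:
                new_docs += 1
        # scale the sample-level count up to the full collection (assumed scaling)
        score = new_docs * size_estimates[c] / sample_sizes[c]
        if score > best_score:
            best, best_score = c, score
    return best
```

Calling this repeatedly, adding the chosen collection's GHVs to `selected_ghvs` after each round, yields the overlap-aware collection order.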
  • Comparison of ROSCO and COSCO
    • COSCO:
      • Offline method for estimating coverage and overlap statistics.
      • Gets an estimate for a query by using statistics for the corresponding frequent item sets; e.g., statistics for “data mining integration” can be obtained by combining statistics for “data mining” and “data integration”.
      • Computing statistics this way can produce estimates that differ substantially from the actual statistics.
    • ROSCO:
      • Online method for estimating coverage and overlap statistics.
      • Gets an estimate by sending the query to the samples, which can give a better estimate for the particular query at hand.
      • The success of this approach depends on the quality of the samples; it can sometimes be hard to obtain a good sample of a collection.
  • Outline
    • ROSCO and COSCO Architecture
    • ROSCO Approach
    • Empirical Evaluation
    • Other Contributions
    • Conclusion
  • Empirical Evaluation
    • Whether ROSCO performs better than approaches that do not consider overlap in an environment of overlapping text collections.
    • Compare ROSCO and COSCO in presence of overlap among collections.
  • Testbed Creation
    • Test Data
      • TREC Genomics data.
      • 50 queries with their relevance judgments.
    • Testbed Creation
      • 200,000 documents are partitioned into 100 disjoint clusters to create topic-specific collections.
      • uniform-50cols:
        • 50 collections.
        • Each of the 200,000 documents is randomly assigned to 10 different collections.
        • Total of 2 million documents.
      • skewed-100cols:
        • 100 collections.
        • Each of the 100 clusters is randomly assigned to 10 different collections.
        • Total of 2 million documents.
        • As each cluster is assigned to multiple collections, topic specific overlap among collections is more prominent in this testbed compared to uniform-50cols.
  • Collection Size and Relevance Statistics
    [Charts for Testbed 1 (uniform-50cols) and Testbed 2 (skewed-100cols)]
  • Collection Overlap Statistics
    [Charts for uniform-50cols and skewed-100cols]
  • Tested Methods
    • COSCO, ReDDE and ROSCO.
    • Greedy Ideal for establishing performance bound
    • Setting up COSCO
      • 40 training queries sent to each collection.
    • Setting up ROSCO and ReDDE
      • Training Queries: 25 queries for each collection.
      • Sample size: 10% of the actual collections.
      • 10 size estimates
      • Duplicate detection: GHVs containing 32 hash entries of 2 bits each (64 bits total).
      • Mismatches allowed: 0, i.e., exact duplicates only.
    • Evaluation
      • Recall after each collection is called (central evaluation and TREC evaluation).
      • Processing time.
  • Greedy Ideal
    • This method attempts to greedily maximize the percentage recall assuming oracular information.
    • It is used to establish a performance bound and serves as the baseline ranking method in the evaluation.
  • Experimental Results (Central Evaluation)
    • 10 queries, distinct from the training queries, are used for evaluation.
    • 5-fold cross validation
    • Evaluation metric: R = (recall obtained using the collection ranking of a given method) / (recall obtained using the ranking of the baseline method)
    • For both testbeds, ROSCO performs better than ReDDE and COSCO by 7-8% in terms of the recall metric R.
  • Experimental Results (TREC Evaluation)
    • For both testbeds ROSCO performs better than ReDDE and COSCO in terms of the recall metric R.
    • As the skewed-100cols testbed is created from topic-specific clusters, ROSCO shows more improvement over the other approaches there than on the uniform-50cols testbed.
  • Experimental Results (Processing Cost)
    • Processing time for ReDDE and ROSCO is higher than for COSCO, but ReDDE and ROSCO call fewer collections for the same amount of recall.
  • Summary of Experimental Results
    • Evaluated ROSCO, ReDDE and COSCO on two different testbeds with overlapping collections.
    • ROSCO shows improvement over ReDDE and COSCO by
      • 7-8% for central evaluations on both testbeds.
      • TREC evaluation: 3-5% on uniform-50cols and 8-10% on skewed-100cols.
    • Processing time for ReDDE and ROSCO is higher than for COSCO, but ReDDE and ROSCO call fewer collections for the same amount of recall.
  • Outline
    • ROSCO and COSCO Architecture
    • ROSCO Approach
    • Empirical Evaluation
    • Other Contributions
    • Conclusion
  • Other Contributions (QPIAD Project): F-Measure based query rewriting for incomplete autonomous web databases
    • Given a query Q: (Body Style = Convt), retrieve all relevant tuples.
    • Database instance:
      Id  Make     Model    Year  Body
      1   Audi     A4       2001  Convt
      2   BMW      Z4       2002  Convt
      3   Porsche  Boxster  2005  Convt
      4   BMW      Z4       2003  NULL
      5   Honda    Civic    2004  NULL
      6   Toyota   Camry    2002  Sedan
      7   Audi     A4       2006  NULL
    • Tuples directly matching Q:
      Id  Make     Model    Year  Body
      1   Audi     A4       2001  Convt
      2   BMW      Z4       2002  Convt
      3   Porsche  Boxster  2005  Convt
    • Select the top-K rewritten queries (using the AFD Model ~> Body Style), re-ordered by estimated precision:
      • Q1': Model=A4
      • Q2': Model=Z4
      • Q3': Model=Boxster
    • Ranked relevant uncertain answers:
      Id  Make  Model  Year  Body  Confidence
      4   BMW   Z4     2003  NULL  0.7
      7   Audi  A4     2006  NULL  0.3
  • Other Contributions (QPIAD Project): F-Measure based query rewriting for incomplete autonomous web databases (continued)
    • Sources may impose resource limitations on the # of queries we can issue.
    • Therefore, we should select only the top-K queries while ensuring the proper balance between precision and recall.
      • Solution: use F-Measure based selection with a configurable α parameter (see the sketch below):
        • α = 1: P = R
        • α < 1: P > R
        • α > 1: P < R
    • JOINS
    • P: Estimated Precision; R: Estimated Recall (based on P and estimated selectivity).
    • Co-author on VLDB 2007 research paper.
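A minimal sketch of F-Measure based selection of rewritten queries. The exact weighted form used in QPIAD is not spelled out on the slide, so the formula F_α = (1 + α)·P·R / (α·P + R) is an assumption chosen to match the α behavior listed above (α < 1 favors precision, α > 1 favors recall, α = 1 weights both equally); `select_top_k_queries` and its tuple layout are likewise illustrative.

```python
def f_measure(precision, recall, alpha=1.0):
    """Weighted F-measure (assumed form): alpha < 1 favors precision,
    alpha > 1 favors recall, alpha = 1 weights both equally."""
    denominator = alpha * precision + recall
    if denominator == 0:
        return 0.0
    return (1 + alpha) * precision * recall / denominator

def select_top_k_queries(rewritten_queries, k, alpha=1.0):
    """rewritten_queries: list of (query, estimated_precision, estimated_recall)."""
    ranked = sorted(rewritten_queries,
                    key=lambda q: f_measure(q[1], q[2], alpha),
                    reverse=True)
    return [query for query, _, _ in ranked[:k]]
```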
  • Other Contributions (QUIC Project): Handling unconstrained attributes in the presence of query imprecision and data incompleteness
    • Given a query Q: Model = Civic, an Accord with Sedan body style may be more relevant than a Civic with Coupe body style.
    • Tuples matching the user query can be ranked based on unconstrained attributes. [Surajit Chaudhuri, Gautam Das, Vagelis Hristidis and Gerhard Weikum, 2004]
    • In the absence of a query log, relevance for unconstrained attributes can be approximated from the database.
    • Evaluation with 10 queries and 13 users: the approach considering unconstrained attributes performs better than the one ignoring them.
    • Co-author on CIDR 2007 demo paper.
  • Outline
    • ROSCO and COSCO Architecture
    • ROSCO Approach
    • Empirical Evaluation
    • Other Contributions
    • Conclusion
  • Conclusion
    • An online method ROSCO for overlap estimation.
    • Comparison of offline and online approaches for text retrieval in an environment composed of overlapping collections.
    • Results of the empirical evaluation show that the online method for overlap estimation performs better than both the offline method and a method that does not consider overlap among collections.
    • Co-author on two other works appearing in CIDR 2007 and VLDB 2007.