
Probabilistic Ranking




  1. Ranking of Database Query Results
     Nitesh Maan, Arujn Saraswat, Nishant Kapoor
  2. Introduction
     • As the name suggests, 'ranking' is the process of ordering a set of values (or data items) based on some parameter that is highly relevant to the user of the ranking process.
     • Ranking and returning the most relevant results of a user's query is a popular paradigm in information retrieval.
  3. Ranking and Databases
     • Not much work has been done on ranking the results of queries in database systems.
     • We have all seen examples of ranked results on the internet. The most common example is internet search engines (like Google): a set of web pages satisfying the user's search criteria is returned, with the most relevant results featured at the top of the list.
  4. • In contrast to the WWW, databases support only a Boolean query model. For example, a selection query on a SQL database returns all tuples that satisfy the conditions specified in the query. Depending on those conditions, two situations may arise:
  5. • Empty answers: when the query is too selective, the answer may be empty.
     • Many answers: when the query is not selective enough, the answer may contain too many tuples.
     • We next consider these two scenarios in detail and look at various mechanisms for producing ranked results in these circumstances.
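Both situations are easy to reproduce. A minimal sketch using Python's sqlite3 and a made-up houses table (the schema and values are hypothetical, purely for illustration):

```python
import sqlite3

# Toy "houses" table (hypothetical schema for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE houses (city TEXT, bedrooms INTEGER, price INTEGER)")
conn.executemany(
    "INSERT INTO houses VALUES (?, ?, ?)",
    [("Seattle", 2, 300), ("Seattle", 3, 420), ("Seattle", 4, 510),
     ("Redmond", 3, 390), ("Redmond", 2, 280), ("Kirkland", 3, 450)],
)

# Too selective: no tuple satisfies every condition -> empty answers.
empty = conn.execute(
    "SELECT * FROM houses WHERE city = 'Seattle' AND bedrooms = 5 AND price < 200"
).fetchall()

# Not selective enough: the whole table qualifies -> many answers.
many = conn.execute("SELECT * FROM houses WHERE price < 600").fetchall()

print(len(empty))  # 0
print(len(many))   # 6
```

Under the Boolean model the first query gives the user nothing to work with and the second gives no ordering among the six results; both motivate a ranking function.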
  6. The Empty Answers Problem
     • The empty answers problem is the consequence of a very selective query in a database system.
     • In this case it would be desirable to return a ranked list of 'approximately' matching tuples without burdening the user with specifying any additional conditions; in other words, an automated approach for ranking and returning approximately matching tuples.
  7. Automated Ranking Functions
     • Automated ranking of query results is the process of taking a user query and mapping it to a Top-K query with a ranking function that depends on the conditions specified in the user query.
     • A ranking function should work well even for large databases and have minimal side effects on query processing.
  8. Automated Ranking Functions for the Empty Answers Problem
     • IDF similarity
     • QF similarity
     • QFIDF similarity
  9. IDF Similarity
     • IDF (inverse document frequency) is an adaptation of a popular IR technique, based on the philosophy that frequently occurring words convey less information about a user's needs than rarely occurring words, and thus should be weighted less.
  10. IDF Similarity: Formal Definition
      • For every value 't' in the domain of attribute 'A', IDF(t) is defined as log(n / F(t)), where
        'n' = number of tuples in the database, and
        F(t) = frequency of tuples in the database where A = t.
      • The similarity between a tuple T and a query Q is the sum of the corresponding per-attribute similarity coefficients over all attributes of T; for a categorical attribute, the coefficient is IDF(t) when the tuple value matches the query value and 0 otherwise.
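The definition above can be sketched directly. A minimal single-attribute example, with made-up data values, assuming the match-or-zero coefficient for categorical attributes described on the slide:

```python
import math
from collections import Counter

# Toy single-attribute table; the values are hypothetical.
rows = ["sedan", "sedan", "sedan", "suv", "convertible"]  # attribute A of 5 tuples
n = len(rows)
freq = Counter(rows)  # F(t) for each value t

def idf(t):
    # IDF(t) = log(n / F(t)): rare values weigh more than common ones.
    return math.log(n / freq[t])

def coeff(t, q):
    # Per-attribute coefficient for categorical data: IDF on a match, else 0.
    return idf(t) if t == q else 0.0

def sim(tuple_vals, query_vals):
    # SIM(T, Q): sum of coefficients over all attributes (one attribute here).
    return sum(coeff(t, q) for t, q in zip(tuple_vals, query_vals))

# A match on the rare value "convertible" (1 of 5 tuples) scores higher
# than a match on the common value "sedan" (3 of 5 tuples).
print(sim(["convertible"], ["convertible"]) > sim(["sedan"], ["sedan"]))  # True
```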
  11. QF Similarity: Leveraging Workloads
      • There may be instances where the relevance of an attribute value is due to factors other than the frequency of its occurrence in the data.
      • QF similarity is based on this very philosophy: the importance of an attribute value is directly related to the frequency of its occurrence in the query strings of the workload.
  12. QFIDF Similarity
      • QF is purely workload-based, i.e., it does not use the data at all. This may be a disadvantage in situations where we have an insufficient or unreliable workload.
      • QFIDF similarity is a remedy in such situations: it combines QF and IDF weights, so that even a value never referenced in the workload gets a small non-zero QF weight.
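The contrast between QF and QFIDF can be sketched as follows. The workload counts and data frequencies are made-up numbers, and the +1 smoothing form used to keep the QF factor non-zero is an assumption; the slide only states that unreferenced values must get a small non-zero weight:

```python
import math
from collections import Counter

# Hypothetical workload: how often each value appears in past query strings.
workload_refs = Counter({"sedan": 8, "suv": 2})  # RQF(q) per value
rqf_max = max(workload_refs.values())

# Hypothetical data frequencies for the same attribute.
n = 100
data_freq = {"sedan": 40, "suv": 10, "convertible": 5}

def qf(q):
    # Pure workload weight: relative query frequency.
    return workload_refs.get(q, 0) / rqf_max

def qfidf(q):
    # Combine a smoothed QF factor with the IDF weight from the data, so a
    # value absent from the workload still gets a small non-zero weight.
    smoothed_qf = (workload_refs.get(q, 0) + 1) / (rqf_max + 1)
    return smoothed_qf * math.log(n / data_freq[q])

print(qf("convertible"))         # 0.0: pure QF ignores values unseen in the workload
print(qfidf("convertible") > 0)  # True: QFIDF still gives them weight
```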
  13. Breaking Ties
      • In the case of the many answers problem, the ranking functions discussed so far may fail to perform, because many tuples may tie for the same similarity score. Such a scenario can arise for the empty answers problem as well.
      • Breaking such ties requires looking beyond the attributes specified in the query, i.e., at the missing (unspecified) attributes.
  14. Many Answers Problem
      • We know by now that the many answers problem in database systems is the consequence of queries that are not selective enough.
      • Such a query produces a large number of tuples that satisfy the conditions specified in it.
      • Let us see how ranking of results is accomplished in such a scenario.
  15. Basic Approach
      • Any ranking function for the many answers problem has to look beyond the attributes specified in the query, since all (or a large number of) tuples satisfy the specified conditions.
      • Determining precisely which unspecified attributes matter is a challenging task. We show an adaptation of Probabilistic Information Retrieval (PIR) ranking methods.
  16. Ranking Function for the Many Answers Problem
      • The ranking function for the many answers problem is developed by adapting PIR models that best capture data dependencies and correlations.
      • The ranking score of a tuple depends on two factors: (a) a global score, and (b) a conditional score.
      • These scores can be computed through workload analysis as well as data analysis.
  17. Ranking Function: Adaptation of PIR Models for Structured Data
      • The basic philosophy of PIR models is that, given a document collection D, the set of relevant documents R, and the set of irrelevant documents (= D − R), any document t in D can be ranked by computing score(t): the probability that t belongs to the relevant set R.
  18. Problem in Adapting This Approach
      • The problem in computing score(t) with a PIR model for databases is that the relevant set R is unknown at query time.
      • The approach is well suited to the IR domain because there R is usually determined through user feedback.
      • Feedback-based estimation of R might be attempted in databases as well, but we propose an automated approach.
  19. The Ranking Formula
      • This is the final ranking formula used to compute the scores by which tuples are ranked.
      • The ranking formula is composed of two factors:
      • Global part of the score: measures the global importance of the unspecified attributes.
      • Conditional part of the score: measures the dependencies between the specified and unspecified attributes.
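The formula itself is not reproduced on the slide. As a sketch only, one plausible instantiation of the two-part structure described above multiplies workload-to-data probability ratios over the tuple's unspecified values, and conditional ratios tying the specified values to each unspecified value; the exact factorization and all probabilities below are assumptions for illustration:

```python
# Sketch of a two-part score for one tuple.
# xs: the tuple's specified attribute values X; ys: its unspecified values Y.
# p_w[y], p_d[y]: probability of y in the workload W / in the data D.
# p_xw[(x, y)], p_xd[(x, y)]: probability of x given y in W / in D.

def score(xs, ys, p_w, p_d, p_xw, p_xd):
    global_part = 1.0
    for y in ys:
        # Global part: how much more important y is in the workload than in the data.
        global_part *= p_w[y] / p_d[y]
    conditional_part = 1.0
    for y in ys:
        for x in xs:
            # Conditional part: dependency between specified x and unspecified y.
            conditional_part *= p_xw[(x, y)] / p_xd[(x, y)]
    return global_part * conditional_part

# Made-up numbers: the unspecified value "waterfront" is much more common in
# the workload (users ask for it) than in the data, so it boosts the score.
s = score(
    xs=["seattle"], ys=["waterfront"],
    p_w={"waterfront": 0.3}, p_d={"waterfront": 0.05},
    p_xw={("seattle", "waterfront"): 0.5},
    p_xd={("seattle", "waterfront"): 0.4},
)
print(s)  # 7.5 = (0.3/0.05) * (0.5/0.4)
```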
  20. Architecture of the Ranking System
      [Figure: detailed architecture of the ranking system]
  21. Implementation
      • Pre-processing: the pre-processing component is composed of the Atomic Probabilities Module and the Index Module.
      • Atomic Probabilities Module: computes the atomic probabilities needed by the ranking function score(t).
      • Index Module: pre-computes the ranked lists needed to make the query processing module efficient.
  22. Implementation (continued)
      • Intermediate layer: the atomic probabilities and the lists computed by the Index Module are stored as database tables in the intermediate layer. All tables in the intermediate layer are indexed on the appropriate attributes for fast access during the later stages.
      • The primary purpose of the intermediate layer is to avoid computing the score from scratch each time a query is received, by storing the pre-computed results of all atomic computations.
  23. The Index Module
      • The Index Module pre-computes ranked lists of tuples for every possible "atomic" query, taking the run-time load off the query processing component and assisting it in returning the Top-K tuples.
      • Taking the association rules and the database as input, a Conditional List and a Global List are created for each distinct value x in the database.
  24. Query Processing Component
      • The list merge algorithm is the key player in the query processing component.
      • Its function is to take the user query, compute scores for all tuples that satisfy the conditions specified in the query, rank the tuples in sorted order of their scores, and return the Top-K tuples.
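The real list merge algorithm walks the pre-computed conditional and global lists rather than scoring every tuple from scratch; as a behavioural sketch only, the end-to-end effect of the component (filter, score, return Top-K) can be written as follows, with `score_fn` standing in for the ranking function:

```python
import heapq

# Behavioural sketch of the query processing component. The real system
# merges pre-computed ranked lists; here every qualifying tuple is scored
# directly. `satisfies` is the Boolean query condition, `score_fn` a stand-in
# for the ranking function, and the rows are made-up data.

def top_k(tuples, satisfies, score_fn, k):
    qualifying = [t for t in tuples if satisfies(t)]    # Boolean filter
    return heapq.nlargest(k, qualifying, key=score_fn)  # rank, keep Top-K

rows = [{"id": 1, "price": 300}, {"id": 2, "price": 250},
        {"id": 3, "price": 800}, {"id": 4, "price": 100}]
result = top_k(rows,
               satisfies=lambda t: t["price"] < 600,  # query condition
               score_fn=lambda t: t["price"],         # stand-in score
               k=2)
print([t["id"] for t in result])  # [1, 2]
```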
  25. Space Requirements
      • Building the conditional and global lists consumes O(mn) bytes of space, where m is the number of attributes and n is the number of tuples of the database table.
      • There may be applications where space is an expensive resource.
      • In such cases, only a subset of the lists may be stored at pre-processing time, but at the expense of an increase in query processing time.
  26. What's Next
      • The ranking function presented here works on single-table databases and does not allow NULL values.
      • A very interesting but challenging extension of this work would be to develop ranking functions that work on multi-table databases and allow NULLs as well as non-text data in database columns.