Query recommendation papers

Notes
  • Every question is associated with an implicit answer type, so even if we do not know the actual answer, we can anticipate what type the answer will be. An accurate expectation makes it easier to pick out the answer from a sentence that contains the query words.
  • The answer to a question like “What are the tourist attractions in Reims?” could be many different things – a church, historic building, park, statue, famous intersection, etc.
  • An unsupervised method dynamically constructs a probabilistic answer type model for each question. Such a model evaluates whether or not a word fits into the question context; for example, we can find the words that appear in that context in a corpus.
  • Parsed the AQUAINT corpus (3 GB) with Minipar and collected the frequency counts of words appearing in various contexts. Parsing and database construction are done offline, as the database is identical for all questions. Extracted 527,768 contexts that appeared at least 25 times in the corpus. For “Which city hosted the Winter Olympics?”, the question clearly states that the desired answer type is a city, so the context is “X is a city”.
  • The first model assigns the same likelihood to every instance of the candidate word. Since a word can be polysemous, as in the “Washington” example, we introduce candidate contexts. The parameters of the model are then estimated from the context-filler database using appropriate probability distributions.
  • The model is used to filter the contents of the documents retrieved by the IR portion of the question-answering system. Each answer candidate is scored and the list is sorted in descending order of score. The system is then treated as a filter, and we observe how many candidates must pass through it before at least one correct answer is accepted. A model that lets a low percentage of candidates pass while still accepting at least one correct answer is preferable to one that passes many candidates. It is compared against two baselines: an oracle system that uses manual question classification and manual entity tagging, and ANNIE, which performs automatic tagging.
  • Users submit a query according to their underlying information need, and then successively reformulate their queries within a search session until that need is fulfilled.
  • The idea is to integrate the learned search intents of queries into the prior preference vector of the personalized random walk, and to apply the random walk under each search intent separately. Lambda is the teleportation probability; rho is the weight balancing the original query against its intents. Rho is set to less than 1 to smooth the preference vector with the learned intents, which provide rich information about the original query.

Transcript

  • 1. Summary of Query Recommendation papers Ashish Kulkarni
  • 2. A Probabilistic Answer Type Model
  • 3. Introduction
    • What is the capital of Norway?
    • We would expect the answer to be a city and can filter out most of the text in a passage such as:
      • The landed aristocracy was virtually crushed by Hakon V, who reigned from 1299 to 1319 and Oslo became the capital of Norway, replacing Bergen as the principal city of the kingdom.
    • The goal of answer typing is to determine whether a word's semantic type is appropriate as an answer for a question
  • 4. Previous approaches to Answer Typing
    • Used a predefined set of answer types together with supervised learning or manually constructed rules
    • There will always be questions whose answers do not belong to any of the predefined types. For “What are the tourist attractions in Reims?”, the answer could be many different things. A catch-all class can be defined, but it is not as effective as the other answer types.
    • Granularity – if the types are too specific they are difficult to tag. If they are too general, too many candidates might be identified.
  • 5. Proposed approach
    • Unsupervised method to dynamically construct a probabilistic answer type model
    • “What are the tourist attractions in Reims?” We would expect the answers to fit into the context “X is a tourist attraction.” From a corpus we can find words that appeared in this context.
    • Using the frequency counts of these words in the context, we construct a probabilistic model to compute the probability for a word w to occur in a set of contexts T given an occurrence of w – P(in(w, T) | w).
    • Parameters of this model are obtained from an automatically parsed, unlabelled corpus. By asking whether a word would occur in a particular context extracted from a question, we avoid explicitly specifying a list of possible answer types.
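As a rough illustration of this estimation (the counts and candidate words below are assumptions, not the paper's data), the simplest estimate of P(in(w, T) | w) is the fraction of w's corpus occurrences whose contexts fall within the question contexts T:

```python
# Toy sketch (assumed counts, not the paper's implementation): estimate
# P(in(w, T) | w) as the fraction of w's occurrences that fall within the
# question contexts T, using a tiny context-filler frequency database.
from collections import defaultdict

# freq[(word, context)] = times `word` filled `context` in the parsed corpus
freq = defaultdict(int, {
    ("cathedral", "X is a tourist attraction"): 12,
    ("cathedral", "visit X"): 40,
    ("monday", "visit X"): 3,
    ("monday", "X is a day"): 25,
})

word_total = defaultdict(int)
for (word, _), count in freq.items():
    word_total[word] += count

def p_in(word, question_contexts):
    """Maximum-likelihood estimate of P(in(w, T) | w)."""
    in_T = sum(freq[(word, ctx)] for ctx in question_contexts)
    return in_T / word_total[word] if word_total[word] else 0.0

T_q = ["X is a tourist attraction"]
for candidate in ("cathedral", "monday"):
    print(candidate, round(p_in(candidate, T_q), 3))
# "cathedral" fits the expected answer type far better than "monday".
```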
  • 6. Resources used by the model
    • Word clusters – abstracting a given word to a class of related words. Clustering by Committee (CBC) algorithm. A word may belong to multiple clusters.
    • Contexts – the context in which a word appears imposes constraints on the semantic type of the word. Contexts are represented by undirected paths in dependency trees involving the word at the beginning or end, with the word itself replaced by X. A word is said to be a filler of the context if it replaces X.
    • Question contexts are extracted from a question; an answer is a plausible filler of a question context. Two rules – if the wh-word has a trace in the parse tree, the question contexts are the contexts of the trace; if the wh-word is a determiner, the question context is the single context involving the noun modified by the determiner.
    • Candidate contexts are extracted from the parse trees of the candidate answers. An occurrence of “Washington” in “Washington's descendants” versus “suburban Washington” should be scored differently if the question is seeking a location. (A toy context-extraction sketch follows below.)
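The following is a simplified illustration of context extraction, not the paper's pipeline: the paper parses the AQUAINT corpus with Minipar and uses undirected dependency paths, whereas this sketch uses spaCy as a stand-in parser and approximates each context with a single dependency edge, abstracting the filler word to X.

```python
# Simplified sketch (assumptions: spaCy stands in for Minipar, and a context is
# approximated by a single dependency edge rather than an undirected path).
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def contexts(sentence):
    """Yield (filler, context) pairs for nouns, abstracting the filler to X."""
    for tok in nlp(sentence):
        if tok.pos_ in ("NOUN", "PROPN"):
            yield tok.lemma_.lower(), f"X <-{tok.dep_}- {tok.head.lemma_.lower()}"

# Offline step: accumulate frequency counts of (filler, context) pairs.
db = Counter()
for sent in ["Oslo became the capital of Norway.",
             "The cathedral in Reims is a famous tourist attraction."]:
    db.update(contexts(sent))

print(db.most_common(5))
```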
  • 7. Probabilistic model
    • Goal is to evaluate the appropriateness of an answer candidate. This is proportional to the probability that it will be a filler of the question context Tq extracted from the question. P(in(w, Tq)|w).
    • To mitigate data sparseness, variable C for clusters is introduced. It can be shown that the above model splits into two parts – one that models which clusters a word belongs to and the other that models the appropriateness of the cluster to question contexts.
    • The candidate contexts are then introduced, and we compute P(in(w, Tq) | w, in(w, Tw)), where Tw is the set of candidate contexts for w. (A toy computation of the cluster decomposition is sketched below.)
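To make the cluster decomposition concrete, here is a toy numerical sketch; the cluster names and all probabilities are illustrative assumptions, not values from the paper.

```python
# Toy sketch of the decomposition P(in(w, Tq) | w) = sum_C P(C | w) * P(in(Tq) | C).
p_cluster_given_word = {            # which clusters a word belongs to, P(C | w)
    "washington": {"person": 0.4, "location": 0.6},
}
p_in_Tq_given_cluster = {           # how well each cluster fits the question contexts
    "person": 0.05,
    "location": 0.70,
}

def p_in_Tq_given_word(word):
    return sum(p_c * p_in_Tq_given_cluster[c]
               for c, p_c in p_cluster_given_word[word].items())

print(round(p_in_Tq_given_word("washington"), 3))  # 0.4*0.05 + 0.6*0.70 = 0.44
```

Conditioning on the candidate contexts Tw would additionally reweight P(C | w) toward the senses compatible with the surrounding passage, which is why “Washington's descendants” and “suburban Washington” can be scored differently for a location question.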
  • 8. Conclusions
    • Explicit answer types can exhibit poor performance especially for those questions not fitting one of the types.
    • The answer types also need to be redefined when the domain or corpus changes significantly. The probabilistic answer typing model can adapt to different corpora and question answering domains.
    • This model can be combined with other existing answer typing strategies especially in those cases where catch-all classes are used.
  • 9. Exploring the Query-Flow Graph with a Mixture Model for Query Recommendation
  • 10. Introduction
    • Query-flow graph: nodes represent unique queries, and two nodes are connected by a directed edge if the corresponding queries occur consecutively in a search session. A weighting function assigns each edge a weight representing the probability that the two queries q and q' are part of the same chain. (A minimal construction sketch follows after this slide.)
    • Chain is defined as a sequence of queries with similar information need.
    [Diagram: query logs (results, sessions, click-through) → query-flow graph → query recommendations]
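As a minimal sketch of how such a graph could be assembled from session query sequences (an illustration only: here the edge weight is a normalized transition count, whereas the paper learns the chaining probability):

```python
# Minimal sketch: build a query-flow graph from session query sequences.
# Edge weights here are simple normalized transition counts, not learned
# chaining probabilities.
import networkx as nx
from collections import Counter

sessions = [
    ["cheap flights", "cheap flights to oslo", "oslo hotels"],
    ["cheap flights", "flight deals"],
]

edge_counts = Counter()
out_counts = Counter()
for session in sessions:
    for q, q_next in zip(session, session[1:]):
        edge_counts[(q, q_next)] += 1
        out_counts[q] += 1

G = nx.DiGraph()
for (q, q_next), c in edge_counts.items():
    G.add_edge(q, q_next, weight=c / out_counts[q])

print(dict(G["cheap flights"]))  # outgoing edges with their weights
```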
  • 11. Previous approach to query recommendation
    • Traditionally, a personalized random walk over the query-flow graph was used for recommendation.
    • Dangling queries
      • No out links
      • Nearly 9% of all queries
    • Ambiguous queries
      • Mixed recommendation
        • Hard to read
      • Dominant recommendation
        • Cannot satisfy different needs
  • 12. Proposed approach
    • Novel mixture model to interpret the generation of the query-flow graph under multiple hidden intents.
    • Assumptions
      • Queries are generated from some hidden search intents
      • Two queries occur consecutively in a session if they come from the same search intent.
    • Two step process
      • Apply a novel mixture model over query-flow graph to learn the intents of queries
      • Perform an intent-biased random walk on the query-flow graph for recommendations.
  • 13. Mixture model
    • The mixture model is a probabilistic model of how the query-flow graph is generated
    • The model is trained with machine learning techniques to learn the intents of the search queries. (A hedged sketch of one possible formulation follows below.)
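The slides do not spell the model out, so the sketch below is only one plausible formulation consistent with the stated assumptions (each observed edge (q, q') is generated by picking a hidden intent z and drawing both queries from it), fitted with a few EM iterations on toy counts; the paper's actual model and training procedure may differ.

```python
# Hedged sketch of ONE possible mixture formulation (an assumption, not the
# paper's exact model): an edge (q, q') is generated by choosing a hidden
# intent z with probability P(z) and drawing both queries from P(. | z).
import numpy as np

queries = ["jaguar price", "jaguar speed", "used car dealers", "big cats"]
edges = {(0, 2): 5, (0, 1): 1, (1, 3): 4, (3, 1): 2}   # (q, q') -> session count
K, V = 2, len(queries)                                  # hidden intents, queries
rng = np.random.default_rng(0)

pz = np.full(K, 1.0 / K)                  # P(z)
pq_z = rng.dirichlet(np.ones(V), size=K)  # P(q | z), one row per intent

for _ in range(50):
    # E-step: responsibility of each intent for each observed edge
    post = {}
    for (q, q2), c in edges.items():
        r = pz * pq_z[:, q] * pq_z[:, q2]
        post[(q, q2)] = r / r.sum()
    # M-step: re-estimate P(z) and P(q | z) from expected counts
    new_pz, new_pq_z = np.zeros(K), np.zeros((K, V))
    for (q, q2), c in edges.items():
        new_pz += c * post[(q, q2)]
        new_pq_z[:, q] += c * post[(q, q2)]
        new_pq_z[:, q2] += c * post[(q, q2)]
    pz = new_pz / new_pz.sum()
    pq_z = new_pq_z / new_pq_z.sum(axis=1, keepdims=True)

# P(z | q): the learned intent distribution of each query
pzq = pz[:, None] * pq_z
pzq /= pzq.sum(axis=0, keepdims=True)
for i, q in enumerate(queries):
    print(q, np.round(pzq[:, i], 2))
```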
  • 14. Intent-biased random walk
      • The model adds the query intent to the preference vector
      • For dangling queries, the preference vector backs off to the query's intents to recommend related queries.
      • For ambiguous queries, run the intent-biased random walk for every intent of the query and group the recommendations by intent. (See the sketch below.)
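A minimal sketch of the intent-biased walk, following the parameters described in the notes above (lambda as the teleportation probability, rho weighting the original query against its learned intent); the toy transition matrix and intent vectors are assumptions made for illustration.

```python
# Minimal sketch of an intent-biased personalized random walk (assumed
# formulation: restart with probability lambda to a preference vector that
# mixes the original query, weight rho, with its learned intent, weight 1 - rho).
import numpy as np

# Toy column-stochastic transition matrix W over 4 queries (query-flow graph).
W = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.6, 0.0, 0.5, 0.0],
              [0.4, 0.5, 0.0, 1.0],
              [0.0, 0.0, 0.5, 0.0]])

def intent_biased_walk(query_idx, intent_vec, lam=0.2, rho=0.7, iters=100):
    """Random walk with restart toward an intent-smoothed preference vector."""
    e_q = np.zeros(W.shape[0])
    e_q[query_idx] = 1.0
    pref = rho * e_q + (1 - rho) * intent_vec
    r = pref.copy()
    for _ in range(iters):
        r = lam * pref + (1 - lam) * W @ r
    return r

# Ambiguous query: run the walk once per learned intent and group the results.
intents = {"animal": np.array([0.0, 0.8, 0.2, 0.0]),
           "car":    np.array([0.0, 0.1, 0.3, 0.6])}
for name, vec in intents.items():
    print(name, np.round(intent_biased_walk(query_idx=0, intent_vec=vec), 3))

# A dangling query (no out-links) would back off entirely to its intent
# vector for the preference, i.e. rho = 0.
```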
  • 15. Experiment results
    • Data set
      • 3-month query log from a commercial search engine
      • Query stream split into sessions using a 30-minute timeout
      • The biggest connected graph is extracted for experiments, which consisted of 16,980 queries and 51,214 edges.
    • Mixture model with 600 dimensions (hidden intents)
    [Examples shown: dangling query suggestions and ambiguous query suggestions]
  • 16. Toward a deeper understanding of user intent and query expressiveness
  • 17. Introduction
    • Inferring users' intents is a popular research topic
    • Intent is the “need behind a query”
    • What is in the user's mind when they enter a search query?
  • 18. Previous approach to identifying intent
    • Focussed on intent modeling or on clustering intents into categories
    • They assume that human annotators can infer the main intent of each query
    • They depend on human judgement to infer the intent by observing the queries and/or the URLs browsed
    • Need to understand -
      • What constitutes an intent?
      • What factors go into articulating what is in the user's mind as queries?
  • 19. Proposed approach
    • Factors to evaluate the effectiveness of a set of queries to articulate the user intent
      • Complexity of the task user wants to accomplish
      • Number of dimensions already explored
      • Specificity of what user is looking for
  • 20. Empirical study
    • 10 participants performed a number of search tasks
    • Listed 10 things they were interested in searching for (before the session)
    • 55-minute search session
    • Not required to complete each search topic in their list
    • Quantitative data collected through video recording
      • Queries
      • Search results
      • Clicked URLs
      • Landing pages
      • Dwell time etc.
  • 21. Sample search tasks
    • Peruvian literature: intent was to replace some readings from a book with easier and shorter readings from the web. User followed the structure of the book (chapters, topics) while querying.
    • Bugaboo stroller: intent is to sell a used stroller at the right price. The queries did not convey whether the user wanted to sell or buy.
    • San Francisco restaurants: looking for a restaurant the user hasn't been to before. The user wants to explore what is out there and also find something fun to do after dinner.
    • A's spring training: intent is to find the right resource for organizing an A's spring training.
    • Things to do in Vegas: intent is to look for things that others have done that the user has not done already.
  • 22. Analysis
    • Analysis based on answering following questions -
      • What is the complexity of the search task?
      • What is the known component of the intent?
      • What is the unknown component of the intent?
      • How much effort does the searcher have to put in to complete the search task?
      • Is the intent articulated well by the set of queries?
  • 23. Observations
    • Intents have a structure that reflects the user's mental model at the time of their search
    • There can be a conjunction of intents
    • Intent could also be what someone does not want/need
    • Not all queries convey the intent
  • 24. Discussions
    • Depending on whether the intent is general or specific, the effectiveness of the queries can vary. It is not enough to consider a single query in the session to predict the intent.
    • Personalization – result set modification based on the user's current knowledge
    • Diversification of search results is inadequate to compensate for the search engine's lack of knowledge of what is in the user's mind
  • 25. Thank You