
Techniques For Deep Query Understanding



M.Tech Seminar

Published in: Education


  1. Techniques for Deep Query Understanding “Beware of the man who knows the answer before he understands the question” Guided By: Dr. Dhaval Patel, Assistant Professor, Department of CSE, IIT Roorkee. Presented By: Abhay Prakash, En. No. – 10211002, CSI, V Year, IIT Roorkee.
  2. (Source: Google) Introduction: Query Understanding  Purpose:  To understand what exactly the user is searching for – his precise intent  To correct mistakes and guide the user toward formulating a precise, intended query Query Refinement Why only this phrase in Bold? (Source: Google) Query Suggestion
  3. Emerging Variety of Queries  Natural Language Queries instead of Keyword Represented Queries  “who is the best classical singer in India” instead of “best classical singer India”  Use of NL queries is increasing (Makoto in [1])  Local Search Queries  “Where can I eat cheesecake right now?”  Context Dependent Queries (Interactive Question Answering) (Source: Bing – location set as US)
  4. Background: How results are generated Query Understanding (which index parameters are to be used) Review: Hotel ABC, Civil Lines: I ate cheesecake, which was really awesome. (4/5 star) High Level Architecture of Search Mechanism (Source: Self Made) INDEX (Knowledge Base) Document Understanding (what and how to index) User Query Results Ranking Entities: Hotel ABC, cheesecake Location: Civil Lines Quality: 0.8 Time: 8:15 PM “where can I eat cheesecake right now?” Data (text documents, user reviews, blogs, tweets, LinkedIn …) [Time: 8:15 PM] Intent: Hotel Search Search for: cheesecake Location: Civil Lines Time: 8:20 PM
  5. Background: QU & Adv. In Search (Weotta in [3]) 1. Basic Search  Direct text-match based retrieval of documents  Restrict the search space using facet values provided by the user  Current day example: online shopping sites Mechanism in Basic Search (Source: Self Made) Example of Facets
  6. Background: QU & Adv. In Search (Weotta in [3]) 2. Advanced Search  Ranking of result documents based on:  TF-IDF to identify more relevant documents  Website authority and popularity  Keyword weighting  Not considered:  Context, NLP for semantic understanding  Location of query, time of query  Example: Google as it was in its early stage
  7. Background: QU & Adv. In Search (Weotta in [3]) 3. Deep Search  What difference does it bring?  Requirements:  Semantic understanding of the query  Knowledge of context and previous tasks  User understanding and personalization
  8. Architecture: Query Understanding Module Query Query Suggestion Query Correction Query Expansion Query Classification Semantic Tagging 2. Query Refinement 3. Query Intent Detection QUERY UNDERSTANDING MODULE ANSWER GENERATION MODULE 1. Query Suggestion Result Components of Query Understanding Module (Source: Self Made)
  9. Architecture: Query Understanding Module Query i) michael jordan berkeley ii) michael jordan NBA Query Suggestion Query Correction Query Expansion i) michael jordan berkeley: academic ii) michael i. jordan berkeley: academic Query Classification Semantic Tagging Example of the purpose of each component (Source: Self Made) michal jrdan  michael jordan  i) michael jordan berkeley ii) michael i. jordan berkeley  i) [michael jordan: PersonName] [berkeley: Location]: academic ii) [michael i. jordan: PersonName] [berkeley: Location]: academic
  10. Query Correction  Reformulates ill-formed (mistaken) search queries  e.g. macine learning  machine learning  Refinements:  Spelling error, two words merged together, one word separated  Phrase segmentation (machine + learning  machine learning)  Acronym expansion (CSE  Computer Science & Engineering)  Refinements may be mutually dependent  “lectures on machne learn”  learn is a correct term, but should have been learning  Hence, different terms need to be addressed simultaneously
  11.  Problem modeled by Jiafeng in [10] as:  Original query x = x1 x2 … xn  Corrected query y = y1 y2 … yn  Get the complete sequence y that has the maximum probability Pr(y | x), given the sequence x  Simple technique:  Assume terms independent; take each yi with max Pr(yi | xi)  Prime disadvantage:  Reality deviates a lot from this assumption  Ex. “Lectures on machine learning” Independent Corrections Query Correction
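The independent-terms baseline can be sketched as follows. The candidate table standing in for Pr(yi | xi) is a toy assumption, not data or a model from [10]:

```python
# Baseline: correct each term independently, ignoring its neighbours.
# CANDIDATES is a toy stand-in for a channel/error model Pr(y_i | x_i)
# that would really be estimated from query logs.
CANDIDATES = {
    "machne": {"machine": 0.7, "machen": 0.2, "machne": 0.1},
    "learn":  {"learn": 0.6, "learning": 0.3, "lean": 0.1},
}

def correct_independent(query):
    """Pick argmax Pr(y_i | x_i) for each term on its own."""
    out = []
    for term in query.split():
        cands = CANDIDATES.get(term, {term: 1.0})
        out.append(max(cands, key=cands.get))
    return " ".join(out)

print(correct_independent("machne learn"))  # "machine learn"
```

Note that "learn" stays uncorrected: with no view of the neighbouring "machine", the per-term argmax cannot prefer "learning", which is exactly the disadvantage the slide points out.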
  12. Query Correction Using Conventional CRF  What is a CRF?  A probabilistic graphical model that models the conditional distribution of an unobserved state sequence  Trained on a given observation sequence  Trained to give Pr(y | sequence x)  Why use a CRF? What is conditioned on what?  The sequence of words matters (learning machine?)  Each yi is conditioned on the other y's as well, along with xi  Corrections are mutually dependent (e.g. machine learning)  Disadvantage:  Requires a very large amount of data; the domain of yi candidates is open Conventional CRF
  13.  Restricting the space of y for the given x  yi conditioned on an operation as well  o = o1 o2 … on, such that oi is the operation required to get yi from xi  oi is an operation like deletion or insertion of characters, etc.  Learning and prediction  Dataset of (x(1), y(1), o(1)), …, (x(N), y(N), o(N))  Features  log Pr(yi−1 | yi), where the probability is calculated from a corpus  Whether yi is obtained from xi after operation oi (binary 0/1) Basic CRF-QR Model Query Correction Basic CRF-QR Model (Jiafeng et al. in [10])
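The benefit of decoding the whole sequence, rather than each term alone, can be illustrated with a toy Viterbi decode over per-term candidates, scoring adjacent pairs with corpus bigram log-probabilities. The candidate lists and probabilities below are invented for illustration and are not the learned CRF-QR features of [10]:

```python
import math

# Toy bigram log-probabilities between adjacent corrected terms,
# standing in for values counted from a real corpus.
BIGRAM_LOGP = {
    ("machine", "learning"): math.log(0.8),
    ("machine", "learn"):    math.log(0.05),
    ("<s>", "machine"):      math.log(0.3),
    ("<s>", "machen"):       math.log(0.01),
}
CANDIDATES = {"machne": ["machine", "machen"], "learn": ["learn", "learning"]}
DEFAULT = math.log(1e-4)  # back-off for unseen pairs

def decode(query):
    """Viterbi over per-term correction candidates."""
    # paths maps each candidate ending a partial path to (score, sequence)
    paths = {"<s>": (0.0, [])}
    for term in query.split():
        nxt = {}
        for cand in CANDIDATES.get(term, [term]):
            best = max(
                (score + BIGRAM_LOGP.get((prev, cand), DEFAULT), seq)
                for prev, (score, seq) in paths.items()
            )
            nxt[cand] = (best[0], best[1] + [cand])
        paths = nxt
    return max(paths.values())[1]

print(decode("machne learn"))  # ['machine', 'learning']
```

Here the correct "learning" wins only because the decoder sees "machine" next to it; the independent baseline on the previous slide cannot make that correction.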
  14.  What is new?  Handles scenarios with more than one refinement  Machine learm  learn  learning  A sequence of (sequences of operations)  oi = oi,1, oi,2, … oi,m i.e. multiple operations on each word  Intermediate results: zi = zi,1 zi,2 … zi,m−1 Extended CRF-QR Query Correction Extended CRF-QR Model (Jiafeng et al. in [10])
  15. Query Suggestion  Purpose:  Suggest similar queries  Query auto-completion  Requirements:  Context consideration [7]  Identifying interleaved tasks [9]  Personalized suggestion [2] Suggestions on “iit r..”
  16. Context aware Query Suggestion (Huanhuan in [7]) Query Suggestion Mechanism (Source: [7]) Query Suggestion  Each query is mapped to a Concept  A concept suffix tree is built from the log  At suggestion time: transition on the tree with each query’s concept  Suggest the top queries of the reached state
  17. Concept Suffix Tree  Concept discovery  Queries clustered using their sets of clicked URLs  Feature vector qi: the j-th component is norm(wij) if edge eij exists, 0 otherwise  Each identified cluster is taken as a Concept  Concept suffix tree  Vertex: state after a transition through a sequence of concepts (of queries)  Transition in a session  C2C3C1: transition Beginning  C1  C3  C2 Click-Through Bipartite Query Suggestion Context aware Query Suggestion (Huanhuan in [7])
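A minimal sketch of the concept-discovery step, with invented click counts: queries whose normalized click vectors have high cosine similarity end up in the same concept cluster.

```python
import math

# Toy click-through bipartite: query -> {clicked URL: click count}.
# A real system mines these edges from search logs as in [7].
CLICKS = {
    "msg":        {"messenger.com": 8, "msg.com": 2},
    "messenger":  {"messenger.com": 9, "fb.com/messenger": 1},
    "madison sg": {"msg.com": 10},
}

def norm_vector(clicks):
    """L2-normalised feature vector over clicked URLs."""
    total = math.sqrt(sum(w * w for w in clicks.values()))
    return {url: w / total for url, w in clicks.items()}

def cosine(u, v):
    return sum(u[k] * v.get(k, 0.0) for k in u)

q1 = norm_vector(CLICKS["msg"])
q2 = norm_vector(CLICKS["messenger"])
print(cosine(q1, q2))                            # high: same concept
print(cosine(q1, norm_vector(CLICKS["madison sg"])))  # lower: different concept
```

Ambiguous strings like "msg" thus land in the concept their clicks actually support, which the suffix tree then uses as the session state.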
  18. Query Suggestion Task aware Query Suggestion (Allan in [9])  Why is task identification important?  Considering Off-Task queries in the context adversely affects the quality of recommendation  30% of sessions contained multiple tasks (Zhen in [8])  5% of sessions have interleaved tasks (Zhen in [8])  Identify similar previous queries as On-Task  consider only On-Task queries as context Effect of On-Task and Off-Task queries
  19. Query Suggestion Task aware Query Suggestion (Allan in [9])  Measures to evaluate similarity between two queries  Lexical Score: captures similarity at the word level directly. Average of:  Jaccard coefficient between trigrams of the two queries: how many common trigrams?  (1 − normalized Levenshtein edit distance), which shows closeness at the character level  Semantic Score: maximum of the following two  s_wikipedia(qi, qj): cosine similarity of vectors of tf-idf scores of Wikipedia documents w.r.t. the two queries  s_wiktionary(qi, qj): similar to the above, on Wiktionary entries  Final Similarity(qi, qj) = α · Lexical Score + (1 − α) · Semantic Score  If Similarity(qi, Reference_q) is greater than a threshold  qi is an On-Task query
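The lexical half of this score can be sketched directly; the semantic score, which needs Wikipedia/Wiktionary tf-idf vectors, is left as a stub parameter here, and the α value is an assumption rather than the tuned value from [9]:

```python
def trigrams(q):
    return {q[i:i + 3] for i in range(len(q) - 2)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lexical_score(qi, qj):
    """Average of trigram Jaccard and normalised edit similarity."""
    jac = jaccard(trigrams(qi), trigrams(qj))
    lev = 1 - edit_distance(qi, qj) / max(len(qi), len(qj))
    return (jac + lev) / 2

def similarity(qi, qj, semantic_score=0.0, alpha=0.8):
    # semantic_score would be max(s_wikipedia, s_wiktionary);
    # supplied externally in this sketch. alpha = 0.8 is assumed.
    return alpha * lexical_score(qi, qj) + (1 - alpha) * semantic_score

print(similarity("machine learning", "machine learner"))
```

Queries scoring above the threshold against the reference query would be kept as On-Task context; the rest are dropped before suggestion.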
  20. Query Suggestion Personalization in Query Suggestion (Milad in [2])  On typing the character ‘i’  “Instagram” more popular for females below 25  “Imdb” more popular for males aged 25–44  Candidate queries generated by a prior general method  Personalization by re-ranking the candidate queries  Features from the earlier global ranking  Original position  Original score  Short History Features  3-gram similarity with just the previous query  Avg. 3-gram similarity with all previous queries in the session
  21. Query Suggestion Personalization in Query Suggestion (Source: [2])  Long History Features  No. of times the candidate query was issued in the past  Avg. 3-gram similarity with all previous queries in the past  Demographic Features  Candidate query frequency over queries by the same age group  Candidate query likelihood -- same age group  Candidate query frequency -- same gender group  Candidate query likelihood -- same gender group  Candidate query frequency -- same region group  Candidate query likelihood -- same region group
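A toy re-ranking pass using just one short-history feature (average 3-gram similarity with the session's earlier queries). [2] learns the feature combination with a supervised ranker; the linear weights, candidate list, and scores below are assumptions for illustration:

```python
def ngram_sim(a, b, n=3):
    """Character n-gram Jaccard similarity between two queries."""
    A = {a[i:i + n] for i in range(len(a) - n + 1)}
    B = {b[i:i + n] for i in range(len(b) - n + 1)}
    return len(A & B) / len(A | B) if A | B else 0.0

def rerank(candidates, session, weights=(1.0, 1.0)):
    """Re-rank auto-completion candidates with a session feature.

    candidates: list of (query, global_score); session: earlier
    queries in the session. weights are toy, hand-set values.
    """
    w_global, w_hist = weights
    def score(item):
        q, g = item
        hist = (sum(ngram_sim(q, p) for p in session) / len(session)
                if session else 0.0)
        return w_global * g + w_hist * hist
    return [q for q, _ in sorted(candidates, key=score, reverse=True)]

cands = [("instagram", 0.9), ("imdb", 0.8)]
print(rerank(cands, session=["imdb top movies"]))  # ['imdb', 'instagram']
print(rerank(cands, session=[]))                   # ['instagram', 'imdb']
```

With no history, the global popularity order survives; a session mentioning "imdb" pulls that candidate to the top, which is the personalization effect the slide describes.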
  22. Query Expansion  Sending more words (which should generate similar results) to tackle term-miss  Ex. “Tutorial lecture on ABC”  “Video Lecture on ABC”  Expansion tasks:  Adding synonyms of words  Morphological variants by stemming  Naïve approach  Exhaustive lookup in a thesaurus  Time-consuming  Still misses terms of similar intent (terms that may even be semantically far)
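The naive approach amounts to a dictionary walk. A sketch with a toy thesaurus and a deliberately crude suffix stemmer (both invented here); note it can only ever reach listed synonyms and morphological variants, never terms like "video" for "tutorial":

```python
# Naive expansion: thesaurus lookup plus crude suffix stripping.
THESAURUS = {"tutorial": ["lecture", "course"], "video": ["clip"]}

def stem(word):
    """Strip a common suffix; crude on purpose (no real morphology)."""
    for suf in ("ing", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def expand(query):
    terms = query.split()
    extra = set()
    for t in terms:
        extra.update(THESAURUS.get(t, []))
        if stem(t) != t:
            extra.add(stem(t))
    return terms + sorted(extra)

print(expand("tutorial lectures"))
```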
  23. Query Expansion Path Constrained Random Walk (Jianfeng in [11])  Exploiting search logs to identify terms having similar end results  Search log data of <Query, Document> clicks  Graph representation  Node Q: seed query  Nodes Q’: queries in the search log  Nodes D: documents  Nodes W: words that occur in queries and documents  Word nodes are the candidate expansion terms  Edges have a scoring function  Represents the probability of transition from the start node to the end node Search Log as Graph
  24. Query Expansion Path Constrained Random Walk (Jianfeng in [11])  Probability of using w as an expansion word?  Product of edge probabilities along each path starting at node Q and ending at w, combined over paths  The top probable words, obtained from the random walk, are picked Search Log as Graph
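The walk over one path type (seed query → clicked document → word) can be sketched as follows, with invented edge probabilities; [11] additionally learns weights over many such path types, which this sketch omits:

```python
# Toy search-log graph for the path type Q -> D (click) -> W (word
# occurs in document). Edge probabilities are illustrative only.
Q_TO_D = {"d1": 0.6, "d2": 0.4}  # seed query -> clicked documents
D_TO_W = {
    "d1": {"tutorial": 0.5, "video": 0.3, "lecture": 0.2},
    "d2": {"video": 0.6, "slides": 0.4},
}

def expansion_scores():
    """Sum over paths of the product of edge probabilities."""
    scores = {}
    for doc, p_qd in Q_TO_D.items():
        for word, p_dw in D_TO_W[doc].items():
            scores[word] = scores.get(word, 0.0) + p_qd * p_dw
    return scores

scores = expansion_scores()
print(sorted(scores, key=scores.get, reverse=True)[:2])  # ['video', 'tutorial']
```

"video" wins because two paths reach it, which is how the walk surfaces expansion terms a thesaurus would never relate to the seed query.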
  25. Query Classification  Classifying a given query into a predefined intent class  Ex. michael jordan berkeley: academic  Precise intent given by the sequence of nodes from root to leaf  More challenging than document classification  Short length  Keyword representation makes it more ambiguous  Ex. query “brazil germany”  Older basic techniques Example Taxonomy (Source: [6])  Considering a single query  statistical techniques like 2-gram/3-gram inference
  26. Query Classification Context aware Query Classification (Huanhuan in [6])  Resolving ambiguity using context  Previous queries ∈ sports, then “Michael Jordan”  sports (basketball player)  Previous queries ∈ academic, then “Michael Jordan”  academic (ML professor)  Use of CRF (because training and prediction are on sequences)  Local Features  Query terms: each qt supports a target category  Pseudo feedback:  qt with concept ct, submitted to an external web directory  How many of the top M results have concept ct?  Implicit feedback:  Instead of the top M results, only the clicked documents are taken
  27. Query Classification Context aware Query Classification (Huanhuan in [6])  Contextual Features  Direct association between adjacent labels  Number of occurrences of adjacent labels <ct−1, ct>  Higher weight  higher probability of transition from ct−1 to ct  Taxonomy-based association between adjacent labels  Given a pair of adjacent labels <ct−1, ct> at level n  n−1 features of taxonomy-based association between ct−1 and ct are considered  e.g. Computer/Software is related to Computer/Hardware, matching at the (n−1)th level (Computer)
  28. Semantic Tagging  Identifies the semantic concepts of a word or phrase  [michael jordan: PersonName] [berkeley: Location]: academic  Useful only if phrases in the documents are also tagged  Shallow parsing methods  Part-of-speech tags: e.g. clubbing consecutive nouns for Named Entity Recognition  Disadvantage: long sentence-level segments can’t be identified
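The shallow-parsing baseline (clubbing consecutive proper nouns into one entity chunk) in a minimal sketch; the POS tags are supplied by hand here, where a real system would run a tagger first:

```python
def club_nouns(tagged):
    """tagged: list of (word, pos) pairs. Club runs of proper
    nouns (NNP) into single named-entity phrases."""
    entities, chunk = [], []
    for word, pos in tagged:
        if pos == "NNP":
            chunk.append(word)
        elif chunk:
            entities.append(" ".join(chunk))
            chunk = []
    if chunk:
        entities.append(" ".join(chunk))
    return entities

tagged = [("michael", "NNP"), ("jordan", "NNP"),
          ("teaches", "VBZ"), ("at", "IN"), ("berkeley", "NNP")]
print(club_nouns(tagged))  # ['michael jordan', 'berkeley']
```

This recovers flat entity spans but, as the slide notes, cannot capture long sentence-level segments or the relations between them, which motivates the hierarchical parsing of the next slides.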
  29. Semantic Tagging  Hierarchical Parsing Structures  A semi-Markov CRF trained on segments  Features  Syntactic features: the parse tree of the sentence
  30. Semantic Tagging  Semantic Dependency Features  Leverage the information about dependencies among different segments  Ex. “show me a funny movie starring Johnny and featuring Caribbean Pirates”  ‘Featuring’ takes the arguments “funny movie” and “Caribbean Pirates”  Long-distance semantic dependency between the object “movie” and the attribute <Plot>
  31. Conclusion & Future Work  End-to-end discussion of the Query Understanding Module tasks  Semantic understanding of queries for intent detection has a lot of scope  Use of NL (grammatically correct) queries is rising  Understanding at the structure level  User community detection for application in Query Suggestion  Based on search behavior  Community/topic-specific temporal trending of search queries
  32. References [1] Makoto P. Kato, Takehiro Yamamoto, Hiroaki Ohshima and Katsumi Tanaka, "Cognitive Search Intents Hidden Behind Queries: A User Study on Query Formulations," in WWW Companion, Seoul, Korea, 2014. [2] Milad Shokouhi, "Learning to Personalize Query Auto-Completion," in SIGIR, Dublin, Ireland, 2013. [3] Weotta, "Deep Search," 10 6 2014. [Online]. [Accessed 6 8 2014]. [4] W. Bruce Croft, Michael Bendersky, Hang Li and Gu Xu, "Query Understanding and Representation," SIGIR Forum, vol. 44, no. 2, pp. 48-53, 2010. [5] Jingjing Liu, Panupong Pasupat, Yining Wang, Scott Cyphers and Jim Glass, "Query Understanding Enhanced by Hierarchical Parsing Structures," in ASRU, 2013. [6] Huanhuan Cao, Derek Hao Hu, Dou Shen and Daxin Jiang, "Context-Aware Query Classification," in SIGIR, Boston, Massachusetts, USA, 2009.
  33. References (Continued…) [7] Huanhuan Cao, Daxin Jiang, Jian Pei, Qi He, Zhen Liao, Enhong Chen and Hang Li, "Context-Aware Query Suggestion by Mining Click-Through and Session Data," in KDD, Las Vegas, Nevada, USA, 2008. [8] Zhen Liao, Yang Song, Li-wei He and Yalou Huang, "Evaluating the Effectiveness of Search Task Trails," in WWW, Lyon, France, 2012. [9] Henry Feild and James Allan, "Task-Aware Query Recommendation," in SIGIR, Dublin, Ireland, 2013. [10] Jiafeng Guo, Gu Xu, Hang Li and Xueqi Cheng, "A Unified and Discriminative Model for Query Refinement," in SIGIR, Singapore, 2008. [11] Jianfeng Gao, Gu Xu and Jinxi Xu, "Query Expansion Using Path-Constrained Random Walks," in SIGIR, Dublin, Ireland, 2013.