Beyond Learning to Rank
Additional Machine Learning Approaches to Improve Search Relevancy
Who Am I?
• Chief Data Scientist at Dice.com, under Yuri Bykov
• Key Projects using search:
• Recommender Systems – more jobs like this, more seekers like this (uses custom Solr index)
• Custom Dice Solr MLT handler (real-time recommendations)
• ‘Did you mean?’ functionality
• Title, skills and company type-ahead
• Relevancy improvements in Dice jobs search
Other Projects
• Supply Demand Analysis (see my blog post on this, with an accompanying data visualization)
• “Dice Career” (new Dice mobile app releasing soon on the iOS app store, coming soon to Android)
• Includes salary predictor
• Dice Skills pages – http://www.dice.com/skills
PhD
• PhD candidate at DePaul University, studying natural language processing and machine learning
Relevancy Tuning
Strengths and Limitations of Different Approaches
Relevancy Tuning – Common Approaches
• Gather a golden dataset of relevancy judgements
• Use a team of analysts to evaluate the quality of search results for a set of common user queries
• Mine the search logs, capturing which documents were clicked for each query, and how long the user
spent on the resulting webpage
• Define some metric that measures what you want to optimize your search engine for (MAP, NDCG, etc)
• Tune the parameters of the search engine to improve the performance on this dataset using either:
1. Manual Tuning
2. Grid Search – brute force search over a large list of parameters
3. Machine Learned Ranking (MLR) – train a machine learning model to re-rank the top n search results
• This improves precision – by reducing the number of bad matches returned by the system
Manual Tuning
• Slow, manual task
• Not very objective or scientific, unless validated by computing metrics over a dataset of relevancy judgements
Grid Search
• Brute force search over all the search parameters to find the optimal configuration – naïve, does not
intelligently explore the search space of parameters
• Very slow, needs to run 100’s or 1000’s of searches to test each set of parameters
Machine Learned Ranking
• Uses a machine learning model, trained on a golden dataset, to re-rank the search results
• Machine learning algorithms are too slow to rank all documents in moderate to large indexes
• Typical approach is to take the top N documents returned by the search engine and re-rank them
• What if those top N documents weren’t the most relevant?
• 2 possibilities –
1. Top N documents don’t contain all relevant documents (recall problem)
2. Top N documents contain some irrelevant documents (precision problem)
• MLR systems can be prone to feedback loops where they influence their own training data
Talk Overview
This talk will cover the following 3 topics:
1. Conceptual Search
• Solves the recall problem
2. How to Automatically Optimize the Search Engine Configuration
• Improves the quality of the top search results, prior to re-ranking
3. The problem of feedback loops, and how to prevent them
Part 1: Conceptual Search
Q. What is the Most Common Relevancy Tuning Mistake?
A. Ignoring the importance of RECALL
Relevancy Tuning
• Key performance metrics to measure:
• Precision
• Recall
• F1 Measure - 2*(P*R)/(P+R)
• Precision is easier to improve – correct the mistakes visible in the top search results
• Recall is harder – you need to know which relevant documents don’t come back
• Hard to measure accurately
• Requires knowing all the relevant documents present in the index
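For reference, here is a minimal Python sketch (not from the talk) of how these three metrics are computed for a single query, given the set of retrieved document ids and the set of judged-relevant ids:

```python
def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall and F1 for one query from sets of document ids."""
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 10 results returned, 4 of the 8 judged-relevant documents found
p, r, f1 = precision_recall_f1(range(10), [2, 5, 7, 9, 20, 21, 22, 23])
print(p, r, f1)  # 0.4 0.5 0.444...
```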
What is Conceptual Search?
• A.K.A. Semantic Search
• Two key challenges with keyword matching:
• Polysemy: Words have more than one meaning
• e.g. engineer – mechanical? programmer? automation engineer?
• Synonymy: Many different words have the same meaning
• e.g. QA, quality assurance, tester; VB, Visual Basic, VB.Net
• Other related challenges -
• Typos, Spelling Errors, Idioms
• Conceptual search attempts to solve these problems by learning concepts
Why Conceptual Search?
• We will attempt to improve recall without diminishing precision
• Can match relevant documents containing none of the query terms
• I will use the popular Solr Lucene-based search engine to illustrate how you can
implement conceptual search, but the techniques used can be applied to any
search engine
Concepts
• Conceptual search allows us to retrieve documents by how similar the concepts
in the query are to the concepts in a document
• Concepts represent important high-level ideas in a given domain (e.g. java
technologies, big data jobs, helpdesk support, etc)
• Concepts are automatically learned from documents using machine learning
• Words can belong to multiple concepts, with varying strengths of association with
each concept
Traditional Techniques
• Many algorithms have been used for concept learning, including LSA (Latent
Semantic Analysis), LDA (Latent Dirichlet Allocation) and Word2Vec
• All involve mapping a document to a low dimensional dense vector (an array of
numbers)
• Each element of the vector is a number representing how well the document
represents that concept
• E.g. LSA powers the similar skills found on Dice’s skills pages
• See From Frequency to Meaning: Vector Space Models of Semantics for more
information on traditional vector space models of word meaning
Traditional Techniques Don’t Scale
• LSA/LSI, LDA and related techniques rely on factorization of very large term-
document matrices – very slow and computationally intensive
• Require embedding a machine learning model within the search engine to map
new queries to the concept space (latent or topic space)
• Query performance is very poor – unable to utilize the inverted index as all
documents have the same number of concepts
• What we want is a way to map words not documents to concepts. Then we can
embed this in Solr via synonym filters and custom query parsers
Word2Vec and ‘Word Math’
• Word2Vec was developed by Google around 2013 for learning vector
representations for words, building on earlier work from Rumelhart, Hinton and
Williams in 1986 (see paper below for citation of this work)
• Word2Vec Paper: Efficient Estimation of Word Representations in Vector
Space
• It works by training a machine learning model to predict the words surrounding
a word in a sentence
• Similar words get similar vector representations
• Scales well to very large datasets - no matrix factorization
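As an illustration only, here is a minimal sketch of training such a model with the gensim library; the file name and hyper-parameters are assumptions, not the settings used at Dice (gensim 4 calls the dimensionality vector_size, older releases call it size):

```python
from gensim.models import Word2Vec

# Each document is a list of lower-cased tokens; multi-word phrases such as
# "java_developer" should already be mapped to single tokens at this point.
sentences = [
    ["senior", "java_developer", "spring", "hibernate"],
    ["data_scientist", "python", "machine", "learning"],
    # ... one token list per job posting
]

model = Word2Vec(
    sentences,
    vector_size=100,  # dimensionality of the learned word vectors
    window=5,         # size of the context window around each word
    min_count=5,      # drop very rare terms
    workers=4,
)
model.save("dice_word2vec.model")  # hypothetical output path
```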
“Word Math” Example
• Using basic vector arithmetic, you get some interesting patterns
• This illustrates how it represents relationships between words
• E.g. king – man + woman ≈ queen
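In gensim this “word math” is exposed through most_similar, which adds the positive vectors, subtracts the negative ones and returns the nearest words; the output shown is illustrative:

```python
# king - man + woman ≈ queen, using the model trained above
result = model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
print(result)  # e.g. [('queen', 0.71)]
```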
The algorithm learns to
represent different types of
relationships between
words in vector form
Why Do I Care? This is a Search Meetup…
Why Do I Care? This is a Search Meetup…
• This algorithm can be used to represent documents as vectors of concepts
• We can then use these representations to do conceptual search
• This will surface many relevant documents missed by keyword matching
• This boosts recall
• This technique can also be used to automatically learn synonyms
A Quick Demo
Using our Dice active jobs index, some example common user queries:
• Data Scientist
• Big Data
• Information Retrieval
• C#
• Web Developer
• CTO
• Project Manager
Note: the matching documents shown would NOT have been returned by keyword matching
How?
GitHub- DiceTechJobs/ConceptualSearch:
1. Pre-process documents – parse HTML, strip noise characters, tokenize words (a rough sketch follows this list)
2. Define important keywords for your domain, or use my code to auto-extract top terms
and phrases
3. Train a Word2Vec model on the pre-processed documents
4. Using this model, either:
1. Vectors: Use the raw vectors and embed them in Solr using synonyms + payloads
2. Top N Similar: Or extract the top n similar terms with similarities and embed these as weighted
synonyms using my custom queryboost parser and tokenizer
3. Clusters: Cluster these vectors by similarity, and map terms to clusters in a synonym file
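A rough sketch of step 1, assuming BeautifulSoup for HTML parsing and a simple regex tokenizer; the actual pipeline in DiceTechJobs/ConceptualSearch may differ:

```python
import re
from bs4 import BeautifulSoup

def preprocess(html):
    """Strip markup and noise characters, then tokenize into lower-cased words."""
    text = BeautifulSoup(html, "html.parser").get_text(" ")
    # keep characters that appear in technical terms such as c#, c++ and .net
    text = re.sub(r"[^a-z0-9#+.\s]", " ", text.lower())
    return text.split()

print(preprocess("<p>Senior C# / .NET Developer (WPF, WCF)</p>"))
# ['senior', 'c#', '.net', 'developer', 'wpf', 'wcf']
```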
Define Top Domain Specific Keywords
• If you have a set of documents belonging to a specific domain, it is important
to define the important keywords for your domain:
• Use top few thousand search keywords
• Or use my fast keyword and phrase extraction tool (in GitHub)
• Or use a shingle filter to extract top 1 - 4 word sequences (ngrams) by
document frequency
• Important to map common phrases to single tokens, e.g. data scientist =>
data_scientist, java developer => java_developer (a simple sketch of this mapping follows below)
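One simple way to do that phrase-to-token mapping before training, using a hand-built or auto-extracted phrase list; this is a sketch, not the exact Dice implementation:

```python
PHRASES = {
    "data scientist": "data_scientist",
    "java developer": "java_developer",
    "quality assurance": "quality_assurance",
}

def map_phrases(tokens):
    """Greedily replace known two-word phrases with their single-token form."""
    out, i = [], 0
    while i < len(tokens):
        bigram = " ".join(tokens[i:i + 2])
        if bigram in PHRASES:
            out.append(PHRASES[bigram])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

print(map_phrases(["senior", "java", "developer", "and", "data", "scientist"]))
# ['senior', 'java_developer', 'and', 'data_scientist']
```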
Do It Yourself
• All code for this talk is now publicly available on GitHub:
• https://github.com/DiceTechJobs/SolrPlugins - Solr plugins to work with
conceptual search, and other Dice plugins, such as a custom MLT handler
• https://github.com/DiceTechJobs/SolrConfigExamples - Examples of Solr
configuration file entries to enable conceptual search and other Dice plugins
• https://github.com/DiceTechJobs/ConceptualSearch - Python code to
compute the Word2Vec word vectors, and generate Solr synonym files
Some Solr Tricks to Make this Happen
1. Keyword Extraction: Use the synonym filter to extract key words from your
documents
Some Solr Tricks to Make this Happen
1. Keyword Extraction: Use the synonym filter to extract key words from your
documents
• Alternatively (for Solr) use the SolrTextTagger (CareerBuilder uses this)
• Both the synonym filter and the SolrTextTagger use a finite state transducer
• This gives you a very naïve but also very fast way to extract entities from text
• Uses a greedy algorithm – if there are multiple possible matches, takes the
one with the most tokens
• Recommended approach for fast entity extraction. If you need a more
accurate approach, train a Named Entity Recognition model
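To make the greedy matching concrete, here is a tiny longest-match extractor over a keyword dictionary; the Solr synonym filter and SolrTextTagger do the same thing far more efficiently with a finite state transducer, so treat this purely as an illustration:

```python
KEYWORDS = {("java",), ("java", "developer"), ("machine", "learning"), ("solr",)}
MAX_LEN = max(len(k) for k in KEYWORDS)

def extract_keywords(tokens):
    """Greedy longest-match: at each position take the longest keyword that matches."""
    found, i = [], 0
    while i < len(tokens):
        for length in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            candidate = tuple(tokens[i:i + length])
            if candidate in KEYWORDS:
                found.append(" ".join(candidate))
                i += length
                break
        else:
            i += 1  # no keyword starts here, move on
    return found

print(extract_keywords("senior java developer with solr experience".split()))
# ['java developer', 'solr']
```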
Some Solr Tricks to Make this Happen
1. Keyword Extraction: Use the synonym filter to extract key words from your
documents
2. Synonym Expansion using Payloads:
• Use the synonym filter to expand a keyword to multiple tokens
• Each token has an associated payload – used to adjust relevancy scores at
index or query time
• If we do this at query time, it can be considered query term expansion, using
word vector similarity to determine the boosts of the related terms
Synonym File Examples – Vector Method (1)
• Each keyword maps to a set of tokens via a synonym file
• Example Vector Synonym file entry (5 element vector, usually 100+ elements):
• java developer=>001|0.4 002|0.1 003|0.5 005|.9
• Uses a custom token filter that averages these vectors over the entire
document (see GitHub - DiceTechJobs/SolrPlugins)
• Relatively fast at index time but some additional indexing overhead
• Very slow to query
Synonym File Examples – Top N Method (2)
• Each keyword maps to a set of most similar keywords via a synonym file
• Top N Synonym file entry (top 5):
• java_developer=>java_j2ee_developer|0.907526 java_architect|0.889903
lead_java_developer|0.867594 j2ee_developer|0.864028 java_engineer|0.861407
• Can configure this in Solr at index time with payloads, a payload-aware query parser
and a payload similarity function
• Or you can configure this at query time with a special token filter that converts
payloads into term boosts, along with a special parser (see GitHub -
DiceTechJobs/SolrPlugins)
• Fast at index and query time if N is reasonable (10-30) – a sketch of generating such a file follows below
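A hedged sketch of generating such a Top N synonym file from the trained model with gensim's most_similar; the published scripts in DiceTechJobs/ConceptualSearch do something similar but are more elaborate:

```python
def write_top_n_synonyms(model, keywords, path, top_n=10):
    """For each domain keyword, emit 'term=>similar|weight similar|weight ...'."""
    with open(path, "w", encoding="utf-8") as f:
        for term in keywords:
            if term not in model.wv:
                continue  # skip keywords the model never saw
            similar = model.wv.most_similar(term, topn=top_n)
            expansion = " ".join(f"{word}|{sim:.6f}" for word, sim in similar)
            f.write(f"{term}=>{expansion}\n")

# model is the Word2Vec model trained earlier; keyword list is illustrative
write_top_n_synonyms(model, ["java_developer", "data_scientist"], "topn_synonyms.txt")
```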
Searching over Clustered Terms
• After we have learned word vectors, we can use a clustering algorithm to
cluster terms by their vectors to give clusters of related words
• Each keyword is mapped to its cluster, and matching occurs between clusters
• Can learn several different sizes of cluster, such as 500, 1000, 5000 clusters
• Apply stronger boosts to fields with smaller clusters (e.g. the 5000 cluster field)
using the edismax qf parameter - tighter clusters get more weight
• Code for clustering vectors in GitHub - DiceTechJobs/ConceptualSearch
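A minimal sketch of the clustering step using scikit-learn's KMeans; the cluster count and output format are illustrative (see DiceTechJobs/ConceptualSearch for the actual scripts):

```python
from sklearn.cluster import KMeans

terms = model.wv.index_to_key   # vocabulary of the trained Word2Vec model
vectors = model.wv.vectors      # one row per term, in the same order as terms

kmeans = KMeans(n_clusters=1000, random_state=0).fit(vectors)

# Write a synonym file mapping every term to an artificial cluster token
with open("cluster_synonyms.txt", "w", encoding="utf-8") as f:
    for term, cluster_id in zip(terms, kmeans.labels_):
        f.write(f"{term}=>cluster_{cluster_id}\n")
```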
Synonym File Examples – Clustering Method (3)
• Each keyword in a cluster maps to the same artificial token for that cluster
• Cluster Synonym file entries:
• java=>cluster_171
• java applications=>cluster_171
• java coding=>cluster_171
• java design=>cluster_171
• Doesn’t use payloads so does not require any special plugins
• No noticeable impact on query or indexing performance
Example Clusters Learned from Dice Job Postings
• Note: Labels in bold are manually assigned for interpretability:
• Natural Languages: bi lingual, bilingual, chinese, fluent, french, german, japanese,
korean, lingual, localized, portuguese, russian, spanish, speak, speaker
• Apple Programming Languages: cocoa, swift
• Search Engine Technologies: apache solr, elasticsearch, lucene, lucene solr,
search, search engines, search technologies, solr, solr lucene
• Microsoft .Net Technologies: c# wcf, microsoft c#, microsoft.net, mvc web, wcf web
services, web forms, webforms, windows forms, winforms, wpf wcf
Example Clusters Learned from Dice Job Postings
Attention/Attitude:
attention, attentive, close attention, compromising, conscientious, conscious, customer oriented,
customer service focus, customer service oriented, deliver results, delivering results,
demonstrated commitment, dependability, dependable, detailed oriented, diligence, diligent, do
attitude, ethic, excellent follow, extremely detail oriented, good attention, meticulous, meticulous
attention, organized, orientated, outgoing, outstanding customer service, pay attention,
personality, pleasant, positive attitude, professional appearance, professional attitude, professional
demeanor, punctual, punctuality, self motivated, self motivation, superb, superior, thoroughness
Ideas for Future Work
• Use a machine learning algorithm to learn a sparse representation of the word vectors
• Use word vectors derived from a word-context matrix instead of Word2Vec – gives a sparse
vector representation of each word (e.g. HAL – Hyperspace Analogue to Language)
• Incorporate several different vector space models that focus on different aspects of word
meaning, such as Dependency-Based Word Embeddings (which use grammatical relationships)
and the HAL model mentioned above (which weights terms by their proximity in the local context).
Conceptual Search Summary
• It’s easy to overlook recall when performing relevancy tuning
• Conceptual search improves recall while maintaining high precision by matching documents
on concepts or ideas.
• In reality this involves learning which terms are related to one another
• Word2Vec is a scalable algorithm for learning related words from a set of documents that
gives state-of-the-art results on word analogy tasks
• We can train a Word2Vec model offline, and embed its output into Solr using the in-built
synonym filter and payload functionality, combined with some custom plugins
• Video of my Lucene Revolution 2015 Conceptual Search Talk
Part 2 – How to Automatically Optimize the Search
Engine Configuration
Search Engine Configuration
•Modern search engines contain a lot of parameters that can impact the relevancy of the results
•These tend to fall into 2 main areas:
1. Query Configuration Parameters
•Control how the search engine generates queries and computes relevancy scores
•What boosts to specify per field? What similarity algorithm to use? TF-IDF, BM25, custom?
•Disable length normalization on some fields?
•Boosting by attributes - how to boost by the age or location of the document
2. Index and Query Analysis Configuration
•Controls how documents and queries get tokenized for the purpose of matching
•Use stemming? Synonyms? Stop words? Ngrams?
How Do We Ensure the Optimal Configuration?
•Testing on a golden test set:
•Run the top few hundred or thousand queries and measure IR metrics
•Takes a long time to evaluate each configuration – minutes to hours
•Common Approaches:
•Manual Tuning?
•Slow (human in the loop) and subjective
•Grid Search – try every possible combination of settings?
•Naïve – searches the entire grid, does not adjust its search based on its findings
•Slow test time limits how many configurations we can try
•Can we do better?
Can We Apply Machine Learning?
•We have a labelled dataset of queries with relevancy judgements
•Can we frame this as a supervised learning problem, and apply gradient descent?
•No…
•Popular IR metrics (MAP, NDCG, etc) are non-differentiable
•The set of query configuration and analysis settings cannot be optimized in this way as a
gradient is unavailable
Solution: Use a Black Box Optimization Algorithm
•Use an optimization algorithm to optimize a ‘black box’ function
•Black box function – provide the optimization algorithm with a function that takes a set of
parameters as inputs and computes a score
•The black box algorithm will then try to choose parameter settings that optimize the score
•This can be thought of as a form of reinforcement learning
•These algorithms will intelligently search the space of possible search configurations to arrive at a
solution
Implementation
•The black box function:
•Inputs: the search engine configuration settings
•Output: a score denoting the performance of that search engine configuration on the golden test
set
•Example Optimization Algorithms:
•Co-ordinate ascent
•Genetic algorithm
•Bayesian optimization
•Simulated annealing
Implementation
•The problem
•Optimize the configuration parameters of the search engine powering our content-based recommender
engine
•IR Metric to optimize
•Mean Average Precision at 5 documents, averaged over all recommender queries
•Train Test Split:
•80% training data, 20% test data
•Optimization Library :
•GPyOpt – open source Bayesian optimization library from the University of Sheffield, UK
•Result:
•13% improvement in the MAP @ 5 on the test dataset
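A hedged sketch of wiring such a black-box objective to GPyOpt. run_queries_and_score is a hypothetical helper that applies a candidate configuration to the search engine, runs the golden queries and returns MAP@5; the parameter names and ranges are invented for illustration (GPyOpt minimizes, so the score is negated):

```python
import numpy as np
import GPyOpt

from relevancy_eval import run_queries_and_score  # hypothetical helper, not a real library

# Invented example parameters: field boosts and a BM25 setting
domain = [
    {"name": "title_boost",  "type": "continuous", "domain": (0.0, 10.0)},
    {"name": "skills_boost", "type": "continuous", "domain": (0.0, 10.0)},
    {"name": "bm25_b",       "type": "continuous", "domain": (0.0, 1.0)},
]

def objective(x):
    # GPyOpt passes a 2-D array of candidate configurations; score each row
    names = [d["name"] for d in domain]
    scores = [run_queries_and_score(dict(zip(names, row))) for row in x]
    return -np.array(scores).reshape(-1, 1)  # negate because GPyOpt minimizes

optimizer = GPyOpt.methods.BayesianOptimization(f=objective, domain=domain)
optimizer.run_optimization(max_iter=50)
print(optimizer.x_opt, -optimizer.fx_opt)  # best configuration found and its MAP@5
```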
Ideas for Future Work
•Learn a better ranking function for the search engine:
•Use genetic programming to evolve a ranking function that works better on our dataset
•Use metric learning algorithms (e.g. large margin nearest neighbor)
Part 3: Beware of Feedback Loops
Building a Machine Learning System
1. Users interact with the system to
produce data
2. Machine learning algorithms turn
that data into a model
[Diagram: users interact with the system → produce data → machine learning → model]
Building a Machine Learning System
1. Users interact with the system to
produce data
2. Machine learning algorithms turn
that data into a model
What happens if the model’s
predictions influence the user’s
behavior?
[Diagram: users interact with the system → produce data → machine learning → model]
Feedback Loops
1. Users produce labelled data
2. Machine learning algorithms turn
that data into a model
3. Model changes user behavior,
modifying its own future training
data
[Diagram: users interact with the system → produce data → machine learning → model → model changes user behavior → back to users]
Direct Feedback Loops
•If the predictions of the machine learning system alter user behavior in such a way as to affect its own
training data, then you have a feedback loop
•Because the system is influencing its own training data, this can lead to gradual changes in the
system behavior over time
•This can lead to behavior that is hard to predict before the system is released into production
•Examples – recommender systems, search engines that use machine learned ranking models, route
finding systems, ad-targeting systems
Hidden Feedback Loops
•You can also get complex interactions when you have two separate machine learning models that
influence each other through the real world
•For example, improvements to a search engine might result in more purchases of certain products,
which in turn could influence a recommender system that uses item popularity as one of the features
used to generate recommendations
•Feedback loops, and many other challenges involved in building and maintaining machine learning
systems in the wild are covered in these two excellent papers from Google:
•Machine Learning: The High-Interest Credit Card of Technical Debt
•Hidden Technical Debt in Machine Learning Systems
Preventing Feedback Loops
1. Isolate a subset of data from being influenced by the model, for use in training the system
•E.g. generate a subset of recommendations at random, or by using an unsupervised model
•E.g. leave a small proportion of user searches un-ranked by the MLR model
2. Use a reinforcement learning model instead (such as a multi-armed bandit) - the system will
dynamically adapt to the users’ behavior, balancing exploring different hypotheses with exploiting
what it has learned to produce accurate predictions (a toy sketch follows below)
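For option 2, a toy epsilon-greedy bandit shows the explore/exploit idea in a few lines; this is a generic sketch, not the system described in the talk:

```python
import random

class EpsilonGreedyBandit:
    """Toy epsilon-greedy bandit: explore a random arm with probability epsilon."""
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms   # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))                          # explore
        return max(range(len(self.counts)), key=lambda a: self.values[a])      # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# e.g. each "arm" is a candidate ranking variant; reward = 1 if the user clicked
bandit = EpsilonGreedyBandit(n_arms=3)
arm = bandit.select_arm()
bandit.update(arm, reward=1)
```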
Summary
•To improve relevancy in a search engine, it is important first to gather a golden dataset of relevancy
judgements
•Machine learned ranking is an excellent approach for improving relevancy.
• However, it mainly focuses on improving precision rather than recall, and relies heavily on the quality of
the top search results
•Conceptual search is one approach for improving the recall of the search engine
•Black box optimization algorithms can be used to auto-tune the search engine configuration to
improve relevancy
•Search and recommendation engines that use supervised machine learning models are prone to
feedback loops, which can lead to poor quality predictions
END OF TALK – Questions?
More Related Content

What's hot

Haystack 2018 - Algorithmic Extraction of Keywords Concepts and Vocabularies
Haystack 2018 - Algorithmic Extraction of Keywords Concepts and VocabulariesHaystack 2018 - Algorithmic Extraction of Keywords Concepts and Vocabularies
Haystack 2018 - Algorithmic Extraction of Keywords Concepts and VocabulariesMax Irwin
 
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...Joaquin Delgado PhD.
 
Using a keyword extraction pipeline to understand concepts in future work sec...
Using a keyword extraction pipeline to understand concepts in future work sec...Using a keyword extraction pipeline to understand concepts in future work sec...
Using a keyword extraction pipeline to understand concepts in future work sec...Kai Li
 
Vectors in Search - Towards More Semantic Matching
Vectors in Search - Towards More Semantic MatchingVectors in Search - Towards More Semantic Matching
Vectors in Search - Towards More Semantic MatchingSimon Hughes
 
Searching with vectors
Searching with vectorsSearching with vectors
Searching with vectorsSimon Hughes
 
Personalized Search and Job Recommendations - Simon Hughes, Dice.com
Personalized Search and Job Recommendations - Simon Hughes, Dice.comPersonalized Search and Job Recommendations - Simon Hughes, Dice.com
Personalized Search and Job Recommendations - Simon Hughes, Dice.comLucidworks
 
Lucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine LearningLucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine LearningJoaquin Delgado PhD.
 
Question Answering and Virtual Assistants with Deep Learning
Question Answering and Virtual Assistants with Deep LearningQuestion Answering and Virtual Assistants with Deep Learning
Question Answering and Virtual Assistants with Deep LearningLucidworks
 
South Big Data Hub: Text Data Analysis Panel
South Big Data Hub: Text Data Analysis PanelSouth Big Data Hub: Text Data Analysis Panel
South Big Data Hub: Text Data Analysis PanelTrey Grainger
 
Haystack London - Search Quality Evaluation, Tools and Techniques
Haystack London - Search Quality Evaluation, Tools and Techniques Haystack London - Search Quality Evaluation, Tools and Techniques
Haystack London - Search Quality Evaluation, Tools and Techniques Andrea Gazzarini
 
The Relevance of the Apache Solr Semantic Knowledge Graph
The Relevance of the Apache Solr Semantic Knowledge GraphThe Relevance of the Apache Solr Semantic Knowledge Graph
The Relevance of the Apache Solr Semantic Knowledge GraphTrey Grainger
 
How to Build your Training Set for a Learning To Rank Project - Haystack
How to Build your Training Set for a Learning To Rank Project - HaystackHow to Build your Training Set for a Learning To Rank Project - Haystack
How to Build your Training Set for a Learning To Rank Project - HaystackSease
 
Rated Ranking Evaluator (FOSDEM 2019)
Rated Ranking Evaluator (FOSDEM 2019)Rated Ranking Evaluator (FOSDEM 2019)
Rated Ranking Evaluator (FOSDEM 2019)Andrea Gazzarini
 
Using and learning phrases
Using and learning phrasesUsing and learning phrases
Using and learning phrasesCassandra Jacobs
 
probabilistic ranking
probabilistic rankingprobabilistic ranking
probabilistic rankingFELIX75
 
Topic Modelling: Tutorial on Usage and Applications
Topic Modelling: Tutorial on Usage and ApplicationsTopic Modelling: Tutorial on Usage and Applications
Topic Modelling: Tutorial on Usage and ApplicationsAyush Jain
 
Search Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source ApproachSearch Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source ApproachAlessandro Benedetti
 
The power of community: training a Transformer Language Model on a shoestring
The power of community: training a Transformer Language Model on a shoestringThe power of community: training a Transformer Language Model on a shoestring
The power of community: training a Transformer Language Model on a shoestringSujit Pal
 
Topic extraction using machine learning
Topic extraction using machine learningTopic extraction using machine learning
Topic extraction using machine learningSanjib Basak
 
Combining IR with Relevance Feedback for Concept Location
Combining IR with Relevance Feedback for Concept LocationCombining IR with Relevance Feedback for Concept Location
Combining IR with Relevance Feedback for Concept LocationSonia Haiduc
 

What's hot (20)

Haystack 2018 - Algorithmic Extraction of Keywords Concepts and Vocabularies
Haystack 2018 - Algorithmic Extraction of Keywords Concepts and VocabulariesHaystack 2018 - Algorithmic Extraction of Keywords Concepts and Vocabularies
Haystack 2018 - Algorithmic Extraction of Keywords Concepts and Vocabularies
 
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
RecSys 2015 Tutorial - Scalable Recommender Systems: Where Machine Learning m...
 
Using a keyword extraction pipeline to understand concepts in future work sec...
Using a keyword extraction pipeline to understand concepts in future work sec...Using a keyword extraction pipeline to understand concepts in future work sec...
Using a keyword extraction pipeline to understand concepts in future work sec...
 
Vectors in Search - Towards More Semantic Matching
Vectors in Search - Towards More Semantic MatchingVectors in Search - Towards More Semantic Matching
Vectors in Search - Towards More Semantic Matching
 
Searching with vectors
Searching with vectorsSearching with vectors
Searching with vectors
 
Personalized Search and Job Recommendations - Simon Hughes, Dice.com
Personalized Search and Job Recommendations - Simon Hughes, Dice.comPersonalized Search and Job Recommendations - Simon Hughes, Dice.com
Personalized Search and Job Recommendations - Simon Hughes, Dice.com
 
Lucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine LearningLucene/Solr Revolution 2015: Where Search Meets Machine Learning
Lucene/Solr Revolution 2015: Where Search Meets Machine Learning
 
Question Answering and Virtual Assistants with Deep Learning
Question Answering and Virtual Assistants with Deep LearningQuestion Answering and Virtual Assistants with Deep Learning
Question Answering and Virtual Assistants with Deep Learning
 
South Big Data Hub: Text Data Analysis Panel
South Big Data Hub: Text Data Analysis PanelSouth Big Data Hub: Text Data Analysis Panel
South Big Data Hub: Text Data Analysis Panel
 
Haystack London - Search Quality Evaluation, Tools and Techniques
Haystack London - Search Quality Evaluation, Tools and Techniques Haystack London - Search Quality Evaluation, Tools and Techniques
Haystack London - Search Quality Evaluation, Tools and Techniques
 
The Relevance of the Apache Solr Semantic Knowledge Graph
The Relevance of the Apache Solr Semantic Knowledge GraphThe Relevance of the Apache Solr Semantic Knowledge Graph
The Relevance of the Apache Solr Semantic Knowledge Graph
 
How to Build your Training Set for a Learning To Rank Project - Haystack
How to Build your Training Set for a Learning To Rank Project - HaystackHow to Build your Training Set for a Learning To Rank Project - Haystack
How to Build your Training Set for a Learning To Rank Project - Haystack
 
Rated Ranking Evaluator (FOSDEM 2019)
Rated Ranking Evaluator (FOSDEM 2019)Rated Ranking Evaluator (FOSDEM 2019)
Rated Ranking Evaluator (FOSDEM 2019)
 
Using and learning phrases
Using and learning phrasesUsing and learning phrases
Using and learning phrases
 
probabilistic ranking
probabilistic rankingprobabilistic ranking
probabilistic ranking
 
Topic Modelling: Tutorial on Usage and Applications
Topic Modelling: Tutorial on Usage and ApplicationsTopic Modelling: Tutorial on Usage and Applications
Topic Modelling: Tutorial on Usage and Applications
 
Search Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source ApproachSearch Quality Evaluation to Help Reproducibility: An Open-source Approach
Search Quality Evaluation to Help Reproducibility: An Open-source Approach
 
The power of community: training a Transformer Language Model on a shoestring
The power of community: training a Transformer Language Model on a shoestringThe power of community: training a Transformer Language Model on a shoestring
The power of community: training a Transformer Language Model on a shoestring
 
Topic extraction using machine learning
Topic extraction using machine learningTopic extraction using machine learning
Topic extraction using machine learning
 
Combining IR with Relevance Feedback for Concept Location
Combining IR with Relevance Feedback for Concept LocationCombining IR with Relevance Feedback for Concept Location
Combining IR with Relevance Feedback for Concept Location
 

Similar to Dice.com Bay Area Search - Beyond Learning to Rank Talk

Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.comEnhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.comSimon Hughes
 
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
 RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning... RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...S. Diana Hu
 
Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...
Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...
Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...Lucidworks
 
Improving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language ProcessingImproving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language ProcessingDataWorks Summit
 
Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...
Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...
Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...Lucidworks
 
Webinar: Question Answering and Virtual Assistants with Deep Learning
Webinar: Question Answering and Virtual Assistants with Deep LearningWebinar: Question Answering and Virtual Assistants with Deep Learning
Webinar: Question Answering and Virtual Assistants with Deep LearningLucidworks
 
Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...
Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...
Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...Lucidworks
 
Candidate selection tutorial
Candidate selection tutorialCandidate selection tutorial
Candidate selection tutorialYiqun Liu
 
SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...
SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...
SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...Aman Grover
 
How Oracle Uses CrowdFlower For Sentiment Analysis
How Oracle Uses CrowdFlower For Sentiment AnalysisHow Oracle Uses CrowdFlower For Sentiment Analysis
How Oracle Uses CrowdFlower For Sentiment AnalysisCrowdFlower
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalLionel Briand
 
Relevancy and Search Quality Analysis - Search Technologies
Relevancy and Search Quality Analysis - Search TechnologiesRelevancy and Search Quality Analysis - Search Technologies
Relevancy and Search Quality Analysis - Search Technologiesenterprisesearchmeetup
 
Lucene Bootcamp - 2
Lucene Bootcamp - 2Lucene Bootcamp - 2
Lucene Bootcamp - 2GokulD
 
ModelMine a tool to facilitate mining models from open source repositories pr...
ModelMine a tool to facilitate mining models from open source repositories pr...ModelMine a tool to facilitate mining models from open source repositories pr...
ModelMine a tool to facilitate mining models from open source repositories pr...Sayed Mohsin Reza
 
From SQL to Python - A Beginner's Guide to Making the Switch
From SQL to Python - A Beginner's Guide to Making the SwitchFrom SQL to Python - A Beginner's Guide to Making the Switch
From SQL to Python - A Beginner's Guide to Making the SwitchRachel Berryman
 
Search enabled applications with lucene.net
Search enabled applications with lucene.netSearch enabled applications with lucene.net
Search enabled applications with lucene.netWillem Meints
 
The recommendations system for source code components retrieval
The recommendations system for source code components retrievalThe recommendations system for source code components retrieval
The recommendations system for source code components retrievalAYESHA JAVED
 

Similar to Dice.com Bay Area Search - Beyond Learning to Rank Talk (20)

Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.comEnhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
Enhancing Enterprise Search with Machine Learning - Simon Hughes, Dice.com
 
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
 RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning... RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
RecSys 2015 Tutorial – Scalable Recommender Systems: Where Machine Learning...
 
Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...
Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...
Evolving The Optimal Relevancy Scoring Model at Dice.com: Presented by Simon ...
 
Improving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language ProcessingImproving Search in Workday Products using Natural Language Processing
Improving Search in Workday Products using Natural Language Processing
 
Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...
Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...
Query-time Nonparametric Regression with Temporally Bounded Models - Patrick ...
 
Webinar: Question Answering and Virtual Assistants with Deep Learning
Webinar: Question Answering and Virtual Assistants with Deep LearningWebinar: Question Answering and Virtual Assistants with Deep Learning
Webinar: Question Answering and Virtual Assistants with Deep Learning
 
Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...
Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...
Enriching Solr with Deep Learning for a Question Answering System - Sanket Sh...
 
Final presentation
Final presentationFinal presentation
Final presentation
 
Candidate selection tutorial
Candidate selection tutorialCandidate selection tutorial
Candidate selection tutorial
 
SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...
SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...
SIGIR 2017 - Candidate Selection for Large Scale Personalized Search and Reco...
 
Software Design
Software DesignSoftware Design
Software Design
 
How Oracle Uses CrowdFlower For Sentiment Analysis
How Oracle Uses CrowdFlower For Sentiment AnalysisHow Oracle Uses CrowdFlower For Sentiment Analysis
How Oracle Uses CrowdFlower For Sentiment Analysis
 
Precise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive GoalPrecise and Complete Requirements? An Elusive Goal
Precise and Complete Requirements? An Elusive Goal
 
Relevancy and Search Quality Analysis - Search Technologies
Relevancy and Search Quality Analysis - Search TechnologiesRelevancy and Search Quality Analysis - Search Technologies
Relevancy and Search Quality Analysis - Search Technologies
 
Lucene Bootcamp - 2
Lucene Bootcamp - 2Lucene Bootcamp - 2
Lucene Bootcamp - 2
 
ModelMine a tool to facilitate mining models from open source repositories pr...
ModelMine a tool to facilitate mining models from open source repositories pr...ModelMine a tool to facilitate mining models from open source repositories pr...
ModelMine a tool to facilitate mining models from open source repositories pr...
 
From SQL to Python - A Beginner's Guide to Making the Switch
From SQL to Python - A Beginner's Guide to Making the SwitchFrom SQL to Python - A Beginner's Guide to Making the Switch
From SQL to Python - A Beginner's Guide to Making the Switch
 
Deep learning for NLP
Deep learning for NLPDeep learning for NLP
Deep learning for NLP
 
Search enabled applications with lucene.net
Search enabled applications with lucene.netSearch enabled applications with lucene.net
Search enabled applications with lucene.net
 
The recommendations system for source code components retrieval
The recommendations system for source code components retrievalThe recommendations system for source code components retrieval
The recommendations system for source code components retrieval
 

Recently uploaded

From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubaihf8803863
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Callshivangimorya083
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...dajasot375
 
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Callshivangimorya083
 
RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998YohFuh
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptSonatrach
 
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfMarket Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfRachmat Ramadhan H
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationshipsccctableauusergroup
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfLars Albertsson
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsappssapnasaifi408
 
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service AmravatiVIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service AmravatiSuhani Kapoor
 
Call Girls In Mahipalpur O9654467111 Escorts Service
Call Girls In Mahipalpur O9654467111  Escorts ServiceCall Girls In Mahipalpur O9654467111  Escorts Service
Call Girls In Mahipalpur O9654467111 Escorts ServiceSapana Sha
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...soniya singh
 
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...Suhani Kapoor
 

Recently uploaded (20)

From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
E-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptxE-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptx
 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
 
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip CallDelhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
Delhi Call Girls CP 9711199171 ☎✔👌✔ Whatsapp Hard And Sexy Vip Call
 
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
VIP Call Girls Service Charbagh { Lucknow Call Girls Service 9548273370 } Boo...
 
RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
 
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in  KishangarhDelhi 99530 vip 56974 Genuine Escort Service Call Girls in  Kishangarh
Delhi 99530 vip 56974 Genuine Escort Service Call Girls in Kishangarh
 
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
 
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdfMarket Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
Market Analysis in the 5 Largest Economic Countries in Southeast Asia.pdf
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
 
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service AmravatiVIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
VIP Call Girls in Amravati Aarohi 8250192130 Independent Escort Service Amravati
 
Call Girls In Mahipalpur O9654467111 Escorts Service
Call Girls In Mahipalpur O9654467111  Escorts ServiceCall Girls In Mahipalpur O9654467111  Escorts Service
Call Girls In Mahipalpur O9654467111 Escorts Service
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
 
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
 
Decoding Loan Approval: Predictive Modeling in Action
Decoding Loan Approval: Predictive Modeling in ActionDecoding Loan Approval: Predictive Modeling in Action
Decoding Loan Approval: Predictive Modeling in Action
 

Dice.com Bay Area Search - Beyond Learning to Rank Talk

  • 1. Beyond Learning to Rank Additional Machine Learning Approaches to Improve Search Relevancy
  • 2. 2 • Chief Data Scientist at Dice.com, under Yuri Bykov • Key Projects using search: Who Am I? • Recommender Systems – more jobs like this, more seekers like this (uses custom Solr index) • Custom Dice Solr MLT handler (real-time recommendations) • ‘Did you mean?’ functionality • Title, skills and company type-ahead • Relevancy improvements in Dice jobs search
  • 3. 3 • Supply Demand Analysis (see my blog post on this, with an accompanying data visualization) • “Dice Career” (new Dice mobile app releasing soon on the iOS app store, coming soon to android) • Includes salary predictor • Dice Skills pages – http://www.dice.com/skills Other Projects PhD • PhD candidate at DePaul University, studying natural language processing and machine learning
  • 4.
  • 5.
  • 6.
  • 7. Relevancy Tuning Strengths and Limitations of Different Approaches
  • 8. Relevancy Tuning – Common Approaches • Gather a golden dataset of relevancy judgements • Use a team of analysts to evaluate the quality of search results for a set of common user queries • Mine the search logs, capturing which documents were clicked for each query, and how long the user spent on the resulting webpage • Define some metric that measure what you want to optimize you search engine for (MAP, NDCG, etc) • Tune the parameters of the search engine to improve the performance on this dataset using either: 1. Manual Tuning 2. Grid Search – brute force search over a large list of parameters 3. Machine Learned Ranking (MLR) – train a machine learning model to re-rank the top n search results • This improves precision – by reducing the number of bad matches returned by the system
  • 9. Manual Tuning  Slow, manual task  Not very objective or scientific, unless validated by computing metrics over a dataset of relevancy judgements Grid Search  Brute force search over all the search parameters to find the optimal configuration – naïve, does not intelligently explore the search space of parameters  Very slow, needs to run 100’s or 1000’s of searches to test each set of parameters
  • 10. Machine Learned Ranking • Uses a machine learning model, trained on a golden dataset, to re-rank the search results • Machine learning algorithms are too slow to rank all documents in moderate to large indexes • Typical approach is to take the top N documents returned by the search engine and re-rank them • What if those top N documents weren’t the most relevant? • 2 possibilities – 1. Top N documents don’t contain all relevant documents (recall problem) 2. Top N documents contain some irrelevant documents (precision problem) • MLR systems can be prone to feedback loops where they influence their own training data
  • 11. Talk Overview This talk will cover the following 3 topics: 1. Conceptual Search • Solves the recall problem 2. How to Automatically Optimize the Search Engine Configuration • Improves the quality of the top search results, prior to re-ranking 3. The problem of feedback loops, and how to prevent them
  • 13. 1 3 Q. What is the Most Common Relevancy Tuning Mistake?
  • 14. 1 4 A. Ignoring the importance of RECALL Q. What is the Most Common Relevancy Tuning Mistake?
  • 15. 1 5 Relevancy Tuning • Key performance metrics to measure: • Precision • Recall • F1 Measure - 2*(P*R)/(P+R) • Precision is easier – correct mistakes in the top search results • Recall - need to know which relevant documents don’t come back • Hard to accurately measure • Need to know all the relevant documents present in the index
  • 16. 1 6 What is Conceptual Search? • A.K.A. Semantic Search • Two key challenges with keyword matching: • Polysemy: Words have more than one meaning • e.g. engineer – mechanical? programmer? automation engineer? • Synonymy: Many different words have the same meaning • e.g. QA, quality assurance, tester; VB, Visual Basic, VB.Net • Other related challenges - • Typos, Spelling Errors, Idioms • Conceptual search attempts to solve these problems by learning concepts
  • 17. 1 7 Why Conceptual Search? • We will attempt to improve recall without diminishing precision • Can match relevant documents containing none of the query terms • I will the popular Solr lucene-based search engine to illustrate how you can implement conceptual search, but the techniques used can be applied to any search engine Solr
  • 18. 1 8 Concepts • Conceptual search allows us to retrieve documents by how similar the concepts in the query are to the concepts in a document • Concepts represent important high-level ideas in a given domain (e.g. java technologies, big data jobs, helpdesk support, etc) • Concepts are automatically learned from documents using machine learning • Words can belong to multiple concepts, with varying strengths of association with each concept
  • 19. 1 9 Traditional Techniques • Many algorithms have been used for concept learning, include LSA (Latent Semantic Analysis), LDA (Latent Dirichlet Allocation) and Word2Vec • All involve mapping a document to a low dimensional dense vector (an array of numbers) • Each element of the vector is a number representing how well the document represents that concept • E.g. LSA powers the similar skills found in dice’s skills pages • See From Frequency to Meaning: Vector Space Models of Semantics for more information on traditional vector space models of word meaning
  • 20. 2 0 Traditional Techniques Don’t Scale • LSALSI, LDA and related techniques rely on factorization of very large term- document matrices – very slow and computationally intensive • Require embedding a machine learning model within the search engine to map new queries to the concept space (latent or topic space) • Query performance is very poor – unable to utilize the inverted index as all documents have the same number of concepts • What we want is a way to map words not documents to concepts. Then we can embed this in Solr via synonym filters and custom query parsers
  • 21. 2 1 Word2Vec and ‘Word Math’ • Word2Vec was developed by google around 2013 for learning vector representations for words, building on earlier work from Rumelhart, Hinton and Williams in 1986 (see paper below for citation of this work) • Word2Vec Paper: Efficient Estimation of Word Representations in Vector Space • It works by training a machine learning model to predict the words surrounding a word in a sentence • Similar words get similar vector representations • Scales well to very large datasets - no matrix factorization
  • 22. 2 2 “Word Math” Example • Using basic vector arithmetic, you get some interesting patterns • This illustrates how it represents relationships between words • E.g. man – king + woman = queen
  • 23. 2 3 The algorithm learns to represent different types of relationships between words in vector form
  • 24. 2 4 Why Do I Care? This is a Search Meetup…
  • 25. 2 5 Why Do I Care? This is a Search Meetup… • This algorithm can be used to represent documents as vectors of concepts • We can them use these representations to do conceptual search • This will surface many relevant documents missed by keyword matching • This boosts recall • This technique can also be used to automatically learn synonyms
  • 26. A Quick Demo Using our Dice active jobs index, some example common user queries: • Data Scientist • Big Data • Information Retrieval • C# • Web Developer • CTO • Project Manager Note: the matching documents shown would NOT have been returned by plain keyword matching
  • 27. How? GitHub – DiceTechJobs/ConceptualSearch: 1. Pre-process documents – parse HTML, strip noise characters, tokenize words (see the sketch below) 2. Define important keywords for your domain, or use my code to auto-extract top terms and phrases 3. Train a Word2Vec model on the documents to produce word vectors 4. Using this model, either: 1. Vectors: use the raw vectors and embed them in Solr using synonyms + payloads 2. Top N Similar: extract the top n similar terms with their similarities and embed these as weighted synonyms using my custom queryboost parser and tokenizer 3. Clusters: cluster the vectors by similarity, and map terms to clusters in a synonym file
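  A rough sketch of the pre-processing in step 1 (purely illustrative; the repo may use a proper HTML parser and a richer tokenizer):

    import re

    def preprocess(raw_html):
        # strip html tags, lower-case, and keep only the characters we care about
        text = re.sub(r'<[^>]+>', ' ', raw_html.lower())
        text = re.sub(r'[^a-z0-9#+.\s]', ' ', text)   # keep # + . so c#, c++ and .net survive
        return text.split()

    print(preprocess('<p>Senior Java Developer - Spring, Hibernate</p>'))
    # -> ['senior', 'java', 'developer', 'spring', 'hibernate']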
  • 28. Define Top Domain-Specific Keywords • If you have a set of documents belonging to a specific domain, it is important to define the important keywords for that domain: • Use the top few thousand search keywords • Or use my fast keyword and phrase extraction tool (in GitHub) • Or use a shingle filter to extract the top 1-4 word sequences (ngrams) by document frequency • It is important to map common phrases to single tokens, e.g. data scientist => data_scientist, java developer => java_developer (see the sketch below)
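  One way to learn which phrases to join is gensim's Phrases model; a hedged sketch, where token_lists is assumed to be the tokenized postings from the earlier pre-processing sketch and the thresholds are guesses you would tune on your own corpus:

    from gensim.models.phrases import Phrases

    # token_lists: the pre-tokenized job postings from the earlier pre-processing sketch
    bigrams = Phrases(token_lists, min_count=20, threshold=10.0)
    print(bigrams[['data', 'scientist', 'with', 'java', 'developer', 'experience']])
    # -> ['data_scientist', 'with', 'java_developer', 'experience'] if those pairs are frequent enough
    # applying Phrases again over the bigrammed corpus also captures 3-4 word phrases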
  • 29. Do It Yourself • All code for this talk is now publicly available on GitHub: • https://github.com/DiceTechJobs/SolrPlugins - Solr plugins to work with conceptual search, plus other Dice plugins such as a custom MLT handler • https://github.com/DiceTechJobs/SolrConfigExamples - examples of Solr configuration file entries to enable conceptual search and the other Dice plugins • https://github.com/DiceTechJobs/ConceptualSearch - Python code to compute the Word2Vec word vectors and generate Solr synonym files
  • 30. Some Solr Tricks to Make this Happen 1. Keyword Extraction: use the synonym filter to extract keywords from your documents
  • 31. Some Solr Tricks to Make this Happen 1. Keyword Extraction: use the synonym filter to extract keywords from your documents • Alternatively (for Solr) use the SolrTextTagger (CareerBuilder uses this) • Both the synonym filter and the SolrTextTagger use a finite state transducer • This gives you a very naïve but also very fast way to extract entities from text • Uses a greedy algorithm – if there are multiple possible matches, it takes the one with the most tokens (illustrated below) • Recommended approach for fast entity extraction; if you need a more accurate approach, train a Named Entity Recognition model
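  To see what the greedy longest-match behaviour does, here is a toy Python version (the Solr synonym filter and SolrTextTagger do the equivalent with a finite state transducer, far more efficiently; the keyword set is made up):

    def extract_keywords(tokens, keywords, max_len=4):
        """Greedy longest-match: at each position take the longest known phrase, then jump past it."""
        found, i = [], 0
        while i < len(tokens):
            match_len = 0
            for n in range(min(max_len, len(tokens) - i), 0, -1):   # try the longest span first
                if ' '.join(tokens[i:i + n]) in keywords:
                    found.append(' '.join(tokens[i:i + n]))
                    match_len = n
                    break
            i += match_len or 1
        return found

    keywords = {'java developer', 'java', 'big data'}
    print(extract_keywords('senior java developer with big data experience'.split(), keywords))
    # -> ['java developer', 'big data']  (prefers 'java developer' over the shorter match 'java')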
  • 33. Some Solr Tricks to Make this Happen 1. Keyword Extraction: use the synonym filter to extract keywords from your documents 2. Synonym Expansion using Payloads: • Use the synonym filter to expand a keyword to multiple tokens • Each token has an associated payload – used to adjust relevancy scores at index or query time • If we do this at query time, it can be considered query term expansion, using word vector similarity to determine the boosts of the related terms
  • 35. Synonym File Examples – Vector Method (1) • Each keyword maps to a set of tokens via a synonym file • Example vector synonym file entry (5-element vector; real vectors usually have 100+ elements): • java developer=>001|0.4 002|0.1 003|0.5 005|.9 • Uses a custom token filter that averages these vectors over the entire document (see GitHub - DiceTechJobs/SolrPlugins) • Relatively fast at index time, but adds some indexing overhead • Very slow to query
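  A hedged sketch of how such entries could be generated from a trained model (the 001/002 tokens are simply vector element indices; this is illustrative rather than the repo's actual generator, and real word2vec vectors also contain negative weights):

    from gensim.models import Word2Vec

    model = Word2Vec.load('dice_word2vec.model')             # hypothetical model from the earlier sketch
    domain_keywords = ['java_developer', 'data_scientist']   # your extracted domain terms

    with open('vector_synonyms.txt', 'w') as out:
        for term in domain_keywords:
            if term not in model.wv:
                continue
            vec = model.wv[term]
            # one artificial token per vector element, with the element's weight carried as a payload
            payload_tokens = ' '.join('%03d|%.4f' % (i, w) for i, w in enumerate(vec))
            out.write('%s=>%s\n' % (term.replace('_', ' '), payload_tokens))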
  • 36. Synonym File Examples – Top N Method (2) • Each keyword maps to a set of its most similar keywords via a synonym file • Top N synonym file entry (top 5): • java_developer=>java_j2ee_developer|0.907526 java_architect|0.889903 lead_java_developer|0.867594 j2ee_developer|0.864028 java_engineer|0.861407 • Can be configured in Solr at index time with payloads, a payload-aware query parser and a payload similarity function • Or configured at query time with a special token filter that converts payloads into term boosts, along with a special parser (see GitHub - DiceTechJobs/SolrPlugins) • Fast at index and query time if N is reasonable (10-30)
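  A similar hedged sketch for generating the Top N entries (again illustrative; the real synonym files are produced by the ConceptualSearch repo's scripts, and the keyword list is a placeholder):

    from gensim.models import Word2Vec

    model = Word2Vec.load('dice_word2vec.model')             # hypothetical model from the earlier sketch
    domain_keywords = ['java_developer', 'data_scientist']   # your extracted domain terms

    with open('topn_synonyms.txt', 'w') as out:
        for term in domain_keywords:
            if term not in model.wv:
                continue
            similar = model.wv.most_similar(term, topn=10)   # (related term, cosine similarity) pairs
            expansion = ' '.join('%s|%.6f' % (t, sim) for t, sim in similar)
            out.write('%s=>%s|1.0 %s\n' % (term, term, expansion))   # keep the original term at full weight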
  • 37. Searching over Clustered Terms • After we have learned word vectors, we can use a clustering algorithm to group terms by their vectors into clusters of related words • Each keyword is mapped to its cluster, and matching occurs between clusters • Can learn several different sizes of cluster, such as 500, 1000 and 5000 clusters • Apply stronger boosts to fields with smaller clusters (e.g. the 5000-cluster field) using the edismax qf parameter – tighter clusters get more weight • Code for clustering vectors is in GitHub - DiceTechJobs/ConceptualSearch
  • 38. Synonym File Examples – Clustering Method (3) • Each keyword in a cluster maps to the same artificial token for that cluster • Cluster synonym file entries: • java=>cluster_171 • java applications=>cluster_171 • java coding=>cluster_171 • java design=>cluster_171 • Doesn't use payloads, so it does not require any special plugins • No noticeable impact on query or indexing performance
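  A hedged sketch of producing such cluster files with scikit-learn's KMeans over the learned vectors (the keyword list and cluster count are placeholders; in practice you would cluster thousands of extracted domain terms):

    import numpy as np
    from sklearn.cluster import KMeans
    from gensim.models import Word2Vec

    model = Word2Vec.load('dice_word2vec.model')                       # hypothetical model from earlier
    domain_keywords = ['java', 'java applications', 'java coding', 'solr', 'lucene']
    terms = [t.replace(' ', '_') for t in domain_keywords if t.replace(' ', '_') in model.wv]

    vectors = np.array([model.wv[t] for t in terms])
    k = min(500, len(terms))                                           # repeat with 1000, 5000, ... clusters
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(vectors)

    with open('cluster_synonyms_%d.txt' % k, 'w') as out:
        for term, label in zip(terms, labels):
            out.write('%s=>cluster_%d\n' % (term.replace('_', ' '), label))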
  • 39. Example Clusters Learned from Dice Job Postings • Note: the labels (shown in bold on the slide) are manually assigned for interpretability: • Natural Languages: bi lingual, bilingual, chinese, fluent, french, german, japanese, korean, lingual, localized, portuguese, russian, spanish, speak, speaker • Apple Programming Languages: cocoa, swift • Search Engine Technologies: apache solr, elasticsearch, lucene, lucene solr, search, search engines, search technologies, solr, solr lucene • Microsoft .Net Technologies: c# wcf, microsoft c#, microsoft.net, mvc web, wcf web services, web forms, webforms, windows forms, winforms, wpf wcf
  • 40. Example Clusters Learned from Dice Job Postings • Attention/Attitude: attention, attentive, close attention, compromising, conscientious, conscious, customer oriented, customer service focus, customer service oriented, deliver results, delivering results, demonstrated commitment, dependability, dependable, detailed oriented, diligence, diligent, do attitude, ethic, excellent follow, extremely detail oriented, good attention, meticulous, meticulous attention, organized, orientated, outgoing, outstanding customer service, pay attention, personality, pleasant, positive attitude, professional appearance, professional attitude, professional demeanor, punctual, punctuality, self motivated, self motivation, superb, superior, thoroughness
  • 41. Ideas for Future Work • Use a machine learning algorithm to learn a sparse representation of the word vectors • Use word vectors derived from a word-context matrix instead of Word2Vec – gives a sparse vector representation of each word (e.g. HAL – Hyperspace Analogue to Language) • Incorporate several different vector space models that focus on different aspects of word meaning, such as Dependency-Based Word Embeddings (uses grammatical relationships) and the HAL model mentioned above (weights terms by their proximity in the local context)
  • 42. Conceptual Search Summary • It's easy to overlook recall when performing relevancy tuning • Conceptual search improves recall while maintaining high precision by matching documents on concepts or ideas • In practice this involves learning which terms are related to one another • Word2Vec is a scalable algorithm for learning related words from a set of documents that gives state-of-the-art results on word analogy tasks • We can train a Word2Vec model offline and embed its output into Solr using the built-in synonym filter and payload functionality, combined with some custom plugins • Video of my Lucene Revolution 2015 Conceptual Search Talk
  • 43. Part 2 – How to Automatically Optimize the Search Engine Configuration
  • 44. Search Engine Configuration • Modern search engines contain a lot of parameters that can impact the relevancy of the results • These tend to fall into 2 main areas: 1. Query Configuration Parameters • Control how the search engine generates queries and computes relevancy scores • What boosts to specify per field? Which similarity algorithm to use – TF-IDF, BM25, custom? • Disable length normalization on some fields? • Boosting by attributes – how to boost by the age or location of the document 2. Index and Query Analysis Configuration • Controls how documents and queries get tokenized for the purpose of matching • Use stemming? Synonyms? Stop words? Ngrams?
  • 45. How Do We Ensure the Optimal Configuration? • Testing on a golden test set: • Run the top few hundred or thousand queries and measure IR metrics • Takes a long time to evaluate each configuration – minutes to hours • Common Approaches: • Manual Tuning? • Slow (human in the loop) and subjective • Grid Search – try every possible combination of settings? • Naïve – searches the entire grid, does not adjust its search based on its findings • Slow test time limits how many configurations we can try • Can we do better?
  • 46. Can We Apply Machine Learning? • We have a labelled dataset of queries with relevancy judgements • Can we frame this as a supervised learning problem, and apply gradient descent? • No… • Popular IR metrics (MAP, NDCG, etc.) are non-differentiable • The set of query configuration and analysis settings cannot be optimized in this way, as no gradient is available
  • 47. Solution: Use a Black Box Optimization Algorithm • Use an optimization algorithm to optimize a 'black box' function • Black box function – we provide the optimization algorithm with a function that takes a set of parameters as inputs and computes a score • The black box algorithm will then try to choose parameter settings that optimize the score • This can be thought of as a form of reinforcement learning • These algorithms intelligently search the space of possible search configurations to arrive at a solution
  • 48. Implementation • The black box function: • Inputs: the search engine configuration settings • Output: a score denoting the performance of that search engine configuration on the golden test set (see the sketch below) • Example optimization algorithms: • Co-ordinate ascent • Genetic algorithm • Bayesian optimization • Simulated annealing
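  A sketch of what such a black box function could look like for a Solr-backed system (the URL, field names, the choice of boosts as parameters, and the golden-set format are all made up for illustration):

    import requests

    SOLR_URL = 'http://localhost:8983/solr/jobs/select'     # hypothetical Solr core

    def score_config(title_boost, skills_boost, golden_set):
        """Run every golden-set query with the given field boosts and return MAP@5."""
        ap_total = 0.0
        for query, relevant_ids in golden_set:              # golden_set: [(query, set of relevant doc ids), ...]
            params = {'q': query, 'defType': 'edismax', 'rows': 5, 'fl': 'id', 'wt': 'json',
                      'qf': 'title^%.2f skills^%.2f' % (title_boost, skills_boost)}
            docs = requests.get(SOLR_URL, params=params).json()['response']['docs']
            hits, precisions = 0, []
            for rank, doc in enumerate(docs, start=1):
                if doc['id'] in relevant_ids:
                    hits += 1
                    precisions.append(hits / rank)           # precision at this rank
            ap_total += sum(precisions) / min(len(relevant_ids), 5) if relevant_ids else 0.0
        return ap_total / len(golden_set)                    # mean average precision over all queries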
  • 49. Implementation • The problem: • Optimize the configuration parameters of the search engine powering our content-based recommender engine • IR metric to optimize: • Mean Average Precision at 5 documents (MAP@5), averaged over all recommender queries • Train/test split: • 80% training data, 20% test data • Optimization library: • GPyOpt – an open source Bayesian optimization library from the University of Sheffield, UK (see the sketch below) • Result: • 13% improvement in MAP@5 on the test dataset
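  And a minimal sketch of wiring the previous scoring function into GPyOpt (parameter names and ranges are invented; GPyOpt minimizes, so the score is negated, and the held-out test split would be scored separately afterwards):

    import GPyOpt

    domain = [{'name': 'title_boost',  'type': 'continuous', 'domain': (0.0, 10.0)},
              {'name': 'skills_boost', 'type': 'continuous', 'domain': (0.0, 10.0)}]

    def objective(x):
        # GPyOpt passes a 2D array with one row per configuration to evaluate
        title_boost, skills_boost = x[0]
        # score_config and train_golden_set come from the previous sketch / your training split
        return -score_config(title_boost, skills_boost, train_golden_set)

    opt = GPyOpt.methods.BayesianOptimization(f=objective, domain=domain)
    opt.run_optimization(max_iter=50)
    print('best boosts:', opt.x_opt, 'best training MAP@5:', -opt.fx_opt)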
  • 50. Ideas for Future Work • Learn a better ranking function for the search engine: • Use genetic programming to evolve a ranking function that works better on our dataset • Use metric learning algorithms (e.g. large margin nearest neighbors)
  • 51. Part 3: Beware of Feedback Loops
  • 52. Building a Machine Learning System 1. Users interact with the system to produce data 2. Machine learning algorithms turn that data into a model [Diagram: Users interact with the system → Data → Machine Learning → Model]
  • 53. Building a Machine Learning System 1. Users interact with the system to produce data 2. Machine learning algorithms turn that data into a model What happens if the model's predictions influence the user's behavior? [Diagram: Users interact with the system → Data → Machine Learning → Model]
  • 54. Feedback Loops 1. Users produce labelled data 2. Machine learning algorithms turn that data into a model 3. The model changes user behavior, modifying its own future training data [Diagram: Users interact with the system → Data → Machine Learning → Model → changes user behavior → back to Users]
  • 55. Direct Feedback Loops • If the predictions of the machine learning system alter user behavior in such a way as to affect its own training data, then you have a feedback loop • Because the system is influencing its own training data, this can lead to gradual changes in the system's behavior over time • This can lead to behavior that is hard to predict before the system is released into production • Examples – recommender systems, search engines that use machine learned ranking models, route finding systems, ad-targeting systems
  • 56. Hidden Feedback Loops • You can also get complex interactions when you have two separate machine learning models that influence each other through the real world • For example, improvements to a search engine might result in more purchases of certain products, which in turn could influence a recommender system that uses item popularity as one of the features for generating recommendations • Feedback loops, and many other challenges involved in building and maintaining machine learning systems in the wild, are covered in these two excellent papers from Google: • Machine Learning: The High-Interest Credit Card of Technical Debt • Hidden Technical Debt in Machine Learning Systems
  • 57. Preventing Feedback Loops 1. Isolate a subset of data from being influenced by the model, for use in training the system • e.g. generate a subset of recommendations at random, or by using an unsupervised model • e.g. leave a small proportion of user searches un-ranked by the MLR model 2. Use a reinforcement learning model instead (such as a multi-armed bandit, sketched below) – the system will dynamically adapt to the users' behavior, balancing exploring different hypotheses with exploiting what it has learned to produce accurate predictions
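  As a toy illustration of option 2, a simple epsilon-greedy bandit choosing between two rankers (a deliberately minimal sketch of the explore/exploit trade-off, not a production design; the ranker names are hypothetical):

    import random

    class EpsilonGreedyRanker:
        """Mostly exploit the best-performing ranker so far, but explore a random one some of the time."""
        def __init__(self, ranker_names, epsilon=0.1):
            self.epsilon = epsilon
            self.clicks = {name: 0 for name in ranker_names}
            self.trials = {name: 0 for name in ranker_names}

        def choose(self):
            if random.random() < self.epsilon:                       # explore
                return random.choice(list(self.clicks))
            return max(self.clicks, key=lambda n:                    # exploit: best click-through rate so far
                       self.clicks[n] / self.trials[n] if self.trials[n] else 0.0)

        def record(self, name, clicked):
            self.trials[name] += 1
            self.clicks[name] += int(clicked)

    bandit = EpsilonGreedyRanker(['mlr_model', 'plain_bm25'])
    arm = bandit.choose()              # rank this search with the chosen ranker
    bandit.record(arm, clicked=True)   # ...then log whether the user clicked a result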
  • 58. Summary • To improve relevancy in a search engine, it is important first to gather a golden dataset of relevancy judgements • Machine learned ranking is an excellent approach for improving relevancy • However, it mainly focuses on improving precision, not recall, and relies heavily on the quality of the top search results • Conceptual search is one approach for improving the recall of the search engine • Black box optimization algorithms can be used to auto-tune the search engine configuration to improve relevancy • Search and recommendation engines that use supervised machine learning models are prone to feedback loops, which can lead to poor quality predictions
  • 59. END OF TALK – Questions?

Editor's Notes

  1. Show MJLT page (prepare it before talk)
  2. Show MJLT page (prepare it before talk)
  3. Idiom examples – bob's your uncle, a bird in the hand, too many cooks spoil the broth, turn the other cheek, chip off the old block, etc
  4. Idiom examples – bob's your uncle, a bird in the hand, too many cooks spoil the broth, turn the other cheek, chip off the old block, etc
  5. Picture taken from http://www.mlguru.cz/word2vec-jednoducha-aritmetika-se-slovy/, last accessed 9/28/2015
  6. Chart taken from http://google-opensource.blogspot.com/2013/08/learning-meaning-behind-words.html – last accessed 9/28/2015