Haystack 2019 - Search with Vectors - Simon Hughes

Searching with Vectors
Simon Hughes
Chief Data Scientist, Dice.com
Twitter: @hughes_meister
Who Am I?
• Chief Data Scientist at DHI (owns Dice.com)
• Key Projects:
• Search and Match
• Dice Recommender Systems
• Dice Job Search
• Dice Talent Search 3.0 and 4.0
• Dice Skill Center
• Dice Career Advisory Pages
• Dice Salary Predictor
• Dice Career Paths
• PhD Candidate DePaul University
• Subject Area – Machine Learning and NLP
• Thesis – Extracting Causal Relations from Scientific Essays
• Contact Info:
• Email: simon.hughes@dhigroupinc.com
• Twitter: https://twitter.com/hughes_meister
Motivation
• Dice.com - leading US technology professional job board
• Jobs marketplace
• We connect technology talent with employers
• High quality searching and matching are critical to our value proposition, for both our customers and our clients
• Need – high quality content-based recommender engine
• Automatically determine how well a job seeker matches a particular position, and vice versa
• Requirements:
• A semantic matching engine – goes beyond keyword search, extracting semantic information from job postings and resumes
• Deployed at scale using existing search infrastructure (Solr and ElasticSearch)
• Github Repository for Talk:
• https://github.com/DiceTechJobs/VectorsInSearch
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
Understanding Textual Data
Key Challenges:
• Synonymy – Multiple Words with the Same Meaning
• Related – typos, misspellings, acronyms, metonyms
• E.g. QA, Quality Assurance, Tester
• Polysemy – Ambiguity, a word has multiple meanings
• E.g. Bank, Book, Ape
• Hypernyms/Hyponyms – ‘type of’ relationships
• E.g. a dog (hyponym) is a type of animal (hypernym)
• Meronyms/Holonyms – ‘part of’ relationships
• E.g. finger (meronym) is a ‘part of’ a hand (holonym)
• What Words / Phrases are More Important?
• Named Entity Recognition (NER), Controlled Vocabularies
• Collocation (phrases) detection – e.g. “data scientist” vs “scientist who works with data”
• Stop words
• Term weighting schemes - e.g. tf.idf
How to Solve these Problems?
• Map documents and queries to a semantic space
• “From Strings to Things”?
• Google KG marketing
• Map words into concepts / semantics
• From strings to concepts
• How to represent?
(Diagram: terms such as “Java” and “Javascript” mapped to concepts such as Technologies, Big Data Tools, and Frameworks)
Representations
Java
• Local representation
• Non distributed
• Sparse
• E.g. one-hot-vector
• One vector component per unique word
• Similar items have different representations
Representations
• Distributed Representation
• Dense vector
• Components of the vector represent learned concepts / latent variables
• Similar items have similar representations
• Most existing approaches produce dense vectors
Java
Java
• Local representation
• Non distributed
• Sparse
• E.g. one-hot-vector
• One vector component per unique word
• Similar items have different representations
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
The Importance of Context
How do we learn the meaning (semantics) of words?
• Distributional Hypothesis
• Words occurring in similar contexts have similar meanings
• Harris 1954
• “a word is characterized by the company it keeps”
• Firth 1957
• Ignores word order, grammar and syntax
• Latent Relation Hypothesis
• Pairs of words occurring in similar patterns have similar semantic relations
• Turney et al, 2003
• Patterns – X cuts Y, X works with Y, etc
• Word order and grammatical relations matter
• Further reading - Distributional approaches to word meanings
Learning Meaning from Context
Bag of Words Approaches – ignore word order
• Latent Models
• Context - Documents
• LSA
• LDA
• Semantic Vector Space Model
• Word Embeddings
• Context – word window
• Word2vec
• GloVe
• Simple linear language models
• History - http://blog.aylien.com/a-review-of-the-recent-history-of-natural-language-processing/
• For document embeddings
• Average or idf weighted average of word vectors
• Sentence / Document Embeddings
• Context – document + word window
• E.g. Doc2vec
• Context – surrounding sentences
• E.g. skip-thought vectors
Word2Vec
• By Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0) ], from Wikimedia Commons
Limitations of BOW Approaches:
• Shallow representation
• Word embeddings – limited to the word level
• Latent models – document level but doesn’t encode relational information
• Synonymy - learn relatedness, not true synonyms
• E.g. Antonyms have similar vectors
• Polysemy – cannot encode different meanings of same word
• Global model not a local model
Beyond BOW - Deep Language Models
• Deep Language Model Embeddings
• Derived from the internal state of a deep LM
• Learns a deep representation of sequences of words in context
• Can adjust word vectors based on their current context
• “NLP’s ImageNet moment”
• Achieved state of the art results on many NLP tasks
• Consistently out-perform word embedding models
• Example models - ELMo, BERT, ULMFiT, OpenAI Transformer
• Used for encoding sentences, not whole documents
• Hard to scale
Deep Language Models
p(w1, w2, w3, w4, …, wn) = p(w1) · p(w2|w1) · p(w3|w1, w2) · … · p(wn|w1, w2, …, wn-1)
(Diagram: a chain of LSTM cells – Begin → w1 → w2 → w3 – each step predicting the next word given all preceding words.)
Embedding Models for Search
• Word Embedding Approaches
• Cluster Word Embeddings
• “Representing Documents and Queries as Sets of Word Embedded Vectors for Information Retrieval”
• Clustered word2vec vectors using k-means
• Documents represented as clusters of word vectors
• Query – map query word vectors to similarities with the cluster centroids
• Outperformed Jelinek-Mercer LM similarity using VSM
• Average Word Embeddings
• From Chapter 5 of Deep Learning for Search
• Author - Tommaso Teofili
• Query and document represented as average of word2vec vectors
• Computing a weighted average using idf worked best
• Outperformed BM25 using cosine similarity
• BM25 + word2vec – highest NDCG score
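A minimal sketch of the idf-weighted averaging described above, assuming word_vectors is a dict-like mapping of token to vector (e.g. a gensim KeyedVectors) and idf is a dict of idf weights; both names are hypothetical:

```python
import numpy as np

def embed_text(tokens, word_vectors, idf, dim=100):
    """Represent a query or document as the idf-weighted average of its word vectors."""
    vec = np.zeros(dim)
    total_weight = 0.0
    for token in tokens:
        if token in word_vectors:                 # skip out-of-vocabulary tokens
            weight = idf.get(token, 1.0)          # fall back to 1.0 if no idf is known
            vec += weight * word_vectors[token]
            total_weight += weight
    return vec / total_weight if total_weight > 0 else vec

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```

Ranking is then cosine similarity between the query vector and each document vector, optionally combined with the BM25 score as in the results quoted above.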
Embedding Models for Search
• Dual Embedding Space Model (DESM)
• Research from Microsoft
• Extends word2vec
• Learns a dual embedding for queries and documents
• Paper - https://arxiv.org/pdf/1602.01137.pdf
• Evaluation
• Compared BM25, LSA and DESM on Bing Query Log Data
• Metrics - NDCG@1, NDCG@3, NDCG@5
• Results
• LSA and DESM both out-performed BM25
• DESM out-performed LSA
• DESM + BM25 out-performed all other approaches
Agenda
• Why a Vector Representation?
• Learning Vector Representations
• Vector Based Search in an Inverted Index
Vectors in Search
• Dense Embedding Vector:
• Dense
• D dimensional
• D = 50-1000
• Inverted index:
• Sparse
• Pivoted by term
• V = Vocabulary
• |V| =100k+
• Fast because sparse
[+0.12, -0.34, -0.12, +0.27, +0.63]
Term Posting List
Java 1,5,100,102
.NET 2,4,600,605,1000
C# 2,88,105,800
SQL 130,433,648,899,1200
Html 1,2,10,30,55,202,252,30,598,
Searching with Word Embeddings
Approaches for using word embeddings:
• Top N terms
• Expand query using top n terms from model
• Boost expansions by cosine similarity
• Can use as a boost query, a re-rank query or a straight term expansion
• Q = “java developer”^10
OR ”java j2ee developer”^0.91 OR “java architect”^0.89
OR “lead java developer”^0.87 OR “j2ee developer”^0.86
OR “java engineer”^0.86
• Term Clustering
• Cluster embeddings using a clustering algorithm
• E.g. k-means
• Compute different sized clusters, k=100,1000,10000
• Map clusters to tokens and index
• Different fields for each k
• Larger k fields – bigger boost or rely on idf scoring
• Query expands to top clusters, boosted by similarity
• Q = “java developer”^10
OR cluster_k1000:5894^5
OR cluster_k100:23^2.5
OR cluster_k10:8^1.25
• See https://github.com/DiceTechJobs/ConceptualSearch
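A rough sketch of the “Top N terms” expansion above using gensim’s word2vec API. The model file name and the convention that multi-word phrases were trained as underscore-joined tokens are assumptions, not taken from the repo:

```python
from gensim.models import Word2Vec

def expansion_query(query, model, top_n=5, main_boost=10):
    """Build a boosted Solr query: the raw query plus its top-n nearest terms from word2vec."""
    parts = ['"{}"^{}'.format(query, main_boost)]
    # Assumes the phrase was trained as a single underscore-joined token, e.g. "java_developer".
    for term, sim in model.wv.most_similar(query.replace(" ", "_"), topn=top_n):
        parts.append('"{}"^{:.2f}'.format(term.replace("_", " "), sim))
    return " OR ".join(parts)

# model = Word2Vec.load("jobs_word2vec.model")   # hypothetical model file
# print(expansion_query("java developer", model))
# -> "java developer"^10 OR "java j2ee developer"^0.91 OR ...
```

The resulting string can be used as a boost query, a re-rank query, or a straight term expansion, as listed above.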
Searching Vectors – k-NN Search
• K-NN search
• Find the k closest neighbors to query vector according to similarity metric
• Usually cosine similarity or Euclidean distance
• Definitions
• D = number of components in the vector
• N = number of documents
• Brute Force Search:
• O(ND) – linear (see the sketch after this slide)
• What if N and/or D are very large?
• Vs. Inverted Index
• Sublinear - makes use of the sparsity of terms
• BTree or Distributed Hash Table lookup for terms, iterate posting list, re-rank matches - O(n log n)
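For reference, a minimal numpy sketch of the brute-force cosine k-NN that the approximate methods below are compared against:

```python
import numpy as np

def knn_brute_force(query, doc_vectors, k=10):
    """Exact k-NN by cosine similarity: O(N*D) per query (N docs, D dimensions)."""
    q = query / np.linalg.norm(query)
    docs = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    sims = docs @ q                      # one dot product per document
    top = np.argsort(-sims)[:k]          # indices of the k most similar documents
    return list(zip(top.tolist(), sims[top].tolist()))
```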
Optimal Vector Representation In An Inverted Index?
What properties would such a representation have?
• For Performance
• Sparse representation necessary to leverage inverted index
• For Relevancy
• Distributed representation
• Each document should be a collection of tokens
• Tokens represent some semantic feature of the space
• Similarity is preserved
• Similar vectors must also be similar under this new representation
• Zipfian distribution of tokens
• “We need a Zipfian Distribution” – John Berryman (Co-author of ‘Relevant Search’)
• Tokenizing Embedding Spaces
Zipf’s Law
• The frequency of terms in a corpus follows a power law distribution
• A small number of tokens are very common - filter out irrelevant docs
• A large number of tokens are very rare - discriminate between similar matches
• Distribution of last names - By Thekohser [CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0)], from Wikimedia Commons
Approximate Nearest Neighbor Search
• Faster than full k-NN, with some loss in accuracy
• Approaches can be either:
• Data Dependent
• Learns and adjusts from the data
• Makes indexing new documents hard
• Data Independent
• Some Approaches:
• KD Tree
• LSH
• Heuristic Methods
• K-Means Tree
• Randomized KD Forest
• Paper: https://arxiv.org/abs/1603.09596
• HNSW (Hierarchical Navigable Small World Graphs) – top on http://ann-benchmarks.com/
• Paper: https://arxiv.org/pdf/1603.09320.pdf
• Vector Thresholding
• Choice of similarity metric is important in choosing an algorithm
KD Trees
• Construction
• Constructs a binary search tree by partitioning the search space along the vector dimensions
• Partitions are chosen orthogonal to each dimension
• Usually at the median
• Querying
• Described here - https://en.wikipedia.org/wiki/K-d_tree#Complexity
• Limitations
• How to implement efficiently in an inverted index?
• Lucene 6.0 dimensional points
• See also - https://www.elastic.co/blog/lucene-points-6.0
• Not exposed in Solr and Elastic Search AFAIK
• Tree needs rebalancing on each insertion
• Curse of dimensionality
• N >> 2^D - for N points and D dimensions
• Complexity essentially linear for real world vectors (D >= 50)
• Approximate KNN Search
• Possible with KD tree – limit the number of searched nodes
• Typically out-performed by other ANNs approaches
Locality Sensitive Hashing
• LSH hashes items to discrete buckets
• More buckets – slower but more accurate
• Locality Preserving
• Maximizes the probability that similar items occupy the same buckets
• Random Projection LSH (sim Hash)
• LSH variant for cosine similarity
• Generate a random d-dimensional unit vector r, and for each vector v compute
• hash(v) = sign(v · r)
• Produces a binary encoding, one bit for each hash function (random vector)
• Probability 2 vectors’ hashes match - proportional to cosine similarity
• Output of hash function can be indexed and searched using Hamming Distance
• Intuition - Van Durme and Lall - http://www.cs.jhu.edu/~vandurme/papers/VanDurmeLallACL10-slides.pdf
• Data independent, although data dependent variations exist
• However, for real data, it is typically out-performed by heuristic methods like k-means trees and randomized KD-trees
• https://lear.inrialpes.fr/pubs/2011/JDS11/jegou_searching_with_quantization.pdf
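A minimal numpy sketch of the random projection hashing described above (an illustration, not the repo’s implementation):

```python
import numpy as np

def make_hasher(dim, n_bits, seed=0):
    """Random projection (sim hash): one random hyperplane per output bit."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, dim))                # random vectors r (normalization doesn't affect the sign)
    return lambda v: (planes @ v >= 0).astype(np.uint8)    # hash(v) = sign(v . r), as bits

def hamming_similarity(bits_a, bits_b):
    """Fraction of matching bits - approximates cosine similarity between the original vectors."""
    return float(np.mean(bits_a == bits_b))

# hasher = make_hasher(dim=100, n_bits=64)
# bits = hasher(doc_vector)   # 64-bit binary fingerprint to index
```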
Encoding LSH Hash into the Index
• Hash into Bits
• Store the hash fingerprint as a single token, OR store each bit as a token using its position and value
• Use the mm parameter to speed up search
• Or store shingles of the binary tokens
• This is not sparse!
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1]
[“10110110100101”] OR ["00_1","01_0","02_1","03_1","04_0","05_1","06_1","07_0","08_1","09_0","10_0","11_1","12_0","13_1"]
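A small sketch of the two encodings shown above (whole-fingerprint token vs one token per bit position); the token formats simply mirror this slide’s example:

```python
def fingerprint_token(bits):
    """Whole hash as a single token, e.g. "10110110100101"."""
    return "".join(str(b) for b in bits)

def positional_tokens(bits):
    """One token per bit, encoding position and value, e.g. "00_1", "01_0", ..."""
    return ["{:02d}_{}".format(i, b) for i, b in enumerate(bits)]

# With positional tokens, search with an OR query over the query hash's tokens and use the
# mm (minimum-should-match) parameter, or a custom Hamming similarity, to rank by matching bits.
```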
Hamming Similarity Class
• Custom similarity class
• Computes the number of matching tokens
K-Means Tree
• Hierarchical Clustering Algorithm
• Recursively partitions vector space using k-Means clustering
• Fast - k-means runs in linear time using Lloyd’s heuristic
• Most other clustering algorithms run in quadratic time or worse
• Tree Construction
• For some branching factor b create b clusters
• Create b nodes, store centroid for each node
• For each new cluster, cluster its members into b smaller clusters
• These form child nodes of their parent clusters, forming a tree structure
• Continue until < b members per cluster
• Paper
• "Scalable Nearest Neighbor Algorithms for High Dimensional Data" - Marius Muja,
2014 – implemented in the FLANN library
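A simplified sketch of the tree construction described above, using scikit-learn’s KMeans; it assumes vectors is a numpy array of document embeddings and ids a parallel numpy array of document ids, and it is not the FLANN implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_kmeans_tree(vectors, ids, branching=10, node_prefix="0"):
    """Recursively cluster vectors into a k-means tree; returns {node_token: centroid} and leaf assignments."""
    nodes, leaves = {}, {}
    if len(vectors) < branching:                       # stop: this node is a leaf
        leaves[node_prefix] = list(ids)
        return nodes, leaves
    km = KMeans(n_clusters=branching, n_init=5).fit(vectors)
    for c in range(branching):
        token = "{}_{}".format(node_prefix, c)         # node token, e.g. "0_3_7"
        nodes[token] = km.cluster_centers_[c]
        mask = km.labels_ == c
        child_nodes, child_leaves = build_kmeans_tree(vectors[mask], ids[mask], branching, token)
        nodes.update(child_nodes)
        leaves.update(child_leaves)
    return nodes, leaves
```

Each document is then indexed with the token of its leaf node plus all ancestor tokens (here, the prefixes of the leaf token), matching the indexing steps on the next slide.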
K-Means Tree
(Diagram: a depth-3 k-means tree – root node, first layer, second layer of leaf nodes – with documents assigned to the leaf clusters)
Lucene Implementation Details
• Pre-train a k-means tree on a representative subset of the index
• Indexing:
• Convert all nodes from tree into unique tokens
• For each vector, find the closest matching leaf node
• Index vector with tokens for that leaf node, and all parent nodes
• Querying
• Find top n matching nodes from tree
• Convert nodes into a query, boosted by similarity to query vector
• 'q': 'clusters:(“121”^0.9 “909”^0.88 ”523”^0.91)’
• Create a re-rank query to brute force re-rank the top matching documents
• 'rq’: '{!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=99}’
• 'rqq': '{!payloadEdismax v=$vq}’
• ‘vq’: vector:(”0”^-0.0136 ”1”^0.05387 ”2”^0.070476 ”3”^0.14529 …)
• Uses a special payload query parser (payload_score is insufficient)
• See https://github.com/DiceTechJobs/VectorsInSearch
• *Better approach – use doc values field or Lucene dimensional points
• Trade speed for accuracy depending on depth of tree search, and how many vectors are re-ranked
• Tree nodes follow a Zipfian distribution
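A hedged sketch of the query side, reusing the nodes dict from the tree sketch above; the Solr parameter layout mirrors the q / rq / rqq / vq example on this slide (payloadEdismax is the custom parser from the repo):

```python
import numpy as np

def cluster_query(query_vec, nodes, top_n=3, boost=1.0):
    """Find the top-n closest tree nodes to the query vector and turn them into a boosted Solr clause."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scored = sorted(((tok, cos(query_vec, c)) for tok, c in nodes.items()),
                    key=lambda x: -x[1])[:top_n]
    clause = " ".join('"{}"^{:.2f}'.format(tok, sim * boost) for tok, sim in scored)
    return "clusters:({})".format(clause)

# params = {
#     "q": cluster_query(qv, nodes),                       # cheap cluster-token match
#     "rq": "{!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=99}",
#     "rqq": "{!payloadEdismax v=$vq}",                    # brute-force re-rank via payloads
#     "vq": "vector:(" + " ".join('"{}"^{}'.format(i, w) for i, w in enumerate(qv)) + ")",
# }
```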
Lucene Implementation Details
• Cluster Field – stores cluster tokens
• Turn off all norms, tf and idf weighting, custom hamming similarity class
• Vector Field – stores vectors for re-ranking
• Stores components plus payloads, custom similarity class using payloads
• Similarity classes: https://github.com/DiceTechJobs/SolrPlugins
Lucene Implementation Details
(Screenshots of the Solr schema: the vector field analysis chain and the cluster field definitions)
Other Heuristic Methods
• Randomized KD Forest
• Constructs a number of KD trees choosing axis to split on randomly
• Searches all trees in parallel to a fixed number of leaf nodes
• KD Trees are very deep
• How to implement efficiently in an inverted index?
• Hierarchical Navigable Small World Graphs
• Hierarchical graph based model - https://arxiv.org/pdf/1603.09320.pdf
• Consistently out-performs other ANN methods on the ANN benchmarks page - http://ann-benchmarks.com/
Distribution of Vector Components
• Distribution of components from our vectors is Gaussian
• Mean is 0
• This means that most vector components are very small
• These components will have minimal impact on cosine score
(Histogram of components taken from 350k vectors, mean = 0.0)
Vector Thresholding with Tokenization
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[ 0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
[“04i+0.6”, “07i-0.5”]
• Round weight to lower precision
• Encode position and weight as a single token
• Paper: “Semantic Vector Encoding and Similarity Search Using Fulltext Search Engines”
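A small sketch of the thresholding-plus-tokenization step above; the "04i+0.6" token format just mirrors this slide’s example:

```python
import numpy as np

def threshold_tokens(vec, keep_fraction=0.4, precision=1):
    """Keep only the largest-magnitude components and encode each as a position+rounded-weight token."""
    vec = np.asarray(vec)
    n_keep = max(1, int(len(vec) * keep_fraction))
    top = np.argsort(-np.abs(vec))[:n_keep]            # indices of the largest components
    return ["{:02d}i{:+.{p}f}".format(i, vec[i], p=precision) for i in sorted(top.tolist())]

# threshold_tokens([0.08, -0.16, -0.12, 0.27, 0.63, -0.01, 0.16, -0.48], keep_fraction=0.25)
# -> ['04i+0.6', '07i-0.5']
```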
Vector Thresholding with Payloads
[+0.08, -0.16, -0.12, +0.27, +0.63, -0.01, +0.16, -0.48]
[ 0, 0, 0, 0, +0.63, 0, 0, -0.48]
• Drop all but the largest components
• I modified the previous idea, using payload score queries
• Indexing: Store remaining (non zero) tokens in index with payloads
• Querying: Uses custom payload query parser + similarity class
• See Github repo, and solr config in Kmeans tree section
Q=vector:(”3”^-0.0136 ”14”^0.05387 ”56”^-0.070476 ”71”^0.14529 …)
&defType=payloadEdismax
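A minimal sketch of the indexing side of the payload variant above, assuming the field uses a '|' delimited-payload filter so the component weight is stored as the token’s payload (the exact Solr config and the payloadEdismax parser live in the repo):

```python
import numpy as np

def payload_tokens(vec, keep_fraction=0.4):
    """Keep the largest-magnitude components and emit "position|weight" tokens for a payload field."""
    vec = np.asarray(vec)
    n_keep = max(1, int(len(vec) * keep_fraction))
    top = sorted(np.argsort(-np.abs(vec))[:n_keep].tolist())
    # e.g. "4|0.63 7|-0.48" – a delimited-payloads filter stores the weight as each token's payload
    return " ".join("{}|{}".format(i, round(float(vec[i]), 5)) for i in top)
```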
Performance Comparison - Initial Results
• Hardware - MacBook Pro, 2.6GHz i7 CPU, 16GB RAM, SSD
• Search Engine:
• Solr 7.5, single shard
• Index: 700k documents
• 1000 sample vector queries, requests were single threaded
• Metric – precision @10 compared to brute force
• Updated results – check https://github.com/DiceTechJobs/VectorsInSearch
Performance Comparison - Initial Results
• Each algorithm was run over a range of different parameter values, to show the recall–speed trade-off
Performance Comparison - Initial Results
Algorithm                                                       Precision@10   Queries Per Sec (Mean Qry Time)
LSH (Hamming Similarity)                                        0.69           1.3 qps (757 ms)
Kmeans Tree (trained on index)                                  0.88           9.2 qps (170 ms)
Kmeans Tree (trained on sample)                                 0.85           9.5 qps (105 ms)
Vector Thresholding with Tokenization (top 40% of components)   0.85           3.5 qps (312 ms)
Vector Thresholding with Payloads (top 40% of components)       0.94           1.8 qps (547 ms)
The Ultimate Solution - Sparse Coding?
• Also called ‘Dictionary Learning’
• Learns a sparse ‘overcomplete’ representation of a vector
• Example Algorithms:
• Sparse Auto-Encoder
• K-SVD
• Encoding needs to preserve the Metric Space
• Similar items need to remain similar after encoding
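A hedged sketch of what dictionary learning could look like with scikit-learn; whether the resulting codes preserve the metric space well enough is exactly the open question on this slide:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# X: (n_docs, dim) dense embedding matrix – stand-in random data for the sketch
X = np.random.randn(1000, 100)

# Learn an overcomplete dictionary (more atoms than dimensions) with sparse codes.
dico = MiniBatchDictionaryLearning(n_components=512, alpha=1.0)
codes = dico.fit_transform(X)                      # (n_docs, 512), mostly zeros

# Each non-zero code component could then be indexed as a token (its index), with the value as a
# boost or payload – a sparse, inverted-index-friendly representation, provided similar vectors
# still receive similar codes.
```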
Other Relevant Approaches
• Word2bits - learns binary quantized word vectors
• https://github.com/agnusmaximus/Word2Bits
Block Max WAND
• https://www.elastic.co/blog/faster-retrieval-of-top-hits-in-elasticsearch-with-block-max-wand
• ‘Weak AND’ algorithm to be integrated into Lucene 8.0 and ES 7.0
• Speeds up large OR queries by pruning clauses that won’t occur in the top N matches
• Speed up can be 40% to 13x
• Can help address performance of these larger OR queries
Thank you!
Github Repository:
https://github.com/DiceTechJobs/VectorsInSearch
Simon Hughes
Chief Data Scientist, Dice.com
@hughes_meister

Editor's Notes

  1. Metrics – recall is often used for measuring synonymy and related problems, while precision and traditional IR metrics are better at measuring the efficacy of disambiguating a user’s intent
  2. Context – bag of words. Global – learn semantic representations of terms; address synonymy (word level); learn collocations (phrases). Local – can be used to disambiguate ambiguous terms; address polysemy
  3. By Aelu013 [CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)], from Wikimedia Commons. For LSA illustration, and an excellent explanation, see here - http://iv.slis.indiana.edu/sw/lsa.html
  4. Word vectors don’t learn true synonyms – they don’t truly solve the synonymy problem, and don’t handle polysemy, as the same vector is used for a word regardless of its context. Deep LMs capture the meaning of a sequence of words in context – not just individual words in isolation. Context – bag of words. Global – learn semantic representations of terms; address synonymy (word level); learn collocations (phrases). Local – can be used to disambiguate ambiguous terms; address polysemy
  5. Word vectors don’t learn true synonyms – they don’t truly solve the synonymy problem, and don’t handle polysemy, as the same vector is used for a word regardless of its context. Deep LMs capture the meaning of a sequence of words in context – not just individual words in isolation. Context – bag of words. Global – learn semantic representations of terms; address synonymy (word level); learn collocations (phrases). Local – can be used to disambiguate ambiguous terms; address polysemy
  6. How do we represent dense vectors in a form that works inside an inverted index?
  7. Note – important to do collocation (phrase detection) before building an embedding model. Embeddings work better when phrases are passed as single tokens.
  8. Excellent explanation of the simHash – Van Durme and Lall presentation, slide 15