tech talk @ ferret
Andrii Gakhov
PROBABILISTIC DATA STRUCTURES
ALL YOU WANTED TO KNOW BUT WERE AFRAID TO ASK
PART 4: SIMILARITY
SIMILARITY
Agenda:
▸ Locality-Sensitive Hashing (LSH)
▸ MinHash
▸ SimHash
• To find clusters of similar documents from the
document set
• To find duplicates of the document in the
document set
THE PROBLEM
LOCALITY-SENSITIVE HASHING
LOCALITY-SENSITIVE HASHING
• Locality-Sensitive Hashing (LSH) was introduced by Indyk and Motwani in 1998 as a family of functions with the following property:
• similar input objects (from the domain of such functions) have a
higher probability of colliding in the range space than non-similar
ones
• LSH differs from conventional and cryptographic hash functions because
it aims to maximize the probability of a “collision” for similar
items.
• Intuitively, LSH is based on the simple idea that, if two points are close
together, then after a “projection” operation these two points will remain
close together.
LSH: PROPERTIES
• One general approach to LSH is to “hash” items several
times, in such a way that similar items are more likely to
be hashed to the same bucket than dissimilar items are.
We then consider any pair that hashed to the same
bucket for any of the hashings to be a candidate pair.
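As a rough illustration, here is a minimal Python sketch of this bucketing scheme, assuming only that lsh_functions is some family of LSH functions for the chosen similarity measure (the names here are hypothetical, not from the slides); any pair of items that collides under at least one of them becomes a candidate pair.

from collections import defaultdict
from itertools import combinations

def candidate_pairs(items, lsh_functions):
    """Hash every item with each LSH function; pairs that share a
    bucket under at least one hashing become candidate pairs."""
    candidates = set()
    for h in lsh_functions:
        buckets = defaultdict(list)
        for idx, item in enumerate(items):
            buckets[h(item)].append(idx)
        for bucket in buckets.values():
            candidates.update(combinations(bucket, 2))
    return candidates  # pairs of item indices to verify exactly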
LSH: PROPERTIES
• Examples of LSH functions are known for resemblance, cosine and Hamming distances.
• LSH functions do not exist for certain similarity measures commonly used in information retrieval, such as the Dice coefficient and the Overlap coefficient (Moses S. Charikar, 2002)
LSH: PROBLEM
• Given a query point, we wish to find the points in a large
database that are closest to the query. We wish to guarantee
with a high probability equal to 1 − δ that we return the
nearest neighbor for any query point.
• Conceptually, this problem is easily solved by iterating through each point in the
database and calculating the distance to the query object. However, our database
may contain billions of objects — each object described by a vector that contains
hundreds of dimensions.
• LSH proves useful to identify nearest neighbors quickly even
when the database is very large.
LSH: FINDING DUPLICATE PAGES ON THE WEB
How to select features for the webpage?
• document vector from page content
• Compute “document-vector” by case-folding, stop-word removal, stemming, computing
term-frequencies and finally, weighting each term by its inverse document frequency (IDF)
• shingles from page content
• Consider a document as a sequence of words. A shingle is a hash value of a k-gram, which is a sub-sequence of k consecutive words.
• Hashes for shingles can be computed using Rabin’s fingerprints
• connectivity information
• The idea that similar web pages have several incoming links in common.
• anchor text and anchor window
• The idea that similar documents should have similar anchor text. The words in the anchor text or anchor window are folded into the document-vector itself.
• phrases
• The idea is to identify phrases and compute a document-vector that includes phrases as terms.
LSH: FINDING DUPLICATE PAGES ON THE WEB
• The Web contains many duplicate pages, partly because content is duplicated
across sites and partly because there is more than one URL that points to the
same page.
• If we use shingles as features, each shingle represents a portion of a Web page
and is computed by forming a histogram of the words found within that portion
of the page.
• We can test to see if a portion of the page is duplicated elsewhere on the Web
by looking for other shingles with the same histogram. If the shingles of the
new page match shingles from the database, then it is likely that the new page
bears a strong resemblance to an existing page.
• Because Web pages are surrounded by navigational and other information that changes from site to site, it is useful to use a nearest-neighbor solution that can employ LSH functions.
LSH: RETRIEVING IMAGES
• LSH can be used in image retrieval as an object recognition
tool. We compute a detailed metric for many different
orientations and configurations of an object we wish to
recognize.
• Then, given a new image we simply check our database to see
if a precomputed object’s metrics are close to our query. This
database contains millions of poses and LSH allows us to
quickly check if the query object is known.
LSH: RETRIEVING MUSIC
• In music retrieval conventional hashes and robust features are typically used to find
musical matches.
• The features can be fingerprints, i.e., representations of the audio signal that are robust to
common types of abuse that are performed to audio before it reaches our ears.
• Fingerprints can be computed, for instance, by noting the peaks in the spectrum (because they are robust to noise) and encoding their position in time and space. The user then just has to query the database for the same fingerprint.
• However, to find similar songs we cannot use fingerprints, because these are different when a song is remixed for a new audience or when a different artist performs the same song. Instead, we can use several seconds of the song (a snippet) as a shingle.
• To determine if two songs are similar, we need to query the database and see if a large enough number of
the query shingles are close to one song in the database.
• Although closeness depends on the feature vector, we know that long shingles provide
specificity. This is particularly important because we can eliminate duplicates to improve
search results and to link recommendation data between similar songs.
LSH: PYTHON
• https://github.com/ekzhu/datasketch

datasketch gives you probabilistic data structures that
can process very large amounts of data
• https://github.com/simonemainardi/LSHash

LSHash is a fast Python implementation of locality
sensitive hashing with persistence support
• https://github.com/go2starr/lshhdc

LSHHDC: Locality-Sensitive Hashing based High
Dimensional Clustering
LSH: READ MORE
• http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/IndykM-curse.pdf
• http://infolab.stanford.edu/~ullman/mmds/book.pdf
• http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf
• http://www.matlabi.ir/wp-content/uploads/bank_papers/g_paper/g15_Matlabi.ir_Locality-Sensitive%20Hashing%20for%20Finding%20Nearest%20Neighbors.pdf
• https://en.wikipedia.org/wiki/Locality-sensitive_hashing
MINHASH
MINHASH: RESEMBLANCE SIMILARITY
• When talking about the relation between two documents A and B, we are mostly interested in the concept of being "roughly the same", which can be mathematically formalized as their resemblance (or Jaccard similarity).
• Every document can be seen as a set of features, so the problem reduces to a set intersection problem (finding the similarity between sets).
• The resemblance r(A,B) of two documents is a number between 0 and 1, such that it is close to 1 for documents that are "roughly the same":

J(A,B) = r(A,B) = |A ∩ B| / |A ∪ B|

• For the duplicate detection problem, we can reformulate it as a task to find pairs of documents whose resemblance r(A,B) exceeds a certain threshold R.
• For the clustering problem, we can use d(A,B) = 1 − r(A,B) as a metric to design algorithms intended to cluster a collection of documents into sets of closely resembling documents.
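For two documents represented as sets of features, the resemblance is straightforward to compute exactly; a small sketch:

def jaccard(a, b):
    """Resemblance (Jaccard similarity) of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

# e.g. jaccard("python is a programming language".split(),
#              "a programming language".split())  # -> 3/5 = 0.6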
MINHASH
• MinHash (the minwise hashing algorithm) was proposed by Andrei Broder in 1997.
• The minwise hashing family applies a random permutation π: Ω → Ω to the given set S and stores only the minimum value after the permutation mapping.
• Formally, for k random permutations π1, …, πk, MinHash is defined as:

hπ_min(S) = min(π(S)), yielding the signature (min π1(S), …, min πk(S))
MINHASH: MATRIX REPRESENTATION
• A collection of sets can be visualized as a characteristic matrix CM.
• The columns of the matrix correspond to the sets, and the rows
correspond to elements of the universal set from which elements of the
sets are drawn.
• CM(r, c) =1 if the element for row r is a member of the set for
column c, otherwise CM(r,c) =0
• NOTE: The characteristic matrix is unlikely to be the way the data is
stored, but it is useful as a way to visualize the data.
• One reason not to store the data this way is that, in practice, these matrices are almost always sparse (they have many more 0's than 1's).
MINHASH: MATRIX REPRESENTATION
• Consider the following set of documents (represented as bags of words):
• S1 = {python, is, a, programming, language}
• S2 = {java, is, a, programming, language}
• S3 = {a, programming, language}
• S4 = {python, is, a, snake}
• We can assume that our universal set is limited to the words from these 4 documents. The characteristic matrix of these sets is:

word          row  S1  S2  S3  S4
a              0    1   1   1   1
is             1    1   1   0   1
java           2    0   1   0   0
language       3    1   1   1   0
programming    4    1   1   1   0
python         5    1   0   0   1
snake          6    0   0   0   1
MINHASH: INTUITION
• The minhash value of any column of the characteristic
matrix is the number of the first row, in the
permuted order, in which the column has a 1.
• To minhash a set represented by such columns, we need
to pick a permutation of the rows.
MINHASH: ONE STEP EXAMPLE
• Consider the characteristic matrix together with a permutation of the rows:

row  S1  S2  S3  S4  permuted row
 1    1   0   1   0       2
 2    1   0   0   0       5
 3    0   0   1   1       6
 4    1   0   0   1       1
 5    0   0   1   1       4
 6    0   1   1   0       3

• Record, for each column, the permuted number of the row with the first "1":
m(S1) = 2, m(S2) = 3, m(S3) = 2, m(S4) = 6
• Estimate the Jaccard similarity: J(Si, Sj) = 1 if m(Si) = m(Sj), or 0 otherwise:
J(S1, S3) = 1, J(S1, S4) = 0
MINHASH: INTUITION
• It is not feasible to permute a large characteristic
matrix explicitly.
• Even picking a random permutation of millions of rows is time-
consuming, and the necessary sorting of the rows would take even
more time.
• Fortunately, it is possible to simulate the effect of a
random permutation by a random hash function that
maps row numbers to as many buckets as there are rows.
• So, instead of picking n random permutations of rows, we pick n
randomly chosen hash functions h1, h2, . . . , hn on the rows
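A minimal sketch of such simulated permutations: each hash function is a random affine map (a·r + b) mod p, where p is a prime no smaller than the number of rows (the prime below and the function name are assumptions for illustration).

import random

def make_hash_fns(n, seed=42):
    """n random affine functions h(r) = (a*r + b) % p that play the
    role of n random row permutations."""
    rnd = random.Random(seed)
    p = 4294967311  # a prime larger than any expected row number (assumption)
    fns = []
    for _ in range(n):
        a, b = rnd.randrange(1, p), rnd.randrange(p)
        fns.append(lambda r, a=a, b=b: (a * r + b) % p)
    return fns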
MINHASH: PROPERTIES
• Connection between minhash and resemblance (Jaccard)
similarity of the sets that are minhashed:
• The probability that the minhash function for a random permutation
of rows produces the same value for two sets equals the Jaccard
similarity of those sets
• Minhash(π) of a set is the number of the row (element) with the first non-zero value in the permuted order π.
• MinHash is an LSH for resemblance (Jaccard) similarity, which is defined over binary vectors.
MINHASH: ALGORITHM
• Pick k randomly chosen hash functions h1, h2, . . . , hk
• From the column representing set S, construct the minhash signature for S - the
vector [h1(S), h2(S), . . . , hk(S)].
• Build the characteristic matrix CM for the set S.
• Construct a signature matrix SIG, where SIG(i, c) corresponds to the i
th
hash
function and column c. Initially, set SIG(i, c) = ∞ for each i and c.
• On each row r:
• Compute h1(r), h2(r), . . . , hk(r)
• For each column c:
• If CM(r,c) = 1, then set SIG(i, c) = min{SIG(i,c), hi(r)} for each i = 1. . . n
• Estimate the resemblance (Jaccard) similarities of the underlying sets from the
final signature matrix
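A compact Python sketch of this algorithm; the characteristic matrix is given as a list of rows, and minhash_signature and estimate_jaccard are illustrative names, not from the slides.

import math

def minhash_signature(cm, hash_fns):
    """Build the signature matrix SIG from a characteristic matrix cm,
    where cm[r][c] is 1 if row-element r belongs to set (column) c."""
    n_cols = len(cm[0])
    sig = [[math.inf] * n_cols for _ in hash_fns]
    for r, row in enumerate(cm):
        hashes = [h(r) for h in hash_fns]  # h1(r), ..., hk(r)
        for c, bit in enumerate(row):
            if bit:  # CM(r, c) = 1
                for i, hv in enumerate(hashes):
                    sig[i][c] = min(sig[i][c], hv)
    return sig

def estimate_jaccard(sig, c1, c2):
    """Fraction of signature rows on which columns c1 and c2 agree."""
    return sum(row[c1] == row[c2] for row in sig) / len(sig)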
MINHASH: EXAMPLE
• Consider 2 hash functions:
• h1(x) = (x + 1) mod 7
• h2(x) = (3x + 1) mod 7
• Consider the same set of 4 documents and a universal set that consists of 7 words. The characteristic matrix, extended with the hash values of every row, is:

word          row  S1  S2  S3  S4  h1(row)  h2(row)
a              0    1   1   1   1     1        1
is             1    1   1   0   1     2        4
java           2    0   1   0   0     3        0
language       3    1   1   1   0     4        3
programming    4    1   1   1   0     5        6
python         5    1   0   0   1     6        2
snake          6    0   0   0   1     0        5
MINHASH: EXAMPLE
• Initially the signature matrix is all ∞:

     S1  S2  S3  S4
h1    ∞   ∞   ∞   ∞
h2    ∞   ∞   ∞   ∞

• First, consider row 0: h1(0) and h2(0) are both 1. So, for all sets with a 1 in row 0 we set the value h1(0) = h2(0) = 1 in the signature matrix:

     S1  S2  S3  S4
h1    1   1   1   1
h2    1   1   1   1

• Next, consider row 1: h1(1) = 2 and h2(1) = 4. In this row only S1, S2 and S4 have 1's, so we set the appropriate cells to the minimum of the existing value and 2 for h1 (or 4 for h2); since all existing values are already 1, nothing changes:

     S1  S2  S3  S4
h1    1   1   1   1
h2    1   1   1   1
MINHASH: EXAMPLE
• Continuing with rows 2 through 5, after row 5 the signature matrix is:

     S1  S2  S3  S4
h1    1   1   1   1
h2    1   0   1   1

• Finally, consider row 6: h1(6) = 0 and h2(6) = 5. In this row only S4 has a 1, so we set the appropriate cells to the minimum of the existing value and 0 for h1 (or 5 for h2). The final signature matrix is:

     S1  S2  S3  S4
h1    1   1   1   0
h2    1   0   1   1

• Document S1 is similar to S3 (identical signature columns), so the estimated similarity is 1 (the true value is J = 3/5 = 0.6)
• Documents S2 and S4 agree on no signature rows, so the estimated similarity is 0 (the true value is J = 2/7 ≈ 0.28)
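Using the minhash_signature sketch from above, the whole worked example can be reproduced:

h1 = lambda x: (x + 1) % 7
h2 = lambda x: (3 * x + 1) % 7
cm = [  # rows: a, is, java, language, programming, python, snake
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
sig = minhash_signature(cm, [h1, h2])
# sig == [[1, 1, 1, 0], [1, 0, 1, 1]]
# estimate_jaccard(sig, 0, 2) -> 1.0 (S1 vs S3)
# estimate_jaccard(sig, 1, 3) -> 0.0 (S2 vs S4)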
MINHASH: PYTHON
• https://github.com/ekzhu/datasketch

datasketch gives you probabilistic data structures that
can process very large amounts of data
• https://github.com/anthonygarvan/MinHash

MinHash is an effective pure-Python implementation of
MinHash
MINHASH: READ MORE
• http://infolab.stanford.edu/~ullman/mmds/book.pdf
• http://www.cs.cmu.edu/~guyb/realworld/slidesS13/minhash.pdf
• https://en.wikipedia.org/wiki/MinHash
• http://www2007.org/papers/paper570.pdf
• https://www.cs.utah.edu/~jeffp/teaching/cs5955/L4-Jaccard+Shingle.pdf
B-BIT MINHASH
B-BIT MINHASH
• A modification of minwise hashing, called b-bit minwise
hashing, was proposed by Ping Li and Arnd Christian König
in 2010.
• The method of b-bit minwise hashing provides a simple solution by storing only the lowest b bits of each hashed data value, reducing the dimensionality of the (expanded) hashed data matrix to just 2^b · k.
• b-bit minwise hashing is simple and requires only minimal
modification to the original minwise hashing algorithm.
B-BIT MINHASH
• Intuitively, using fewer bits per sample increases the estimation variance, compared with the original MinHash at the same "sample size" k. Thus, we have to increase k to maintain the same accuracy.
• Interestingly, the theoretical results demonstrate that, when the resemblance is not too small (e.g. R ≥ 0.5, a common threshold), we do not have to increase k much.
• This means that, compared to the earlier approach, b-bit minwise hashing can improve estimation accuracy and significantly reduce the storage requirement at the same time.
• For example, for b = 1 and R = 0.5, the estimation variance will increase at most by a factor of 3.
• So, in order not to lose accuracy, we have to increase the sample size k by a factor of 3.
• If we originally stored each hashed value using 64 bits, the overall storage improvement from using b = 1 is a factor of 64/3 ≈ 21.3.
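A minimal sketch of the b = 1 case: keep only the lowest bit of each minhash value and correct for accidental matches. The simple estimator below assumes the sets are small relative to the universe, so an unrelated pair of lowest bits agrees with probability about 1/2; the full estimator in Li and König's paper also corrects for set sizes.

def one_bit_signature(minhashes):
    """Store only the lowest bit of each of the k minhash values."""
    return [int(v) & 1 for v in minhashes]

def one_bit_estimate(bits1, bits2):
    """Under the small-set assumption, P(bit match) ~ R + (1 - R)/2,
    so the resemblance estimate is R ~ 2 * P - 1."""
    p = sum(x == y for x, y in zip(bits1, bits2)) / len(bits1)
    return max(0.0, 2 * p - 1)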
B-BIT MINHASH: READ MORE
• https://www.microsoft.com/en-us/research/publication/b-bit-minwise-hashing/
• https://www.microsoft.com/en-us/research/publication/theory-and-applications-of-b-bit-minwise-hashing/
SIMHASH
SIMHASH
• The SimHash (a sign normal random projection) algorithm was proposed by Moses S. Charikar in 2002.
• SimHash originates from the concept of sign random projections (SRP):
• In short, for a given vector x, SRP utilizes a random vector w with each component generated from an i.i.d. normal distribution, i.e. wi ~ N(0,1), and stores only the sign of the projected data.
• Currently, SimHash is the only known LSH for cosine similarity:

sim(A,B) = (A · B) / (|A| · |B|) = Σᵢ aᵢbᵢ / ( √(Σᵢ aᵢ²) · √(Σᵢ bᵢ²) )
SIMHASH
• In fact, it is a dimensionality reduction technique that maps high-dimensional vectors to ƒ-bit fingerprints, where ƒ is small (e.g. 32 or 64).
• The SimHash algorithm considers a fingerprint as a "hash" of the document's properties and assumes that similar documents have similar hash values.
• This assumption requires the hash functions to be LSH and, as we already know, that isn't as trivial as it might first appear.
• If we consider 2 documents that differ in just a single byte and apply such popular hash functions as SHA-1 or MD5, they will get two completely different hash values that aren't close. This is why this approach requires a special rule for how to define hash functions with the required property.
• An attractive feature of the SimHash hash functions is that the output is a single bit (the sign).
SIMHASH: FINGERPRINTS
• To generate an ƒ-bit fingerprint, maintain an ƒ-dimensional vector V (all dimensions are 0 at the beginning).
• Each feature is hashed into an ƒ-bit hash value (using a hash function) and can be seen as a binary vector of ƒ bits.
• These ƒ bits (unique to the feature) increment/decrement the ƒ components of the vector by the weight of that feature as follows:
• if the i-th bit of the hash value is 1 => the i-th component of V is incremented by the weight of that feature
• if the i-th bit of the hash value is 0 => the i-th component of V is decremented by the weight of that feature
• When all features have been processed, the components of V are positive or negative.
• The signs of the components determine the corresponding bits of the final fingerprint.
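A small Python sketch of this procedure. The per-feature hash used here (MD5 truncated to ƒ bits) is an assumption for illustration; any well-mixing ƒ-bit hash function works.

import hashlib

def simhash(features, f=64):
    """features: iterable of (token, weight) pairs.
    Returns an f-bit fingerprint as an integer."""
    v = [0.0] * f
    for token, weight in features:
        # per-feature f-bit hash (MD5 truncated to f bits -- an assumption)
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) & ((1 << f) - 1)
        for i in range(f):
            if (h >> i) & 1:   # i-th bit is 1 -> increment component
                v[i] += weight
            else:              # i-th bit is 0 -> decrement component
                v[i] -= weight
    fp = 0
    for i in range(f):
        if v[i] > 0:           # the sign determines the fingerprint bit
            fp |= 1 << i
    return fp

# e.g. simhash([("ferret", 1.0), ("go", 0.9)], f=16)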
SIMHASH: FINGERPRINTS EXAMPLE
• Consider a document after POS tagging: {("ferret", NOUN), ("go", VERB)}
• Decide to have weights for features based on their POS tags: {"NOUN": 1.0, "VERB": 0.9}
• h(x) is some hash function, and we will calculate a 16-bit (ƒ = 16) fingerprint F
• Maintain V = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0) for the document
• h("ferret") = 0101010101010111
• h("ferret")[1] == 0 => V[1] = V[1] - 1.0 = -1.0
• h("ferret")[2] == 1 => V[2] = V[2] + 1.0 = 1.0
• …
• V = (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, 1.0, 1.0)
• h("go") = 0111011100100111
• h("go")[1] == 0 => V[1] = V[1] - 0.9 = -1.9
• h("go")[2] == 1 => V[2] = V[2] + 0.9 = 1.9
• …
• V = (-1.9, 1.9, -0.1, 1.9, -1.9, 1.9, -0.1, 1.9, -1.9, 0.1, -0.1, 0.1, -1.9, 1.9, 1.9, 1.9)
• sign(V) = (-, +, -, +, -, +, -, +, -, +, -, +, -, +, +, +)
• fingerprint F = 0101010101010111
SIMHASH: FINGERPRINTS
After converting documents into simhash fingerprints, we face the following design problem:
• Given an ƒ-bit fingerprint of a document, how do we quickly discover other fingerprints that differ in at most k bit-positions?
SIMHASH: DATA STRUCTURE
• The SimHash data structure consists of t tables: T1, T2, …, Tt. With each table Ti are associated 2 quantities:
• an integer pi (number of top bit-positions used in comparisons)
• a permutation πi over the ƒ bit-positions
• Table Ti is constructed by applying permutation πi to each existing fingerprint F, and the resulting set of permuted ƒ-bit fingerprints is sorted.
• To make such a structure fit in main memory, it is possible to compress each table (which decreases its size by approximately half).
• Example:
DF = {0110, 0010, 0010, 1001, 1110}
πi = {1→2, 2→4, 3→3, 4→1}
• πi(0110) = 0011, πi(0010) = 0010, πi(1001) = 1100, πi(1110) = 0111
• Sorted, table Ti is:

0 0 1 0
0 0 1 1
0 0 1 1
0 1 1 1
1 1 0 0
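A sketch of building one such table; permute_bits is an illustrative helper that applies the slide's 1-indexed, left-to-right permutation to an ƒ-bit fingerprint stored as an integer.

def permute_bits(x, perm, f=4):
    """Apply a bit permutation {src -> dst}, positions 1-indexed from the left."""
    y = 0
    for src, dst in perm.items():
        bit = (x >> (f - src)) & 1
        y |= bit << (f - dst)
    return y

perm_i = {1: 2, 2: 4, 3: 3, 4: 1}
fingerprints = [0b0110, 0b0010, 0b0010, 0b1001, 0b1110]
T_i = sorted(permute_bits(x, perm_i) for x in fingerprints)
# T_i == [0b0010, 0b0011, 0b0011, 0b0111, 0b1100]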
SIMHASH: ALGORITHM
• Actually, the algorithm follows the general approach we saw for LSH.
• Given a fingerprint F and an integer k, we probe the tables from the data structure in parallel:
• For each table Ti we identify all permuted fingerprints in Ti whose top pi bit-positions match the top pi bit-positions of πi(F).
• After that, for each such permuted fingerprint, we check if it differs from πi(F) in at most k bit-positions (in fact, comparing the Hamming distance between such fingerprints).
• Identification of the first fingerprint in table Ti whose top pi bit-positions match the top pi bit-positions of πi(F) (Step 1) can be done in O(pi) steps by binary search.
• If we assume that each fingerprint is a truly random bit sequence, it is possible to shrink the run time to O(log pi) steps in expectation (using interpolation search).
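A sketch of one such probe against a sorted table of permuted fingerprints stored as integers, using the standard bisect module for the binary search; the function name and signature are illustrative.

from bisect import bisect_left

def probe(table, pf, p, k, f=64):
    """table: sorted permuted fingerprints; pf: permuted query fingerprint.
    Return entries matching pf's top p bits within Hamming distance k."""
    shift = f - p
    prefix = pf >> shift
    start = bisect_left(table, prefix << shift)  # first entry with this prefix
    matches = []
    for g in table[start:]:
        if g >> shift != prefix:  # left the run of matching prefixes
            break
        if bin(g ^ pf).count("1") <= k:  # Hamming distance check (Step 2)
            matches.append(g)
    return matches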
SIMHASH: PARAMETERS
• For a repository of 8B web-pages, ƒ=64 (64-bit simhash fingerprints)
and k = 3 are reasonable. (Gurmeet Singh Manku et al., 2007)
• For the given ƒ and k, to find a reasonable combination of t (number of tables) and pi (number of top bit-positions used in comparisons), note that they are not independent of each other:
• Increasing the number of tables t increases pi and hence reduces the query time (large values of pi let us avoid checking too many fingerprints in Step 2).
• Decreasing the number of tables t reduces storage requirements, but reduces pi and hence increases the query time (a small set of permutations avoids space blowup).
SIMHASH: PARAMETERS
• Consider ƒ = 64 (64-bit fingerprints) and k = 3, so documents whose fingerprints differ in at most 3 bit-positions will be considered similar (duplicates).
• Assume we have 8B = 2^34 existing fingerprints and decide to use t = 10 tables:
• For instance, t = 10 = C(5,2), so we can use 5 blocks and choose 2 out of them in 10 ways.
• Split the 64 bits into 5 blocks having 13, 13, 13, 13 and 12 bits respectively.
• For each such choice, permutation π corresponds to making the bits lying in the chosen blocks the leading bits.
• pi is the total number of bits in the chosen blocks, thus pi = 25 or 26.
• On average, a probe retrieves at most 2^(34−25) = 2^9 = 512 (permuted) fingerprints:
• if we seek all (permuted) fingerprints which match the top pi bits of a given (permuted) fingerprint, we expect 2^(d−pi) matches, where 2^d is the total number of fingerprints.
• The total number of probes for 64-bit fingerprints and k = 3 is C(64,3) = 41664 probes.
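The block-choice arithmetic above is easy to sanity-check (math.comb requires Python 3.8+):

from math import comb

assert comb(5, 2) == 10        # t = 10 ways to choose 2 blocks out of 5
assert comb(64, 3) == 41664    # probes for f = 64, k = 3
assert 2 ** (34 - 25) == 512   # expected fingerprints per probe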
SIMHASH: PYTHON
• https://github.com/leonsunliang/simhash

A Python implementation of Simhash.
• http://bibliographie-trac.ub.rub.de/browser/simhash.py

Implementation of Charikar's simhashes in Python
SIMHASH: READ MORE
• http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf
• http://jmlr.org/proceedings/papers/v33/shrivastava14.pdf
• http://www.wwwconference.org/www2007/papers/paper215.pdf
• http://ferd.ca/simhashing-hopefully-made-simple.html
▸ @gakhov
▸ linkedin.com/in/gakhov
▸ www.datacrucis.com
THANK YOU

More Related Content

What's hot

Overview of Storage and Indexing ...
Overview of Storage and Indexing                                             ...Overview of Storage and Indexing                                             ...
Overview of Storage and Indexing ...Javed Khan
 
Cost estimation for Query Optimization
Cost estimation for Query OptimizationCost estimation for Query Optimization
Cost estimation for Query OptimizationRavinder Kamboj
 
Little o and little omega
Little o and little omegaLittle o and little omega
Little o and little omegaRajesh K Shukla
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmHema Kashyap
 
Protocols for wireless sensor networks
Protocols for wireless sensor networks Protocols for wireless sensor networks
Protocols for wireless sensor networks DEBABRATASINGH3
 
Elasticsearch for beginners
Elasticsearch for beginnersElasticsearch for beginners
Elasticsearch for beginnersNeil Baker
 
Count-Distinct Problem
Count-Distinct ProblemCount-Distinct Problem
Count-Distinct ProblemKai Zhang
 
Vertica 7.0 Architecture Overview
Vertica 7.0 Architecture OverviewVertica 7.0 Architecture Overview
Vertica 7.0 Architecture OverviewAndrey Karpov
 
9. chapter 8 np hard and np complete problems
9. chapter 8   np hard and np complete problems9. chapter 8   np hard and np complete problems
9. chapter 8 np hard and np complete problemsJyotsna Suryadevara
 
Introduction to redis - version 2
Introduction to redis - version 2Introduction to redis - version 2
Introduction to redis - version 2Dvir Volk
 
Living with SQL and NoSQL at craigslist, a Pragmatic Approach
Living with SQL and NoSQL at craigslist, a Pragmatic ApproachLiving with SQL and NoSQL at craigslist, a Pragmatic Approach
Living with SQL and NoSQL at craigslist, a Pragmatic ApproachJeremy Zawodny
 
SUN Network File system - Design, Implementation and Experience
SUN Network File system - Design, Implementation and Experience SUN Network File system - Design, Implementation and Experience
SUN Network File system - Design, Implementation and Experience aniadkar
 
Properties of Regular Expressions
Properties of Regular ExpressionsProperties of Regular Expressions
Properties of Regular ExpressionsShiraz316
 

What's hot (20)

Overview of Storage and Indexing ...
Overview of Storage and Indexing                                             ...Overview of Storage and Indexing                                             ...
Overview of Storage and Indexing ...
 
Cost estimation for Query Optimization
Cost estimation for Query OptimizationCost estimation for Query Optimization
Cost estimation for Query Optimization
 
Little o and little omega
Little o and little omegaLittle o and little omega
Little o and little omega
 
Cache optimization
Cache optimizationCache optimization
Cache optimization
 
6LoWPAN.pptx
6LoWPAN.pptx6LoWPAN.pptx
6LoWPAN.pptx
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithm
 
Protocols for wireless sensor networks
Protocols for wireless sensor networks Protocols for wireless sensor networks
Protocols for wireless sensor networks
 
OLSR | Optimized Link State Routing Protocol
OLSR | Optimized Link State Routing ProtocolOLSR | Optimized Link State Routing Protocol
OLSR | Optimized Link State Routing Protocol
 
Multicast in OpenStack
Multicast in OpenStackMulticast in OpenStack
Multicast in OpenStack
 
Elasticsearch for beginners
Elasticsearch for beginnersElasticsearch for beginners
Elasticsearch for beginners
 
Count-Distinct Problem
Count-Distinct ProblemCount-Distinct Problem
Count-Distinct Problem
 
Vertica 7.0 Architecture Overview
Vertica 7.0 Architecture OverviewVertica 7.0 Architecture Overview
Vertica 7.0 Architecture Overview
 
9. chapter 8 np hard and np complete problems
9. chapter 8   np hard and np complete problems9. chapter 8   np hard and np complete problems
9. chapter 8 np hard and np complete problems
 
NoSQL
NoSQLNoSQL
NoSQL
 
N queen problem
N queen problemN queen problem
N queen problem
 
Dataflow Analysis
Dataflow AnalysisDataflow Analysis
Dataflow Analysis
 
Introduction to redis - version 2
Introduction to redis - version 2Introduction to redis - version 2
Introduction to redis - version 2
 
Living with SQL and NoSQL at craigslist, a Pragmatic Approach
Living with SQL and NoSQL at craigslist, a Pragmatic ApproachLiving with SQL and NoSQL at craigslist, a Pragmatic Approach
Living with SQL and NoSQL at craigslist, a Pragmatic Approach
 
SUN Network File system - Design, Implementation and Experience
SUN Network File system - Design, Implementation and Experience SUN Network File system - Design, Implementation and Experience
SUN Network File system - Design, Implementation and Experience
 
Properties of Regular Expressions
Properties of Regular ExpressionsProperties of Regular Expressions
Properties of Regular Expressions
 

Viewers also liked

Probabilistic data structures. Part 3. Frequency
Probabilistic data structures. Part 3. FrequencyProbabilistic data structures. Part 3. Frequency
Probabilistic data structures. Part 3. FrequencyAndrii Gakhov
 
Probabilistic data structures. Part 2. Cardinality
Probabilistic data structures. Part 2. CardinalityProbabilistic data structures. Part 2. Cardinality
Probabilistic data structures. Part 2. CardinalityAndrii Gakhov
 
Bloom filter
Bloom filterBloom filter
Bloom filterfeng lee
 
Data Mining - lecture 7 - 2014
Data Mining - lecture 7 - 2014Data Mining - lecture 7 - 2014
Data Mining - lecture 7 - 2014Andrii Gakhov
 
Вероятностные структуры данных
Вероятностные структуры данныхВероятностные структуры данных
Вероятностные структуры данныхAndrii Gakhov
 
Big Data with Semantics - StampedeCon 2012
Big Data with Semantics - StampedeCon 2012Big Data with Semantics - StampedeCon 2012
Big Data with Semantics - StampedeCon 2012StampedeCon
 
Thinking in MapReduce - StampedeCon 2013
Thinking in MapReduce - StampedeCon 2013Thinking in MapReduce - StampedeCon 2013
Thinking in MapReduce - StampedeCon 2013StampedeCon
 
[Vldb 2013] skyline operator on anti correlated distributions
[Vldb 2013] skyline operator on anti correlated distributions[Vldb 2013] skyline operator on anti correlated distributions
[Vldb 2013] skyline operator on anti correlated distributionsWooSung Choi
 
Learning a nonlinear embedding by preserving class neibourhood structure 최종
Learning a nonlinear embedding by preserving class neibourhood structure   최종Learning a nonlinear embedding by preserving class neibourhood structure   최종
Learning a nonlinear embedding by preserving class neibourhood structure 최종WooSung Choi
 
Dsh data sensitive hashing for high dimensional k-nn search
Dsh  data sensitive hashing for high dimensional k-nn searchDsh  data sensitive hashing for high dimensional k-nn search
Dsh data sensitive hashing for high dimensional k-nn searchWooSung Choi
 
Economic Development and Satellite Images
Economic Development and Satellite ImagesEconomic Development and Satellite Images
Economic Development and Satellite ImagesPaul Raschky
 
An optimal and progressive algorithm for skyline queries slide
An optimal and progressive algorithm for skyline queries slideAn optimal and progressive algorithm for skyline queries slide
An optimal and progressive algorithm for skyline queries slideWooSung Choi
 
Implementing a Fileserver with Nginx and Lua
Implementing a Fileserver with Nginx and LuaImplementing a Fileserver with Nginx and Lua
Implementing a Fileserver with Nginx and LuaAndrii Gakhov
 
Bloom filter
Bloom filterBloom filter
Bloom filterwang ping
 

Viewers also liked (20)

Probabilistic data structures. Part 3. Frequency
Probabilistic data structures. Part 3. FrequencyProbabilistic data structures. Part 3. Frequency
Probabilistic data structures. Part 3. Frequency
 
Probabilistic data structures. Part 2. Cardinality
Probabilistic data structures. Part 2. CardinalityProbabilistic data structures. Part 2. Cardinality
Probabilistic data structures. Part 2. Cardinality
 
Bloom filter
Bloom filterBloom filter
Bloom filter
 
Bloom filters
Bloom filtersBloom filters
Bloom filters
 
Data Mining - lecture 7 - 2014
Data Mining - lecture 7 - 2014Data Mining - lecture 7 - 2014
Data Mining - lecture 7 - 2014
 
爬虫点滴
爬虫点滴爬虫点滴
爬虫点滴
 
14 Skip Lists
14 Skip Lists14 Skip Lists
14 Skip Lists
 
Вероятностные структуры данных
Вероятностные структуры данныхВероятностные структуры данных
Вероятностные структуры данных
 
Big Data with Semantics - StampedeCon 2012
Big Data with Semantics - StampedeCon 2012Big Data with Semantics - StampedeCon 2012
Big Data with Semantics - StampedeCon 2012
 
Thinking in MapReduce - StampedeCon 2013
Thinking in MapReduce - StampedeCon 2013Thinking in MapReduce - StampedeCon 2013
Thinking in MapReduce - StampedeCon 2013
 
[Vldb 2013] skyline operator on anti correlated distributions
[Vldb 2013] skyline operator on anti correlated distributions[Vldb 2013] skyline operator on anti correlated distributions
[Vldb 2013] skyline operator on anti correlated distributions
 
Learning a nonlinear embedding by preserving class neibourhood structure 최종
Learning a nonlinear embedding by preserving class neibourhood structure   최종Learning a nonlinear embedding by preserving class neibourhood structure   최종
Learning a nonlinear embedding by preserving class neibourhood structure 최종
 
Dsh data sensitive hashing for high dimensional k-nn search
Dsh  data sensitive hashing for high dimensional k-nn searchDsh  data sensitive hashing for high dimensional k-nn search
Dsh data sensitive hashing for high dimensional k-nn search
 
Economic Development and Satellite Images
Economic Development and Satellite ImagesEconomic Development and Satellite Images
Economic Development and Satellite Images
 
An optimal and progressive algorithm for skyline queries slide
An optimal and progressive algorithm for skyline queries slideAn optimal and progressive algorithm for skyline queries slide
An optimal and progressive algorithm for skyline queries slide
 
Implementing a Fileserver with Nginx and Lua
Implementing a Fileserver with Nginx and LuaImplementing a Fileserver with Nginx and Lua
Implementing a Fileserver with Nginx and Lua
 
Bloom filter
Bloom filterBloom filter
Bloom filter
 
Bloom filter
Bloom filterBloom filter
Bloom filter
 
Locality sensitive hashing
Locality sensitive hashingLocality sensitive hashing
Locality sensitive hashing
 
skip list
skip listskip list
skip list
 

Similar to Tech talk explores probabilistic data structures

Local sensitive hashing & minhash on facebook friend
Local sensitive hashing & minhash on facebook friendLocal sensitive hashing & minhash on facebook friend
Local sensitive hashing & minhash on facebook friendChengeng Ma
 
Hashing And Hashing Tables
Hashing And Hashing TablesHashing And Hashing Tables
Hashing And Hashing TablesChinmaya M. N
 
Building graphs to discover information by David Martínez at Big Data Spain 2015
Building graphs to discover information by David Martínez at Big Data Spain 2015Building graphs to discover information by David Martínez at Big Data Spain 2015
Building graphs to discover information by David Martínez at Big Data Spain 2015Big Data Spain
 
large_scale_search.pdf
large_scale_search.pdflarge_scale_search.pdf
large_scale_search.pdfEmerald72
 
Open LSH - september 2014 update
Open LSH  - september 2014 updateOpen LSH  - september 2014 update
Open LSH - september 2014 updateJ Singh
 
Mining of massive datasets using locality sensitive hashing (LSH)
Mining of massive datasets using locality sensitive hashing (LSH)Mining of massive datasets using locality sensitive hashing (LSH)
Mining of massive datasets using locality sensitive hashing (LSH)J Singh
 
introduction to trees,graphs,hashing
introduction to trees,graphs,hashingintroduction to trees,graphs,hashing
introduction to trees,graphs,hashingAkhil Prem
 
Data Step Hash Object vs SQL Join
Data Step Hash Object vs SQL JoinData Step Hash Object vs SQL Join
Data Step Hash Object vs SQL JoinGeoff Ness
 
Algorithms for Query Processing and Optimization of Spatial Operations
Algorithms for Query Processing and Optimization of Spatial OperationsAlgorithms for Query Processing and Optimization of Spatial Operations
Algorithms for Query Processing and Optimization of Spatial OperationsNatasha Mandal
 
Segmentation - based Historical Handwritten Word Spotting using document-spec...
Segmentation - based Historical Handwritten Word Spotting using document-spec...Segmentation - based Historical Handwritten Word Spotting using document-spec...
Segmentation - based Historical Handwritten Word Spotting using document-spec...Konstantinos Zagoris
 
Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...
Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...
Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...DECK36
 
Conceptos básicos. Seminario web 1: Introducción a NoSQL
Conceptos básicos. Seminario web 1: Introducción a NoSQLConceptos básicos. Seminario web 1: Introducción a NoSQL
Conceptos básicos. Seminario web 1: Introducción a NoSQLMongoDB
 
Chapter 4.pptx
Chapter 4.pptxChapter 4.pptx
Chapter 4.pptxTekle12
 
Sergey Nikolenko and Anton Alekseev User Profiling in Text-Based Recommende...
Sergey Nikolenko and  Anton Alekseev  User Profiling in Text-Based Recommende...Sergey Nikolenko and  Anton Alekseev  User Profiling in Text-Based Recommende...
Sergey Nikolenko and Anton Alekseev User Profiling in Text-Based Recommende...AIST
 
Deep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکی
Deep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکیDeep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکی
Deep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکیEhsan Asgarian
 

Similar to Tech talk explores probabilistic data structures (20)

Local sensitive hashing & minhash on facebook friend
Local sensitive hashing & minhash on facebook friendLocal sensitive hashing & minhash on facebook friend
Local sensitive hashing & minhash on facebook friend
 
Hashing And Hashing Tables
Hashing And Hashing TablesHashing And Hashing Tables
Hashing And Hashing Tables
 
Hashing
HashingHashing
Hashing
 
Building graphs to discover information by David Martínez at Big Data Spain 2015
Building graphs to discover information by David Martínez at Big Data Spain 2015Building graphs to discover information by David Martínez at Big Data Spain 2015
Building graphs to discover information by David Martínez at Big Data Spain 2015
 
large_scale_search.pdf
large_scale_search.pdflarge_scale_search.pdf
large_scale_search.pdf
 
Open LSH - september 2014 update
Open LSH  - september 2014 updateOpen LSH  - september 2014 update
Open LSH - september 2014 update
 
Mining of massive datasets using locality sensitive hashing (LSH)
Mining of massive datasets using locality sensitive hashing (LSH)Mining of massive datasets using locality sensitive hashing (LSH)
Mining of massive datasets using locality sensitive hashing (LSH)
 
Isam
IsamIsam
Isam
 
asdfew.pptx
asdfew.pptxasdfew.pptx
asdfew.pptx
 
introduction to trees,graphs,hashing
introduction to trees,graphs,hashingintroduction to trees,graphs,hashing
introduction to trees,graphs,hashing
 
Data Step Hash Object vs SQL Join
Data Step Hash Object vs SQL JoinData Step Hash Object vs SQL Join
Data Step Hash Object vs SQL Join
 
Algorithms for Query Processing and Optimization of Spatial Operations
Algorithms for Query Processing and Optimization of Spatial OperationsAlgorithms for Query Processing and Optimization of Spatial Operations
Algorithms for Query Processing and Optimization of Spatial Operations
 
Hashing
HashingHashing
Hashing
 
Presentation shexer
Presentation shexerPresentation shexer
Presentation shexer
 
Segmentation - based Historical Handwritten Word Spotting using document-spec...
Segmentation - based Historical Handwritten Word Spotting using document-spec...Segmentation - based Historical Handwritten Word Spotting using document-spec...
Segmentation - based Historical Handwritten Word Spotting using document-spec...
 
Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...
Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...
Real-time Data De-duplication using Locality-sensitive Hashing powered by Sto...
 
Conceptos básicos. Seminario web 1: Introducción a NoSQL
Conceptos básicos. Seminario web 1: Introducción a NoSQLConceptos básicos. Seminario web 1: Introducción a NoSQL
Conceptos básicos. Seminario web 1: Introducción a NoSQL
 
Chapter 4.pptx
Chapter 4.pptxChapter 4.pptx
Chapter 4.pptx
 
Sergey Nikolenko and Anton Alekseev User Profiling in Text-Based Recommende...
Sergey Nikolenko and  Anton Alekseev  User Profiling in Text-Based Recommende...Sergey Nikolenko and  Anton Alekseev  User Profiling in Text-Based Recommende...
Sergey Nikolenko and Anton Alekseev User Profiling in Text-Based Recommende...
 
Deep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکی
Deep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکیDeep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکی
Deep dive to ElasticSearch - معرفی ابزار جستجوی الاستیکی
 

More from Andrii Gakhov

Let's start GraphQL: structure, behavior, and architecture
Let's start GraphQL: structure, behavior, and architectureLet's start GraphQL: structure, behavior, and architecture
Let's start GraphQL: structure, behavior, and architectureAndrii Gakhov
 
Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...
Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...
Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...Andrii Gakhov
 
Too Much Data? - Just Sample, Just Hash, ...
Too Much Data? - Just Sample, Just Hash, ...Too Much Data? - Just Sample, Just Hash, ...
Too Much Data? - Just Sample, Just Hash, ...Andrii Gakhov
 
Pecha Kucha: Ukrainian Food Traditions
Pecha Kucha: Ukrainian Food TraditionsPecha Kucha: Ukrainian Food Traditions
Pecha Kucha: Ukrainian Food TraditionsAndrii Gakhov
 
Recurrent Neural Networks. Part 1: Theory
Recurrent Neural Networks. Part 1: TheoryRecurrent Neural Networks. Part 1: Theory
Recurrent Neural Networks. Part 1: TheoryAndrii Gakhov
 
Apache Big Data Europe 2015: Selected Talks
Apache Big Data Europe 2015: Selected TalksApache Big Data Europe 2015: Selected Talks
Apache Big Data Europe 2015: Selected TalksAndrii Gakhov
 
Swagger / Quick Start Guide
Swagger / Quick Start GuideSwagger / Quick Start Guide
Swagger / Quick Start GuideAndrii Gakhov
 
API Days Berlin highlights
API Days Berlin highlightsAPI Days Berlin highlights
API Days Berlin highlightsAndrii Gakhov
 
ELK - What's new and showcases
ELK - What's new and showcasesELK - What's new and showcases
ELK - What's new and showcasesAndrii Gakhov
 
Apache Spark Overview @ ferret
Apache Spark Overview @ ferretApache Spark Overview @ ferret
Apache Spark Overview @ ferretAndrii Gakhov
 
Data Mining - lecture 8 - 2014
Data Mining - lecture 8 - 2014Data Mining - lecture 8 - 2014
Data Mining - lecture 8 - 2014Andrii Gakhov
 
Data Mining - lecture 6 - 2014
Data Mining - lecture 6 - 2014Data Mining - lecture 6 - 2014
Data Mining - lecture 6 - 2014Andrii Gakhov
 
Data Mining - lecture 5 - 2014
Data Mining - lecture 5 - 2014Data Mining - lecture 5 - 2014
Data Mining - lecture 5 - 2014Andrii Gakhov
 
Data Mining - lecture 4 - 2014
Data Mining - lecture 4 - 2014Data Mining - lecture 4 - 2014
Data Mining - lecture 4 - 2014Andrii Gakhov
 
Data Mining - lecture 3 - 2014
Data Mining - lecture 3 - 2014Data Mining - lecture 3 - 2014
Data Mining - lecture 3 - 2014Andrii Gakhov
 
Decision Theory - lecture 1 (introduction)
Decision Theory - lecture 1 (introduction)Decision Theory - lecture 1 (introduction)
Decision Theory - lecture 1 (introduction)Andrii Gakhov
 
Data Mining - lecture 2 - 2014
Data Mining - lecture 2 - 2014Data Mining - lecture 2 - 2014
Data Mining - lecture 2 - 2014Andrii Gakhov
 
Data Mining - lecture 1 - 2014
Data Mining - lecture 1 - 2014Data Mining - lecture 1 - 2014
Data Mining - lecture 1 - 2014Andrii Gakhov
 
Buzzwords 2014 / Overview / part2
Buzzwords 2014 / Overview / part2Buzzwords 2014 / Overview / part2
Buzzwords 2014 / Overview / part2Andrii Gakhov
 

More from Andrii Gakhov (20)

Let's start GraphQL: structure, behavior, and architecture
Let's start GraphQL: structure, behavior, and architectureLet's start GraphQL: structure, behavior, and architecture
Let's start GraphQL: structure, behavior, and architecture
 
Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...
Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...
Exceeding Classical: Probabilistic Data Structures in Data Intensive Applicat...
 
Too Much Data? - Just Sample, Just Hash, ...
Too Much Data? - Just Sample, Just Hash, ...Too Much Data? - Just Sample, Just Hash, ...
Too Much Data? - Just Sample, Just Hash, ...
 
DNS Delegation
DNS DelegationDNS Delegation
DNS Delegation
 
Pecha Kucha: Ukrainian Food Traditions
Pecha Kucha: Ukrainian Food TraditionsPecha Kucha: Ukrainian Food Traditions
Pecha Kucha: Ukrainian Food Traditions
 
Recurrent Neural Networks. Part 1: Theory
Recurrent Neural Networks. Part 1: TheoryRecurrent Neural Networks. Part 1: Theory
Recurrent Neural Networks. Part 1: Theory
 
Apache Big Data Europe 2015: Selected Talks
Apache Big Data Europe 2015: Selected TalksApache Big Data Europe 2015: Selected Talks
Apache Big Data Europe 2015: Selected Talks
 
Swagger / Quick Start Guide
Swagger / Quick Start GuideSwagger / Quick Start Guide
Swagger / Quick Start Guide
 
API Days Berlin highlights
API Days Berlin highlightsAPI Days Berlin highlights
API Days Berlin highlights
 
ELK - What's new and showcases
ELK - What's new and showcasesELK - What's new and showcases
ELK - What's new and showcases
 
Apache Spark Overview @ ferret
Apache Spark Overview @ ferretApache Spark Overview @ ferret
Apache Spark Overview @ ferret
 
Data Mining - lecture 8 - 2014
Data Mining - lecture 8 - 2014Data Mining - lecture 8 - 2014
Data Mining - lecture 8 - 2014
 
Data Mining - lecture 6 - 2014
Data Mining - lecture 6 - 2014Data Mining - lecture 6 - 2014
Data Mining - lecture 6 - 2014
 
Data Mining - lecture 5 - 2014
Data Mining - lecture 5 - 2014Data Mining - lecture 5 - 2014
Data Mining - lecture 5 - 2014
 
Data Mining - lecture 4 - 2014
Data Mining - lecture 4 - 2014Data Mining - lecture 4 - 2014
Data Mining - lecture 4 - 2014
 
Data Mining - lecture 3 - 2014
Data Mining - lecture 3 - 2014Data Mining - lecture 3 - 2014
Data Mining - lecture 3 - 2014
 
Decision Theory - lecture 1 (introduction)
Decision Theory - lecture 1 (introduction)Decision Theory - lecture 1 (introduction)
Decision Theory - lecture 1 (introduction)
 
Data Mining - lecture 2 - 2014
Data Mining - lecture 2 - 2014Data Mining - lecture 2 - 2014
Data Mining - lecture 2 - 2014
 
Data Mining - lecture 1 - 2014
Data Mining - lecture 1 - 2014Data Mining - lecture 1 - 2014
Data Mining - lecture 1 - 2014
 
Buzzwords 2014 / Overview / part2
Buzzwords 2014 / Overview / part2Buzzwords 2014 / Overview / part2
Buzzwords 2014 / Overview / part2
 

Recently uploaded

FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhisoniya singh
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxOnBoard
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 

Recently uploaded (20)

FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | DelhiFULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
FULL ENJOY 🔝 8264348440 🔝 Call Girls in Diplomatic Enclave | Delhi
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
Neo4j - How KGs are shaping the future of Generative AI at AWS Summit London ...
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
The transition to renewables in India.pdf
The transition to renewables in India.pdfThe transition to renewables in India.pdf
The transition to renewables in India.pdf
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
Maximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptxMaximizing Board Effectiveness 2024 Webinar.pptx
Maximizing Board Effectiveness 2024 Webinar.pptx
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 

Tech talk explores probabilistic data structures

  • 1. tech talk @ ferret Andrii Gakhov PROBABILISTIC DATA STRUCTURES ALL YOU WANTED TO KNOW BUT WERE AFRAID TO ASK PART 4: SIMILARITY
  • 3. • To find clusters of similar documents from the document set • To find duplicates of the document in the document set THE PROBLEM
  • 5. LOCALITY SENTSITIVE HASHING • Locality Sentsitive Hashing (LSH) was introduced by Indyk and Motwani in 1998 as a family of functions with the following property: • similar input objects (from the domain of such functions) have a higher probability of colliding in the range space than non-similar ones • LSH differs from conventional and cryptographic hash functions because it aims to maximize the probability of a “collision” for similar items. • Intuitively, LSH is based on the simple idea that, if two points are close together, then after a “projection” operation these two points will remain close together.
  • 6. LSH: PROPERTIES • One general approach to LSH is to “hash” items several times, in such a way that similar items are more likely to be hashed to the same bucket than dissimilar items are. We then consider any pair that hashed to the same bucket for any of the hashings to be a candidate pair.
  • 7. LSH: PROPERTIES • Examples of LSH functions are known for resemblance, cosine and hamming distances. • LSH functions do not exist for certain commonly used similarity measures in information retrieval, the Dice coefficient and the Overlap coefficient (Moses S. Charikar, 2002)
  • 8. LSH: PROBLEM • Given a query point, we wish to find the points in a large database that are closest to the query. We wish to guarantee with a high probability equal to 1 − δ that we return the nearest neighbor for any query point. • Conceptually, this problem is easily solved by iterating through each point in the database and calculating the distance to the query object. However, our database may contain billions of objects — each object described by a vector that contains hundreds of dimensions. • LSH proves useful to identify nearest neighbors quickly even when the database is very large.
  • 9. LSH: FINDING DUPLICATE PAGES ON THE WEB How to select features for the webpage? • document vector from page content • Compute a “document-vector” by case-folding, stop-word removal, stemming, computing term frequencies and, finally, weighting each term by its inverse document frequency (IDF) • shingles from page content • Consider a document as a sequence of words. A shingle is the hash value of a k-gram, i.e., a subsequence of k consecutive words. • Hashes for shingles can be computed using Rabin’s fingerprints • connectivity information • The idea is that similar web pages have several incoming links in common. • anchor text and anchor window • The idea is that similar documents should have similar anchor text. The words in the anchor text or anchor window are folded into the document-vector itself. • phrases • The idea is to identify phrases and compute a document-vector that includes phrases as terms.
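As an illustration of the shingling feature, here is a minimal Python sketch (not from the slides) that turns a document into a set of word-level k-shingles; Python's built-in hash stands in for Rabin fingerprints, which a production system would use instead:

    def shingles(text, k=4):
        # split into words; a real pipeline would also strip punctuation, etc.
        words = text.lower().split()
        # hash every window of k consecutive words (built-in hash as a stand-in
        # for Rabin fingerprints)
        return {hash(tuple(words[i:i + k])) for i in range(len(words) - k + 1)}

    doc_a = "the quick brown fox jumps over the lazy dog"
    doc_b = "a quick brown fox jumps over a lazy dog"
    print(shingles(doc_a) & shingles(doc_b))  # shared shingles suggest overlap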
  • 10. LSH: FINDING DUPLICATE PAGES ON THE WEB • The Web contains many duplicate pages, partly because content is duplicated across sites and partly because more than one URL can point to the same page. • If we use shingles as features, each shingle represents a portion of a Web page and is computed by forming a histogram of the words found within that portion of the page. • We can test whether a portion of the page is duplicated elsewhere on the Web by looking for other shingles with the same histogram. If the shingles of a new page match shingles from the database, then it is likely that the new page bears a strong resemblance to an existing page. • Because Web pages are surrounded by navigational and other information that changes from site to site, it is useful to use a nearest-neighbor solution that can employ LSH functions.
  • 11. LSH: RETRIEVING IMAGES • LSH can be used in image retrieval as an object recognition tool. We compute a detailed metric for many different orientations and configurations of an object we wish to recognize. • Then, given a new image we simply check our database to see if a precomputed object’s metrics are close to our query. This database contains millions of poses and LSH allows us to quickly check if the query object is known.
  • 12. LSH: RETRIEVING MUSIC • In music retrieval, conventional hashes and robust features are typically used to find musical matches. • The features can be fingerprints, i.e., representations of the audio signal that are robust to the common kinds of degradation audio undergoes before it reaches our ears. • Fingerprints can be computed, for instance, by noting the peaks in the spectrum (because they are robust to noise) and encoding their position in time and space. The user then just has to query the database for the same fingerprint. • However, to find similar songs we cannot use fingerprints, because these differ when a song is remixed for a new audience or when a different artist performs the same song. Instead, we can use several seconds of the song (a snippet) as a shingle. • To determine if two songs are similar, we query the database and see if a large enough number of the query shingles are close to one song in the database. • Although closeness depends on the feature vector, we know that long shingles provide specificity. This is particularly important because we can eliminate duplicates to improve search results and link recommendation data between similar songs.
  • 13. LSH: PYTHON • https://github.com/ekzhu/datasketch
datasketch gives you probabilistic data structures that can process very large amounts of data • https://github.com/simonemainardi/LSHash
 LSHash is a fast Python implementation of locality sensitive hashing with persistence support • https://github.com/go2starr/lshhdc
 LSHHDC: Locality-Sensitive Hashing based High Dimensional Clustering
  • 14. LSH: READ MORE • http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/IndykM-curse.pdf • http://infolab.stanford.edu/~ullman/mmds/book.pdf • http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf • http://www.matlabi.ir/wp-content/uploads/bank_papers/g_paper/g15_Matlabi.ir_Locality-Sensitive%20Hashing%20for%20Finding%20Nearest%20Neighbors.pdf • https://en.wikipedia.org/wiki/Locality-sensitive_hashing
  • 16. MINHASH: RESEMBLANCE SIMILARITY • When talking about the relation between two documents A and B, we are mostly interested in the concept of "roughly the same", which can be mathematically formalized as their resemblance (or Jaccard similarity). • Every document can be seen as a set of features, so the problem reduces to a set-intersection problem (finding the similarity between sets). • The resemblance r(A,B) of two documents is a number between 0 and 1, such that it is close to 1 for documents that are "roughly the same":

    J(A,B) = r(A,B) = |A ∩ B| / |A ∪ B|

• For the duplicate-detection problem we can reformulate this as the task of finding pairs of documents whose resemblance r(A,B) exceeds a certain threshold R. • For the clustering problem we can use d(A,B) = 1 − r(A,B) as a metric to design algorithms that cluster a collection of documents into sets of closely resembling documents.
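Viewed as feature sets, the resemblance of two documents is a one-liner; a small sketch (not from the slides):

    def resemblance(a, b):
        # Jaccard similarity of two feature sets: |A ∩ B| / |A ∪ B|
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a or b else 1.0

    print(resemblance("python is a programming language".split(),
                      "a programming language".split()))  # 3/5 = 0.6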
  • 17. MINHASH • MinHash (the minwise hashing algorithm) was proposed by Andrei Broder in 1997 • The minwise hashing family applies a random permutation π: Ω → Ω to the given set S and stores only the minimum value after the permutation mapping. • Formally, MinHash is defined as h_π^min(S) = min(π(S)); with k independent permutations π1, …, πk the stored sketch is (min π1(S), …, min πk(S))
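Read literally, the definition can be sketched with explicit permutations (feasible only for small universes; simulating permutations with hash functions, as done in the algorithm below, is what one would do in practice):

    import random

    def minhash_sketch(s, universe, k=3, seed=42):
        # apply k random permutations of the universe and keep the minimum
        # permuted index of the set's elements under each one
        rng = random.Random(seed)
        sketch = []
        for _ in range(k):
            perm = {x: i for i, x in enumerate(rng.sample(sorted(universe), len(universe)))}
            sketch.append(min(perm[x] for x in s))
        return sketch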
  • 18. MINHASH: MATRIX REPRESENTATION • A collection of sets can be visualized as a characteristic matrix CM. • The columns of the matrix correspond to the sets, and the rows correspond to elements of the universal set from which elements of the sets are drawn. • CM(r, c) = 1 if the element for row r is a member of the set for column c, otherwise CM(r, c) = 0 • NOTE: The characteristic matrix is unlikely to be the way the data is actually stored, but it is useful as a way to visualize the data. • One reason not to store the data as a matrix is that these matrices are almost always sparse (they have many more 0’s than 1’s) in practice.
  • 19. MINHASH: MATRIX REPRESENTATION • Consider the following sets (documents represented as bags of words): • S1 = {python, is, a, programming, language} • S2 = {java, is, a, programming, language} • S3 = {a, programming, language} • S4 = {python, is, a, snake} • We can assume that our universal set is limited to the words of these 4 documents. The characteristic matrix of these sets is:

    word         row  S1  S2  S3  S4
    a            0    1   1   1   1
    is           1    1   1   0   1
    java         2    0   1   0   0
    language     3    1   1   1   0
    programming  4    1   1   1   0
    python       5    1   0   0   1
    snake        6    0   0   0   1
  • 20. MINHASH: INTUITION • The minhash value of any column of the characteristic matrix is the number of the first row, in the permuted order, in which the column has a 1. • To minhash a set represented by such columns, we need to pick a permutation of the rows.
  • 21. MINHASH: ONE STEP EXAMPLE • Consider the characteristic matrix below, with the rows listed in the permuted order (the last column gives each row's original number):

    pos  S1  S2  S3  S4  original row
    1    1   0   1   0   2
    2    1   0   0   0   5
    3    0   0   1   1   6
    4    1   0   0   1   1
    5    0   0   1   1   4
    6    0   1   1   0   3

• Record the original number of the first row with a “1” in each column: m(S1) = 2, m(S2) = 3, m(S3) = 2, m(S4) = 6 • Estimate the Jaccard similarity: J(Si,Sj) = 1 if m(Si) = m(Sj), or 0 otherwise; J(S1,S3) = 1, J(S1,S4) = 0
  • 22. MINHASH: INTUITION • It is not feasible to permute a large characteristic matrix explicitly. • Even picking a random permutation of millions of rows is time-consuming, and the necessary sorting of the rows would take even more time. • Fortunately, it is possible to simulate the effect of a random permutation with a random hash function that maps row numbers to as many buckets as there are rows. • So, instead of picking n random permutations of rows, we pick n randomly chosen hash functions h1, h2, . . . , hn on the rows
  • 23. MINHASH: PROPERTIES • Connection between minhash and the resemblance (Jaccard) similarity of the sets that are minhashed: • The probability that the minhash function for a random permutation of rows produces the same value for two sets equals the Jaccard similarity of those sets • Minhash(π) of a set is the number of the row (element) with the first non-zero entry in the permuted order π. • MinHash is an LSH for resemblance (Jaccard) similarity, which is defined over binary vectors.
  • 24. MINHASH: ALGORITHM • Pick k randomly chosen hash functions h1, h2, . . . , hk • Build the characteristic matrix CM for the collection of sets • From the column representing set S, construct the minhash signature for S: the vector [h1(S), h2(S), . . . , hk(S)] • Construct a signature matrix SIG, where SIG(i, c) corresponds to the i-th hash function and column c. Initially, set SIG(i, c) = ∞ for each i and c. • On each row r: • Compute h1(r), h2(r), . . . , hk(r) • For each column c: • If CM(r,c) = 1, then set SIG(i, c) = min{SIG(i,c), hi(r)} for each i = 1 . . . k • Estimate the resemblance (Jaccard) similarities of the underlying sets from the final signature matrix (see the sketch below)
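A direct transcription of this algorithm in Python (a sketch: the characteristic matrix is a plain list of rows, and the hash functions simulate the permutations):

    INF = float("inf")

    def minhash_signatures(cm, hash_fns):
        # cm: characteristic matrix as a list of rows, each a list of 0/1 per set
        # hash_fns: list of k functions mapping a row number to an integer
        n_sets = len(cm[0])
        sig = [[INF] * n_sets for _ in hash_fns]
        for r, row in enumerate(cm):
            hashes = [h(r) for h in hash_fns]
            for c, bit in enumerate(row):
                if bit:  # row r belongs to set c: fold h_i(r) into the running minima
                    for i, hr in enumerate(hashes):
                        sig[i][c] = min(sig[i][c], hr)
        return sig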
  • 25. MINHASH: EXAMPLE • Consider the same set of 4 documents and a universal set that consists of 7 words • Consider 2 hash functions: • h1(x) = (x + 1) mod 7 • h2(x) = (3x + 1) mod 7

    word         row  S1  S2  S3  S4  h1(row)  h2(row)
    a            0    1   1   1   1   1        1
    is           1    1   1   0   1   2        4
    java         2    0   1   0   0   3        0
    language     3    1   1   1   0   4        3
    programming  4    1   1   1   0   5        6
    python       5    1   0   0   1   6        2
    snake        6    0   0   0   1   0        5
  • 26. MINHASH: EXAMPLE • Initially the signature matrix is all ∞:

         S1  S2  S3  S4
    h1   ∞   ∞   ∞   ∞
    h2   ∞   ∞   ∞   ∞

• First, consider row 0: h1(0) and h2(0) are both 1. All four sets have a 1 in row 0, so we set the value 1 everywhere in the signature matrix:

         S1  S2  S3  S4
    h1   1   1   1   1
    h2   1   1   1   1

• Next, consider row 1: h1(1) = 2 and h2(1) = 4. In this row only S1, S2 and S4 have 1’s, so for them we take the minimum of the existing values and 2 (for h1) or 4 (for h2); nothing changes:

         S1  S2  S3  S4
    h1   1   1   1   1
    h2   1   1   1   1
  • 27. MINHASH: EXAMPLE • Continuing with the remaining rows, after row 5 the signature matrix is:

         S1  S2  S3  S4
    h1   1   1   1   1
    h2   1   0   1   1

• Finally, consider row 6: h1(6) = 0 and h2(6) = 5. In this row only S4 has a 1, so we take the minimum of its existing values and 0 (for h1) or 5 (for h2). The final signature matrix is:

         S1  S2  S3  S4
    h1   1   1   1   0
    h2   1   0   1   1

• The columns for S1 and S3 are identical, so the estimated similarity is 1 (the true value is J(S1,S3) = 3/5 = 0.6) • The columns for S2 and S4 have no rows in common, so the estimated similarity is 0 (the true value is J(S2,S4) = 2/7 ≈ 0.29)
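Assuming the minhash_signatures sketch from above, the whole worked example can be checked in a few lines:

    cm = [
        [1, 1, 1, 1],  # a
        [1, 1, 0, 1],  # is
        [0, 1, 0, 0],  # java
        [1, 1, 1, 0],  # language
        [1, 1, 1, 0],  # programming
        [1, 0, 0, 1],  # python
        [0, 0, 0, 1],  # snake
    ]
    h1 = lambda x: (x + 1) % 7
    h2 = lambda x: (3 * x + 1) % 7
    print(minhash_signatures(cm, [h1, h2]))  # [[1, 1, 1, 0], [1, 0, 1, 1]]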
  • 28. MINHASH: PYTHON • https://github.com/ekzhu/datasketch
datasketch gives you probabilistic data structures that can process very large amounts of data • https://github.com/anthonygarvan/MinHash
MinHash is an effective pure-Python implementation of MinHash
  • 29. MINHASH: READ MORE • http://infolab.stanford.edu/~ullman/mmds/book.pdf • http://www.cs.cmu.edu/~guyb/realworld/slidesS13/minhash.pdf • https://en.wikipedia.org/wiki/MinHash • http://www2007.org/papers/paper570.pdf • https://www.cs.utah.edu/~jeffp/teaching/cs5955/L4-Jaccard+Shingle.pdf
  • 31. B-BIT MINHASH • A modification of minwise hashing, called b-bit minwise hashing, was proposed by Ping Li and Arnd Christian König in 2010. • The method provides a simple solution by storing only the lowest b bits of each hashed value, reducing the dimensionality of the (expanded) hashed data matrix to just 2^b · k. • b-bit minwise hashing is simple and requires only minimal modification to the original minwise hashing algorithm.
  • 32. B-BIT MINHASH • Intuitively, using fewer bits per sample increases the estimation variance, compared to the original MinHash, at the same "sample size" k. Thus, we have to increase k to maintain the same accuracy. • Interestingly, the theoretical results demonstrate that, when the resemblance is not too small (e.g. R ≥ 0.5, a common threshold), we do not have to increase k by much. • This means that, compared to the earlier approach, b-bit minwise hashing improves estimation accuracy and significantly reduces the storage requirement at the same time. • For example, for b = 1 and R = 0.5, the estimation variance increases at most by a factor of 3. • So, in order not to lose accuracy, we have to increase the sample size k by a factor of 3. • If we originally stored each hashed value using 64 bits, the storage improvement from using b = 1 is 64/3 ≈ 21.3
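To make the storage/accuracy trade-off concrete, here is an illustrative sketch. It is not the estimator from the paper: it assumes the truncated b bits of two unequal minhash values collide with probability about 2^-b and corrects the observed match rate accordingly:

    def bbit_resemblance(mins_a, mins_b, b=1):
        # keep only the lowest b bits of each minhash value
        mask = (1 << b) - 1
        matches = sum((x & mask) == (y & mask) for x, y in zip(mins_a, mins_b))
        p = matches / len(mins_a)
        c = 2.0 ** -b  # assumed collision rate of the truncated bits for unequal minima
        return max(0.0, (p - c) / (1 - c))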
  • 33. B-BIT MINHASH: READ MORE • https://www.microsoft.com/en-us/research/publication/b-bit-minwise-hashing/ • https://www.microsoft.com/en-us/research/publication/theory-and-applications-of-b-bit-minwise-hashing/
  • 35. SIMHASH • The SimHash (sign normal random projection) algorithm was proposed by Moses S. Charikar in 2002. • SimHash originates from the concept of sign random projections (SRP): • In short, for a given vector x, SRP uses a random vector w with each component generated from an i.i.d. normal distribution, i.e. wi ~ N(0,1), and stores only the sign of the projected data. • Currently, SimHash is the only known LSH for cosine similarity:

    sim(A,B) = (A · B) / (‖A‖ ‖B‖) = Σᵢ aᵢbᵢ / ( √(Σᵢ aᵢ²) · √(Σᵢ bᵢ²) )
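A minimal SRP sketch in Python (assuming dense input vectors; the fixed seed makes two calls share the same random hyperplanes, which is required for comparing signatures):

    import random

    def srp_signature(x, n_bits=64, seed=7):
        # one random Gaussian vector per output bit; keep only the sign of each projection
        rng = random.Random(seed)
        sig = 0
        for _ in range(n_bits):
            w = [rng.gauss(0, 1) for _ in x]
            proj = sum(wi * xi for wi, xi in zip(w, x))
            sig = (sig << 1) | int(proj >= 0)
        return sig

For two vectors, the expected fraction of agreeing signature bits is 1 − θ/π, where θ is the angle between them, which is what makes SRP an LSH for cosine similarity.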
  • 36. SIMHASH • In fact, it is a dimensionality-reduction technique that maps high-dimensional vectors to ƒ-bit fingerprints, where ƒ is small (e.g. 32 or 64). • The SimHash algorithm considers a fingerprint as a "hash" of a document's properties and assumes that similar documents have similar hash values. • This assumption requires the hash functions to be LSH and, as we already know, that is not as trivial as it might seem at first. • If we take 2 documents that differ in just a single byte and apply popular hash functions such as SHA-1 or MD5, we will definitely get two completely different hash values that are not close. This is why this approach requires a special rule for defining hash functions that have the required property. • An attractive feature of the SimHash hash functions is that the output is a single bit (the sign)
  • 37. SIMHASH: FINGERPRINTS • To generate an ƒ-bit fingerprint, maintain an ƒ-dimensional vector V (all components are 0 at the beginning). • Each feature is hashed into an ƒ-bit hash value (using a hash function) and can be seen as a binary vector of ƒ bits • These ƒ bits (unique to the feature) increment/decrement the ƒ components of the vector by the weight of that feature as follows: • if the i-th bit of the hash value is 1 => the i-th component of V is incremented by the weight of that feature • if the i-th bit of the hash value is 0 => the i-th component of V is decremented by the weight of that feature • When all features have been processed, the components of V are positive or negative. • The signs of the components determine the corresponding bits of the final fingerprint.
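A compact sketch of this procedure (md5 is an arbitrary stand-in for the per-feature hash; weights are supplied by the caller):

    import hashlib

    def simhash(features, f=64):
        # features: iterable of (token, weight) pairs
        v = [0.0] * f
        for token, weight in features:
            # f-bit hash of the feature (truncated md5 as a stand-in for any good hash)
            h = int.from_bytes(hashlib.md5(token.encode()).digest()[: f // 8], "big")
            for i in range(f):
                # bit i set -> add the weight to component i, otherwise subtract it
                if (h >> (f - 1 - i)) & 1:
                    v[i] += weight
                else:
                    v[i] -= weight
        # the signs of the components give the fingerprint bits
        return sum(1 << (f - 1 - i) for i in range(f) if v[i] > 0)

    print(hex(simhash([("ferret", 1.0), ("go", 0.9)])))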
  • 38. SIMHASH: FINGERPRINTS EXAMPLE • Consider a document after POS tagging: {(“ferret”, NOUN), (“go”, VERB)} • We decide to weight features by their POS tags: {“NOUN”: 1.0, “VERB”: 0.9} • h(x) is some hash function, and we will calculate a 16-bit (ƒ=16) fingerprint F • Maintain V = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0) for the document • h(“ferret”) = 0101010101010111 • h(“ferret”)[1] == 0 => V[1] = V[1] - 1.0 = -1.0 • h(“ferret”)[2] == 1 => V[2] = V[2] + 1.0 = 1.0 • … • V = (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, 1.0, 1.0) • h(“go”) = 0111011100100111 • h(“go”)[1] == 0 => V[1] = V[1] - 0.9 = -1.9 • h(“go”)[2] == 1 => V[2] = V[2] + 0.9 = 1.9 • … • V = (-1.9, 1.9, -0.1, 1.9, -1.9, 1.9, -0.1, 1.9, -1.9, 0.1, -0.1, 0.1, -1.9, 1.9, 1.9, 1.9) • sign(V) = (-, +, -, +, -, +, -, +, -, +, -, +, -, +, +, +) • fingerprint F = 0101010101010111
  • 39. SIMHASH: FINGERPRINTS • After converting documents into simhash fingerprints, we face the following design problem: • Given an ƒ-bit fingerprint of a document, how do we quickly discover other fingerprints that differ in at most k bit-positions?
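The primitive behind this question is the Hamming distance, which for integer fingerprints is just a popcount of the XOR; for example:

    def hamming(f1, f2):
        # number of differing bit-positions between two integer fingerprints
        return bin(f1 ^ f2).count("1")

    print(hamming(0b0110, 0b0011))  # 2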
  • 40. SIMHASH: DATA STRUCTURE • The SimHash data structure consists of t tables: T1, T2, …, Tt. With each table Ti are associated 2 quantities: • an integer pi (number of top bit-positions used in comparisons) • a permutation πi over the ƒ bit-positions • Table Ti is constructed by applying permutation πi to each existing fingerprint F; the resulting set of permuted ƒ-bit fingerprints is sorted. • To fit such a structure in main memory, each table can be compressed (which roughly halves its size). • Example: DF = {0110, 0010, 0010, 1001, 1110}, πi = {1→2, 2→4, 3→3, 4→1}
πi(0110) = 0011, πi(0010) = 0010, πi(1001) = 1100, πi(1110) = 0111
Ti (sorted permuted fingerprints) = [0010, 0010, 0011, 0111, 1100]
  • 41. SIMHASH: ALGORITHM • The algorithm follows the general approach we saw for LSH • Given a fingerprint F and an integer k, we probe the tables of the data structure in parallel: • For each table Ti we identify all permuted fingerprints in Ti whose top pi bit-positions match the top pi bit-positions of πi(F). • Then, for each such permuted fingerprint, we check whether it differs from πi(F) in at most k bit-positions (in effect, computing the Hamming distance between the fingerprints). • Identifying the first fingerprint in table Ti whose top pi bit-positions match the top pi bit-positions of πi(F) (Step 1) can be done in O(pi) steps by binary search. • If we assume that each fingerprint is a truly random bit sequence, interpolation search shrinks the expected run time to O(log pi) steps. • A sketch of one such table and its probe follows below.
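Here is a sketch of one table and its probe, under simplifying assumptions (small integer fingerprints, ƒ = 16 by default, and the permutation given as a 1-indexed map from source to target bit-position, as in the slide's example); the real structure runs t such tables in parallel:

    from bisect import bisect_left

    def permute(fp, perm, f=16):
        # move bit j of fp (1-indexed from the left) to position perm[j]
        out = 0
        for j, pos in perm.items():
            bit = (fp >> (f - j)) & 1
            out |= bit << (f - pos)
        return out

    def build_table(fingerprints, perm, f=16):
        return sorted(permute(fp, perm, f) for fp in fingerprints)

    def query(table, fp, perm, p, k, f=16):
        # find stored fingerprints whose top p bits match perm(fp),
        # then keep those within Hamming distance k
        q = permute(fp, perm, f)
        lo = (q >> (f - p)) << (f - p)   # smallest value with the same top p bits
        hi = lo + (1 << (f - p))         # one past the largest such value
        i = bisect_left(table, lo)
        out = []
        while i < len(table) and table[i] < hi:
            if bin(table[i] ^ q).count("1") <= k:
                out.append(table[i])
            i += 1
        return out

With the slide's example, build_table([0b0110, 0b0010, 0b0010, 0b1001, 0b1110], {1: 2, 2: 4, 3: 3, 4: 1}, f=4) returns the sorted permuted fingerprints [0b0010, 0b0010, 0b0011, 0b0111, 0b1100].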
  • 42. SIMHASH: PARAMETERS • For a repository of 8B web pages, ƒ = 64 (64-bit simhash fingerprints) and k = 3 are reasonable (Gurmeet Singh Manku et al., 2007). • For given ƒ and k, to find a reasonable combination of t (number of tables) and pi (number of top bit-positions used in comparisons), note that they are not independent of each other: • Increasing the number of tables t increases pi and hence reduces the query time (large values of pi let us avoid checking too many fingerprints in Step 2) • Decreasing the number of tables t reduces the storage requirements but also reduces pi and hence increases the query time (a small set of permutations avoids space blowup)
  • 43. SIMHASH: PARAMETERS • Consider ƒ = 64 (64-bit fingerprints) and k = 3, so documents whose fingerprints differ in at most 3 bit-positions are considered similar (duplicates). • Assume we have 8B = 2^34 existing fingerprints and decide to use t = 10 tables: • For instance, t = 10 = C(5,2), so we can split the bits into 5 blocks and choose 2 of them in 10 ways. • Split the 64 bits into 5 blocks of 13, 13, 13, 13 and 12 bits respectively. • For each such choice, the permutation π makes the bits of the chosen blocks the leading bits. • pi is the total number of bits in the chosen blocks, thus pi = 25 or 26. • On average, a probe retrieves at most 2^(34−25) = 512 (permuted) fingerprints: • if we seek all (permuted) fingerprints which match the top pi bits of a given (permuted) fingerprint, we expect 2^(d−pi) matches • The total number of probes needed to cover all 64-bit fingerprints within k = 3 bit flips directly is C(64,3) = 41664 probes
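The arithmetic above is easy to check (using math.comb for the binomial coefficients):

    import math

    f, k, d = 64, 3, 34
    t = math.comb(5, 2)        # choose 2 of 5 blocks -> 10 tables
    p = 13 + 12                # the two smallest blocks give pi = 25
    print(t, 2 ** (d - p), math.comb(f, k))  # 10 512 41664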
  • 44. SIMHASH: PYTHON • https://github.com/leonsunliang/simhash
 A Python implementation of Simhash. • http://bibliographie-trac.ub.rub.de/browser/simhash.py
 Implementation of Charikar's simhashes in Python
  • 45. SIMHASH: READ MORE • http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf • http://jmlr.org/proceedings/papers/v33/shrivastava14.pdf • http://www.wwwconference.org/www2007/papers/paper215.pdf • http://ferd.ca/simhashing-hopefully-made-simple.html
  • 46. ▸ @gakhov ▸ linkedin.com/in/gakhov ▸ www.datacrucis.com THANK YOU