2013 open analytics_countingv3

AddThis' OA DC Summit Presentation

  • 2.5M people
  • Given that a good hash function produces a uniformly random sequence of 0s and 1s, we can make observations about the probability of certain bit patterns appearing in the hashed value.

2013 open analytics_countingv3: Presentation Transcript

  • Cardinality Estimation for Very Large Data Sets. Matt Abrams, VP Data and Operations. March 25, 2013
  • THANKS FOR COMING! I build large scale distributed systems and work on algorithms that make sense of the data stored in them. Contributor to the open source project Stream-Lib, a Java library for summarizing data streams (https://github.com/clearspring/stream-lib). Ask me questions: @abramsm
  • HOW CAN WE COUNT THE NUMBER OF DISTINCT ELEMENTS IN LARGE DATA SETS?
  • HOW CAN WE COUNT THE NUMBER OF DISTINCT ELEMENTS IN VERY LARGE DATA SETS?
  • GOALS FOR COUNTING SOLUTION • Support high throughput data streams (up to many 100s of thousands per second) • Estimate cardinality with known error thresholds in sets up to around 1 billion (or even 1 trillion when needed) • Support set operations (unions and intersections) • Support data streams with a large number of dimensions
  • 1 UID = 128 bits: 513a71b843e54b73
  • In one month AddThis logs 5B+ UIDs 2,500,000 * 2000 = 5,000,000,000
  • That’s 596GB of just UIDs
  • NAÏVE SOLUTIONS • Select count(distinct UID) from table where dimension = foo • HashSet<K> • Run a batch job for each new query request
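
As a point of reference, here is a minimal sketch of the HashSet approach from the slide above (the class name and sample data are illustrative). It is exact, but it keeps every distinct UID in memory, which is what stops scaling at billions of elements:

    import java.util.HashSet;
    import java.util.Set;

    public class NaiveDistinctCount {
        public static void main(String[] args) {
            // Exact distinct count: every distinct UID must be held in memory.
            Set<String> seen = new HashSet<>();
            String[] stream = {"513a71b843e54b73", "a1b2c3d4e5f60718", "513a71b843e54b73"};
            for (String uid : stream) {
                seen.add(uid);
            }
            System.out.println(seen.size());  // prints 2; memory grows linearly with distinct UIDs
        }
    }
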
  • WE ARE NOT A BANK. This means an estimate rather than an exact value is acceptable. http://graphics8.nytimes.com/images/2008/01/30/timestopics/feddc.jpg
  • THREE INTUITIONS • It is possible to estimate the cardinality of a set by understanding the probability of a sequence of events occurring in a random variable (e.g. how many coins were flipped if I saw n heads in a row?) • Averaging the results of multiple observations can reduce the variance associated with random variables • Applying a good hash function effectively de-duplicates the input stream
  • INTUITION: What is the probability that a binary string starts with ’01’?
  • INTUITION: (1/2)^2 = 25%
  • INTUITION: (1/2)^3 = 12.5%
  • INTUITION (crude analysis): if a stream has 8 unique values, the hash of at least one of them should start with ‘001’
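
A minimal sketch of that crude analysis (illustrative only, not code from the talk): hash each element, remember the longest run of leading zero bits, and use 2^max as a rough guess of the cardinality. A seeded Random stands in for "a good hash function applied to 8 distinct values":

    import java.util.Random;

    public class CrudeEstimator {
        public static void main(String[] args) {
            // Stand-in for hashing 8 distinct values with a good (uniform) hash function.
            Random hashed = new Random(42);
            int maxLeadingZeros = 0;
            for (int i = 0; i < 8; i++) {
                long h = hashed.nextLong();
                maxLeadingZeros = Math.max(maxLeadingZeros, Long.numberOfLeadingZeros(h));
            }
            // If at least one hash starts with k zero bits, roughly 2^k distinct values were seen.
            System.out.println("Rough estimate: " + (1L << maxLeadingZeros));
        }
    }

A single observable like this is very noisy, which is exactly the point of the next slide.
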
  • INTUITION: Given the variability of a single random value, we cannot use a single variable for accurate cardinality estimation
  • MULTIPLE OBSERVATIONS HELP REDUCE VARIANCE: By averaging multiple random variables (observations) we can make the error rate as small as desired by controlling the size of m (the number of random variables): error = σ / √m
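
A small worked illustration of that formula: averaging m independent observations, each with standard deviation σ, shrinks the error of the mean to σ/√m. Here σ = 1 is just an illustrative value:

    public class StandardErrorDemo {
        public static void main(String[] args) {
            double sigma = 1.0;  // standard deviation of a single observation (illustrative)
            for (int m : new int[] {1, 16, 64, 1024}) {
                // error = sigma / sqrt(m): 1.0000, 0.2500, 0.1250, 0.0313
                System.out.printf("m = %4d -> error = %.4f%n", m, sigma / Math.sqrt(m));
            }
        }
    }
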
  • THE PROBLEM WITH MULTIPLE HASH FUNCTIONS • It is too costly from a computational perspective to apply m hash functions to each data point • It is not clear that it is possible to generate m good hash functions that are independent
  • STOCHASTIC AVERAGING • Emulating the effect of m experiments with a single hash function • Divide the input stream h(M) into m sub-streams: [1/m, 2/m, …, (m-1)/m, 1] • An average of the observable values for each sub-stream will yield a cardinality estimate whose accuracy improves in proportion to 1/√m as m increases
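
A minimal sketch of the sub-stream split with a single hash, using bits rather than the unit-interval division above (the bit layout is an assumption for illustration, not the exact scheme from the talk):

    public class StochasticAveraging {
        static final int B = 10;              // number of index bits
        static final int M = 1 << B;          // m = 1024 sub-streams

        public static void main(String[] args) {
            long hash = 0x513a71b843e54b73L;  // output of a single good 64-bit hash function
            // The first B bits choose which of the m sub-streams the value belongs to.
            int bucket = (int) (hash >>> (64 - B));
            // The remaining bits act as the per-sub-stream random observable.
            long observableBits = hash << B;
            System.out.println("bucket " + bucket + " of " + M
                    + ", observable bits = " + Long.toBinaryString(observableBits));
        }
    }
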
  • HASH FUNCTIONS (http://preshing.com/20110504/hash-collision-probabilities):
    32 Bit Hash    64 Bit Hash     160 Bit Hash     Odds of a Collision
    77,163         5.06 Billion    1.42 * 10^24     1 in 2
    30,084         1.97 Billion    5.55 * 10^23     1 in 10
    9,292          609 million     1.71 * 10^23     1 in 100
    2,932          192 million     5.41 * 10^22     1 in 1000
  • HYPERLOGLOG (2007): Counts up to 1 billion in 1.5KB of space. Philippe Flajolet (1948-2011)
  • HYPERLOGLOG (HLL) • Operates with a single pass over the input data set • Produces a typical error of 1.04 / √m • Error decreases as m increases; error is not a function of the number of elements in the set
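
Plugging numbers into that 1.04/√m bound makes the space/accuracy trade-off concrete (the size column assumes the 5-bit "short byte" registers described a few slides below):

    public class HllErrorTable {
        public static void main(String[] args) {
            for (int b = 4; b <= 14; b += 2) {
                int m = 1 << b;                      // number of registers
                double error = 1.04 / Math.sqrt(m);  // typical relative error
                double bytes = m * 5 / 8.0;          // 5-bit registers, tightly packed
                System.out.printf("m = %5d  error ~ %5.2f%%  size ~ %7.0f bytes%n",
                        m, error * 100, bytes);
            }
        }
    }
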
  • HLL SUBSTREAMS: HLL uses a single hash function and splits the result into m buckets (diagram: Input Values → Hash Function → Bucket 1, Bucket 2, …, Bucket m)
  • HLL ALGORITHM BASICS • Each substream maintains an observable: the largest value ρ(x), where ρ(x) is the position of the leftmost 1-bit in the binary string x • 32 bit hash function with 5 bit “short bytes” • Harmonic mean: increases the quality of estimates by reducing variance
  • WHAT ARE “SHORT BYTES”? • We know a priori that the value of a given substream of the multiset M is in the range 0..(L + 1 - log2 m) • Assuming L = 32, we only need 5 bits to store the value of the register • 85% less memory usage compared to a standard Java int
  • ADDING VALUES TO HLL: index = 1 + <x1 x2 … xb>2 (the first b bits read as a binary number), w = xb+1 xb+2 … • The first b bits of the new value define the index into the multiset M that may be updated when the new value is added • The remaining bits (from position b+1 on) are used to determine the number of leading zeros, ρ(w)
  • ADDING VALUES TO HLL: Observations {M[1], M[2], …, M[m]}. The multiset is updated using the equation M[j] := max(M[j], ρ(w)), where ρ(w) is the number of leading zeros + 1
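
A minimal sketch of those two slides put together, assuming a 64-bit hash (this is illustrative, not Stream-Lib's implementation; the all-zero-remainder edge case is ignored):

    public class HllRegisters {
        private final int b;            // number of index bits; m = 2^b registers
        private final int[] registers;  // the multiset M[1..m] from the slide (0-based here)

        HllRegisters(int b) {
            this.b = b;
            this.registers = new int[1 << b];
        }

        // Update with one hashed value: M[j] := max(M[j], rho(w))
        void offerHashedValue(long hash) {
            int j = (int) (hash >>> (64 - b));           // index from the first b bits
            long w = hash << b;                          // remaining bits
            int rho = Long.numberOfLeadingZeros(w) + 1;  // number of leading zeros + 1
            registers[j] = Math.max(registers[j], rho);
        }
    }
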
  • INTUITION ON EXTRACTING CARDINALITY FROM HLL • If we add n elements to a stream then each substream will contain roughly n/m elements • The MAX value in each substream should be about log2(n/m) (from the earlier intuition about random variables) • The harmonic mean (mZ) of the 2^MAX values is on the order of n/m • So m^2 Z is on the order of n. That’s the cardinality!
  • HLL CARDINALITY ESTIMATE: E := α_m * m^2 * (Σ_{j=1..m} 2^(-M[j]))^(-1) (a harmonic mean) • m^2 Z has a systematic multiplicative bias that needs to be corrected; this is done by multiplying by the constant α_m
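
A sketch of that raw estimator (small-range and long-range corrections omitted); the α_m value uses the asymptotic approximation 0.7213 / (1 + 1.079/m) from the HyperLogLog paper:

    public class HllEstimate {
        // Raw HLL estimate: E = alpha_m * m^2 * (sum_j 2^(-M[j]))^(-1)
        static double rawEstimate(int[] registers) {
            int m = registers.length;
            double alpha = 0.7213 / (1.0 + 1.079 / m);  // multiplicative bias correction
            double sum = 0.0;
            for (int register : registers) {
                sum += Math.pow(2.0, -register);        // harmonic-mean denominator
            }
            return alpha * m * m / sum;
        }

        public static void main(String[] args) {
            int[] registers = new int[1024];            // all registers still zero
            // Prints ~738: this is why real implementations switch to linear counting
            // (the small-range correction) when many registers are still empty.
            System.out.println(rawEstimate(registers));
        }
    }
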
  • A NOTE ON LONG RANGE CORRECTIONS • The paper says to apply a long range correction function when the estimate is greater than E > (1/30) * 2^32 • The correction function is E* := -2^32 * log(1 - E / 2^32) • DON’T DO THIS! It doesn’t work and increases error. A better approach is to use a bigger/better hash function
  • DEMO TIME! Let’s look at HLL in action. http://www.aggregateknowledge.com/science/blog/hll.html
  • HLL UNIONS • Merging two or more HLL data structures is a similar process to adding a new value to a single HLL • For each register in the HLL, take the max value of the HLLs you are merging, and the resulting register set can be used to estimate the cardinality of the combined sets (diagram: a Root HLL merging MON, TUE, WED, THU, and FRI HLLs)
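
A minimal sketch of that register-wise merge, assuming both HLLs were built with the same m and the same hash function:

    public class HllUnion {
        // Union of two HLLs with identical configuration: take the max of each register.
        static int[] merge(int[] a, int[] b) {
            if (a.length != b.length) {
                throw new IllegalArgumentException("HLLs must use the same number of registers");
            }
            int[] merged = new int[a.length];
            for (int j = 0; j < a.length; j++) {
                merged[j] = Math.max(a[j], b[j]);
            }
            return merged;  // estimating cardinality on this gives |A union B|
        }
    }
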
  • HLL INTERSECTION: C = A + B - A ∪ B (inclusion-exclusion, where C is the intersection of A and B; Venn diagram with regions A, C, B). You must understand the properties of your sets to know if you can trust the resulting intersection
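
A sketch of that inclusion-exclusion step; the three inputs would come from estimating A, B, and their merged union as on the previous slide (the example numbers are illustrative):

    public class HllIntersection {
        // C = A + B - (A union B), applied to the three HLL estimates
        static double intersection(double estimateA, double estimateB, double estimateUnion) {
            return estimateA + estimateB - estimateUnion;
        }

        public static void main(String[] args) {
            // The errors of the three estimates compound, and for small overlaps the
            // result can even go negative, hence the caveat on the slide above.
            System.out.println(intersection(1_000_000, 800_000, 1_600_000));  // ~200000
        }
    }
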
  • HYPERLOGLOG++ • Google researchers have recently released an update to the HLL algorithm • Uses clever encoding/decoding techniques to create a single data structure that is very accurate for small cardinality sets and can estimate sets that have over a trillion elements in them • Empirical bias correction: observations show that most of the error in HLL comes from the bias function; using empirically derived values significantly reduces error • Already available in Stream-Lib!
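
For completeness, a usage sketch against Stream-Lib's HyperLogLogPlus class. The constructor parameters and the offer/cardinality method names reflect my recollection of the library around the time of this talk and may differ by version, so treat this as an assumption and check the stream-lib docs:

    import com.clearspring.analytics.stream.cardinality.HyperLogLogPlus;

    public class StreamLibDemo {
        public static void main(String[] args) {
            // p = 14 index bits (~16k registers); sp = 25 enables the sparse
            // representation that keeps small sets very accurate.
            HyperLogLogPlus hll = new HyperLogLogPlus(14, 25);
            for (int i = 0; i < 1_000_000; i++) {
                hll.offer("uid-" + i);
            }
            System.out.println(hll.cardinality());  // ~1,000,000 with roughly 1% error
        }
    }
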
  • OTHER PROBABILISTIC DATA STRUCTURES • Bloom Filters – set membership detection • CountMinSketch – estimate number of occurrences for a given element • TopK Estimators – estimate the frequency and top elements from a stream
  • REFERENCES • Stream-Lib - https://github.com/clearspring/stream-lib • HyperLogLog - http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.9475 • HyperLogLog In Practice - http://research.google.com/pubs/pub40671.html • Aggregate Knowledge HLL Blog Posts - http://blog.aggregateknowledge.com/tag/hyperloglog/
  • THANKS! AddThis is hiring!