Nearest Neighbor Customer Insight

Nearest neighbor models are conceptually just about the simplest kind of model possible. The problem is that they generally aren’t feasible to apply. Or at least, they weren’t feasible until the advent of Big Data techniques. These slides will describe some of the techniques used in the knn project to reduce thousand-year computations to a few hours. The knn project uses the Mahout math library and Hadoop to speed up these enormous computations to the point that they can be usefully applied to real problems. These same techniques can also be used to do real-time model scoring.

  • The basic idea here is that I have colored the slides to be presented by you in blue. You should substitute and reword those slides as you like. In a few places, I imagined that we would have fast back-and-forth, as in the introduction or the final slide, where we can each say we are hiring in turn. The overall thrust of the presentation is for you to make these points: Amex does lots of modeling; it is expensive; having a way to quickly test models and new variables would be awesome; so we worked on a new project with MapR. My part will say the following: knn basic pictorial motivation (could move to you if you like); describe the knn quality metric of overlap; show how a bad metric breaks knn (optional); quick description of LSH and projection search; picture of why k-means search is cool; motivate k-means speed as a tool for k-means search; describe the single-pass k-means algorithm; describe the basic data structures; show parallel speedup. Our summary should state that we have achieved super-fast k-means clustering and an initial version of super-fast knn search with good overlap.
  • These are just suggestions. The “other modeling approaches” is particularly suspect.
  • The sub-bullets are just for reference and should be deleted later
  • The idea here is to guess what color a new dot should be by looking at the points within the circle. The first should obviously be purple. The second cyan. The third is uncertain, but probably isn’t green or cyan and probably is a bit more likely to be red than purple.
  • This slide is red to indicate missing data

    1. Nearest Neighbor Analysis of Customer Behavior
    2. whoami – Chao Yuan • SVP, Risk and Information Management, American Express
    3. whoami – Ted Dunning • Chief Application Architect, MapR Technologies • Committer, member, Apache Software Foundation – particularly Mahout, Zookeeper and Drill • Contact me at tdunning@maprtech.com, tdunning@apache.org, ted.dunning@gmail.com, @ted_dunning • Get slides and more info at http://www.mapr.com/company/events/speaking/oanyc-9-27-12
    4. Agenda – The Business Side • Digital transformation • Modeling opportunity • Potential applications of agile modeling • Required scale and speed of KNN
    5. Agenda – The Math Side • Nearest neighbor models – colored dots; need a good distance metric; projection, LSH and k-means search • K-means algorithms – O(k d log n) per point for Lloyd’s algorithm … not good for k = 2000, n = 10^8 – Surrogate methods: fast, sloppy single-pass clustering with κ = k log n; fast, sloppy search for the nearest cluster, O(d log κ) = O(d (log k + log log n)) per point; fast, in-memory, high-quality clustering of κ weighted centroids; result consists of k high-quality centroids for the original data • Results
    6. Context • Digital transformation. • Data helps us better serve our customers. • Privacy is paramount.
    7. Our Business • We are best-in-class and strive to stay that way. • We have 100 million cards in circulation. • Quick and accurate decision-making is key. – Marketing offers – Fraud prevention
    8. Opportunity • Demand for modeling is increasing rapidly • So we are testing something simpler and more agile • Like k-nearest neighbor
    9. What’s that? • Find the k nearest training examples – lookalike customers • This is easy … but hard – easy because it is so conceptually simple and you don’t have knobs to turn or models to build – hard because of the stunning amount of math – also hard because we need the top 50,000 results • Initial rapid prototype was massively too slow – 3K queries x 200K examples takes hours – needed 20M x 25M in the same time
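
    To make the cost concrete, here is a minimal brute-force k-nearest-neighbor scan in plain Java. This is illustrative only — the class and method names are made up, and this is not the knn project’s code. Every query touches every reference example, so the work grows as queries × references, which is exactly why 20M x 25M is out of reach without an index.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.PriorityQueue;

        // Brute-force k-nearest-neighbor scan: O(#references) distance
        // computations per query, which is what makes the naive approach
        // infeasible at 20M queries x 25M references.
        public class BruteForceKnn {
          // Squared Euclidean distance between two dense vectors.
          static double distance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
              double d = a[i] - b[i];
              sum += d * d;
            }
            return sum;
          }

          // Returns the indexes of the k references closest to the query.
          static List<Integer> nearest(double[] query, double[][] refs, int k) {
            // Max-heap ordered by distance so the worst kept candidate is evicted first.
            PriorityQueue<double[]> heap =
                new PriorityQueue<>((x, y) -> Double.compare(y[0], x[0]));
            for (int i = 0; i < refs.length; i++) {
              double d = distance(query, refs[i]);
              if (heap.size() < k) {
                heap.add(new double[] {d, i});
              } else if (d < heap.peek()[0]) {
                heap.poll();
                heap.add(new double[] {d, i});
              }
            }
            List<Integer> result = new ArrayList<>();
            for (double[] entry : heap) {
              result.add((int) entry[1]);
            }
            return result;
          }
        }
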
    10. Comparison to Other Modeling Approaches • Logistic regression • Tree-based methods
    11. K-Nearest Neighbor Example
    12. Required Scale, Speed and Accuracy • Want 20 million queries against 25 million references in 10,000 s • Should be able to search > 100 million references • Should be linearly and horizontally scalable • Must have > 50% overlap against reference search • Evaluation by sub-sampling is viable, but tricky
    13. How Hard is That? • 20M x 25M x 100 flop = 50 Pflop • 1 CPU = 5 Gflops • We need 10M CPU seconds => 10,000 CPUs • Real-world efficiency losses may increase that by 10x • Not good!
    14. How Can We Search Faster? • First rule: don’t do it – If we can eliminate most candidates, we can do less work – Projection search and k-means search • Second rule: don’t do it – We can convert big floating-point math to clever bit-wise integer math – Locality-sensitive hashing • Third rule: reduce dimensionality – Projection search – Random projection for very high dimension
    15. Projection Search – java.lang.TreeSet!
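
    The “java.lang.TreeSet!” hint is about keeping reference vectors sorted by their projection onto a random direction and then examining only entries whose projection lands near the query’s. A minimal sketch of that idea follows; it uses a TreeMap keyed by projection value so that reference indexes can be carried along, and the names are illustrative rather than the project’s actual ProjectionSearch API.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;
        import java.util.TreeMap;

        // Projection search: index reference vectors by their dot product with a
        // random direction, then answer queries by scanning only the entries whose
        // projection falls close to the query's projection.
        public class ProjectionIndex {
          private final double[] direction;                           // random direction
          private final TreeMap<Double, List<Integer>> index = new TreeMap<>();

          ProjectionIndex(double[][] refs, Random rand) {
            direction = new double[refs[0].length];
            for (int i = 0; i < direction.length; i++) {
              direction[i] = rand.nextGaussian();
            }
            for (int i = 0; i < refs.length; i++) {
              index.computeIfAbsent(project(refs[i]), p -> new ArrayList<>()).add(i);
            }
          }

          private double project(double[] v) {
            double sum = 0;
            for (int i = 0; i < v.length; i++) {
              sum += v[i] * direction[i];
            }
            return sum;
          }

          // Candidates whose projection lies within +/- window of the query's projection;
          // the true nearest neighbors are found by brute force over this short list.
          List<Integer> candidates(double[] query, double window) {
            double p = project(query);
            List<Integer> result = new ArrayList<>();
            for (List<Integer> bucket : index.subMap(p - window, p + window).values()) {
              result.addAll(bucket);
            }
            return result;
          }
        }

    Using several independent projections and combining their candidate lists trades a little extra work for better recall, which is what the next slide’s “How Many Projections?” question is about.
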
    16. How Many Projections?
    17. LSH Search • Each random projection produces an independent sign bit • If two vectors have the same projected sign bits, they probably point in the same direction (i.e. cos θ ≈ 1) • Distance in L2 is closely related to cosine: ‖x − y‖² = ‖x‖² − 2(x · y) + ‖y‖² = ‖x‖² − 2‖x‖‖y‖ cos θ + ‖y‖² • We can replace (some) vector dot products with long integer XOR
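
    A small illustration of the sign-bit trick on this slide, written as plain Java rather than the project’s Mahout-based code: 64 random projections give a 64-bit signature per vector, and one XOR plus a popcount reports how many sign bits agree, standing in for many floating-point dot products.

        import java.util.Random;

        // Sign-bit LSH: 64 random projections give a 64-bit signature per vector.
        // Vectors that point in similar directions (cos θ near 1) agree on most
        // sign bits, so a cheap XOR and popcount replaces many dot products.
        public class SignBitLsh {
          private final double[][] projections;   // 64 random directions

          SignBitLsh(int dim, Random rand) {
            projections = new double[64][dim];
            for (double[] p : projections) {
              for (int i = 0; i < dim; i++) {
                p[i] = rand.nextGaussian();
              }
            }
          }

          // One bit per projection: set if the dot product is positive.
          long signature(double[] v) {
            long bits = 0;
            for (int b = 0; b < 64; b++) {
              double dot = 0;
              for (int i = 0; i < v.length; i++) {
                dot += projections[b][i] * v[i];
              }
              if (dot > 0) {
                bits |= 1L << b;
              }
            }
            return bits;
          }

          // Number of matching sign bits: 64 means the vectors almost certainly point
          // the same way, around 32 means they are roughly uncorrelated.
          static int matchingBits(long a, long b) {
            return 64 - Long.bitCount(a ^ b);
          }
        }
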
    18. LSH Bit-match Versus Cosine [plot: cosine between two vectors (y-axis, −1 to 1) versus number of matching LSH sign bits out of 64 (x-axis, 0–64)]
    19. Results
    20. K-means Search • First do clustering with lots (thousands) of clusters • Then search the nearest clusters to find the nearest points • We win if we find > 50% overlap with the “true” answer • We lose if we can’t cluster super-fast – more on this later
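
    A sketch of how k-means search narrows the work, assuming the clustering (centroids plus per-cluster member lists) has already been computed; the class and method names are illustrative, not the project’s KmeansSearch API. Only the members of the few clusters whose centroids are closest to the query get brute-forced, and probing more clusters raises the overlap with the true answer at the cost of more distance computations.

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;

        // k-means search: points are bucketed by nearest centroid ahead of time;
        // a query only brute-forces the members of the few closest clusters
        // instead of the whole reference set.
        public class ClusterProbeSearch {
          private final double[][] centroids;         // one row per cluster
          private final List<List<Integer>> members;  // reference indexes per cluster

          ClusterProbeSearch(double[][] centroids, List<List<Integer>> members) {
            this.centroids = centroids;
            this.members = members;
          }

          // Candidate reference indexes from the `probes` clusters nearest the query.
          List<Integer> candidates(double[] query, int probes) {
            Integer[] order = new Integer[centroids.length];
            for (int i = 0; i < order.length; i++) {
              order[i] = i;
            }
            Arrays.sort(order, Comparator.comparingDouble(i -> distance(query, centroids[i])));
            List<Integer> result = new ArrayList<>();
            for (int i = 0; i < probes && i < order.length; i++) {
              result.addAll(members.get(order[i]));
            }
            return result;
          }

          // Squared Euclidean distance between two dense vectors.
          static double distance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
              double d = a[i] - b[i];
              sum += d * d;
            }
            return sum;
          }
        }
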
    21. Lots of Clusters Are Fine
    22. Lots of Clusters Are Fine
    23. Some Details • Clumpy data works better – Real data is clumpy – Speedups of 100-200x seem practical with 50% overlap – Projection search and LSH can be used to accelerate that (some) • More experiments needed • Definitely need fast search
    24. Lloyd’s Algorithm • Part of CS folklore • Developed in the late 50’s for signal quantization, published in the 80’s • The algorithm: initialize k cluster centroids somehow; for each of many iterations: for each data point: assign point to nearest cluster; recompute cluster centroids from points assigned to clusters • Highly variable quality, several restarts recommended
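
    The slide’s pseudocode translates almost line for line into Java. The sketch below exists only to make the cost visible: the inner loop over k centroids for every point is the expensive term that the surrogate method later avoids. Initializing “somehow” is done here by picking random data points, one of several reasonable choices.

        import java.util.Random;

        // Lloyd's algorithm as described on the slide: assign every point to its
        // nearest centroid, recompute centroids, repeat.
        public class LloydKmeans {
          static double[][] cluster(double[][] points, int k, int iterations, Random rand) {
            int dim = points[0].length;
            // "initialize k cluster centroids somehow" -- here, random data points.
            double[][] centroids = new double[k][];
            for (int i = 0; i < k; i++) {
              centroids[i] = points[rand.nextInt(points.length)].clone();
            }
            for (int iter = 0; iter < iterations; iter++) {
              double[][] sums = new double[k][dim];
              int[] counts = new int[k];
              for (double[] p : points) {
                // Assign the point to its nearest centroid: O(k d) work per point.
                int nearest = 0;
                double best = Double.MAX_VALUE;
                for (int c = 0; c < k; c++) {
                  double d = distance(p, centroids[c]);
                  if (d < best) {
                    best = d;
                    nearest = c;
                  }
                }
                counts[nearest]++;
                for (int j = 0; j < dim; j++) {
                  sums[nearest][j] += p[j];
                }
              }
              // Recompute each centroid from the points assigned to it.
              for (int c = 0; c < k; c++) {
                if (counts[c] > 0) {
                  for (int j = 0; j < dim; j++) {
                    centroids[c][j] = sums[c][j] / counts[c];
                  }
                }
              }
            }
            return centroids;
          }

          static double distance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
              double d = a[i] - b[i];
              sum += d * d;
            }
            return sum;
          }
        }
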
    25. Ball k-means • Provably better for highly clusterable data • Tries to find initial centroids in the “core” of real clusters • Avoids outliers in centroid computation • The algorithm: initialize centroids randomly with a distance-maximizing tendency; for each of a very few iterations: for each data point: assign point to nearest cluster; recompute centroids using only points much closer than the closest cluster
    26. Surrogate Method • Start with sloppy clustering into κ = k log n clusters • Use these clusters as a weighted surrogate for the data • Cluster the surrogate data using ball k-means • Results are provably high quality for highly clusterable data • Sloppy clustering can be done on-line • Surrogate can be kept in memory • Ball k-means pass can be done at any time
    27. Algorithm Costs • O(k d log n) per point for Lloyd’s algorithm … not so good for k = 2000, n = 10^8 • Surrogate methods – fast, sloppy single-pass clustering with κ = k log n – fast, sloppy search for the nearest cluster, O(d log κ) = O(d (log k + log log n)) per point – fast, in-memory, high-quality clustering of κ weighted centroids – result consists of k high-quality centroids • This is a big deal: – k d log n = 2000 x 10 x 26 ≈ 500,000 – d (log k + log log n) = 10 x (11 + 5) = 160 – roughly 3000 times faster makes the grade as a bona fide big deal
    28. The Internals • Mechanism for extending Mahout Vectors – DelegatingVector, WeightedVector, Centroid • Searcher interface – ProjectionSearch, KmeansSearch, LshSearch, Brute • Super-fast clustering – Kmeans, StreamingKmeans
    29. How It Works • For each point – find the approximately nearest centroid (distance = d) – if d > threshold, create a new centroid – else possibly create a new cluster anyway – else add the point to the nearest centroid • If the number of centroids exceeds K ~ C log N – recursively cluster the centroids with a higher threshold • Result is a large set of centroids – these provide an approximation of the original distribution – we can cluster the centroids to get a close approximation of clustering the original data – or we can just use the result directly
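
    An illustrative rendering of this per-point logic in plain Java — not the actual StreamingKmeans code. The probabilistic “possibly create a new cluster” rule, the threshold growth factor, and the linear scan for the nearest centroid (the real code uses an approximate searcher) are all simplifications made to keep the sketch short.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        // One-pass clustering sketch: each point either starts a new centroid
        // (always when it is farther than the current threshold, occasionally even
        // when it is not) or is folded into its nearest centroid. When too many
        // centroids accumulate, the threshold is raised and the centroids
        // themselves are re-clustered by the same rule.
        public class StreamingClusterSketch {
          private final int maxCentroids;     // roughly K ~ C log N in the slide's terms
          private double threshold;
          private final Random rand = new Random();
          private List<double[]> centroids = new ArrayList<>();
          private List<Double> weights = new ArrayList<>();

          StreamingClusterSketch(int maxCentroids, double initialThreshold) {
            this.maxCentroids = maxCentroids;
            this.threshold = initialThreshold;
          }

          void add(double[] point, double weight) {
            // Find the nearest centroid; the real code uses an approximate searcher.
            int nearest = -1;
            double best = Double.MAX_VALUE;
            for (int i = 0; i < centroids.size(); i++) {
              double d = distance(point, centroids.get(i));
              if (d < best) {
                best = d;
                nearest = i;
              }
            }
            if (nearest < 0 || best > threshold || rand.nextDouble() < best / threshold) {
              // New centroid: the point is far away, or we keep it anyway with
              // probability that grows with its distance.
              centroids.add(point.clone());
              weights.add(weight);
            } else {
              // Merge into the nearest centroid as a weighted average.
              double[] c = centroids.get(nearest);
              double w = weights.get(nearest);
              for (int j = 0; j < c.length; j++) {
                c[j] = (c[j] * w + point[j] * weight) / (w + weight);
              }
              weights.set(nearest, w + weight);
            }
            if (centroids.size() > maxCentroids) {
              collapse();
            }
          }

          // Raise the threshold and recursively re-cluster the centroids themselves.
          private void collapse() {
            threshold *= 1.5;   // growth factor is an illustrative choice
            List<double[]> oldCentroids = centroids;
            List<Double> oldWeights = weights;
            centroids = new ArrayList<>();
            weights = new ArrayList<>();
            for (int i = 0; i < oldCentroids.size(); i++) {
              add(oldCentroids.get(i), oldWeights.get(i));
            }
          }

          static double distance(double[] a, double[] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
              double d = a[i] - b[i];
              sum += d * d;
            }
            return sum;
          }
        }

    The list of weighted centroids this produces is the surrogate that slide 26 hands to ball k-means, or it can be used directly as the slide notes.
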
    30. Parallel Speedup? [chart: time per point in μs versus number of threads (1–20), comparing the non-threaded version and the threaded version against perfect scaling]
    31. What About Map-Reduce? • Map-reduce implementation is nearly trivial – Compute a surrogate on each split – Total surrogate is the union of all partial surrogates – Do in-memory clustering on the total surrogate • Threaded version shows linear speedup already – Map-reduce speedup is likely, but not entirely guaranteed
    32. How Well Does it Work? • Theoretical guarantees for well-clusterable data – Shindler, Wong and Meyerson, NIPS, 2011 • Evaluation on synthetic data – Rough clustering produces correct surrogates – Possible issue in ball k-means initialization (still produces good clustering on test data)
    33. Summary • Nearest neighbor algorithms can be blazing fast • But you need blazing fast clustering – Which we now have
    34. Contact Us! • We’re hiring at MapR in California • We’re hiring at Amex in Phoenix and New York • Come get the slides at http://www.mapr.com/company/events/speaking/oanyc-9-27-12 • Contact Ted at tdunning@maprtech.com or @ted_dunning • Contact Chao at chao.yuan@aexp.com
