
Jorge Silva, Sr. Research Statistician Developer, SAS at MLconf ATL - 9/18/15

Estimating the Number of Clusters in Big Data with the Aligned Box Criterion: Finding the number, k, of clusters in a dataset is a fundamental problem in unsupervised learning. It is also an important business problem, e.g. in market segmentation. Existing approaches include the silhouette measure, the gap statistic and Dirichlet process clustering. For thirty years SAS procedures have included the option of using the cubic clustering criterion (CCC) to estimate k. While CCC remains competitive, we propose a significant and original improvement, referred to herein as the aligned box criterion (ABC). Like CCC, ABC is based on a hypothesis-testing framework, but instead of a heuristic measure we use data-adaptive reference distributions to generate more realistic null hypotheses in a scalable and easily parallelizable manner. We have implemented ABC using SAS’ High Performance Analytics platform, and achieve state-of-the-art accuracy in the estimation of k.

1. DETERMINING THE NUMBER OF CLUSTERS IN A DATASET USING ABC. I. Kabul, P. Hall, J. Silva, W. Sarle. Enterprise Miner R&D, SAS Institute.
2. CLUSTERING. Objects within a cluster are as similar as possible; objects from different clusters are as dissimilar as possible. (Figure: Hossein Parsaei)
3. CHALLENGES IN CLUSTERING
• No prior knowledge
• Which similarity measure?
• Which clustering algorithm?
• How to evaluate the results?
• How many clusters?
The Aligned Box Criterion (ABC) addresses the unsolved, important problem of determining the number of clusters in a data set. ABC can be applied to market segmentation and many other types of statistical, data mining, and machine learning analyses.
4. CONTENTS
• Background
• Aligned Box Criterion (ABC) method
• Results
• ABC method on a parallel and distributed architecture
• Conclusions
5. BACKGROUND
6. FINDING THE RIGHT NUMBER OF CLUSTERS. Many methods have been proposed:
• Calinski-Harabasz index [Calinski 1974]
• Cubic clustering criterion (CCC) [Sarle 1983]
• Silhouette statistic [Rousseeuw 1987]
• Gap statistic [Tibshirani 2001]
• Jump method [Sugar 2003]
• Prediction strength [Tibshirani 2005]
• Dirichlet process [Teh 2006]
7. WITHIN-CLUSTER SUM OF SQUARES
• A good clustering yields clusters whose observations have a small within-cluster sum of squares (and a high between-cluster sum of squares).
• $W_k$ is low when the partition is good, BUT it is by construction monotone nonincreasing: within-cluster dissimilarity always decreases as more clusters are added.
Within-cluster SSE, a measure of the compactness of the clusters:
$$D_r = \sum_{i \in C_r} \sum_{j \in C_r} \lVert x_i - x_j \rVert^2 = 2 n_r \sum_{i \in C_r} \lVert x_i - \bar{x}_r \rVert^2, \qquad W_k = \sum_{r=1}^{k} \frac{1}{2 n_r} D_r$$
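A minimal sketch of the $W_k$ computation above, using numpy and scikit-learn's KMeans (the library choice and function name are ours, not the slides'):

```python
import numpy as np
from sklearn.cluster import KMeans

def within_cluster_ss(X, labels):
    """W_k: sum over clusters of squared distances to the cluster mean
    (equivalent to sum_r D_r / (2 n_r) for squared Euclidean distance)."""
    return sum(((X[labels == r] - X[labels == r].mean(axis=0)) ** 2).sum()
               for r in np.unique(labels))

X = np.random.default_rng(0).random((500, 2))            # toy data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(within_cluster_ss(X, labels))                       # matches KMeans inertia_
```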
8. BACKGROUND: USING W_k TO DETERMINE THE NUMBER OF CLUSTERS
Elbow method (L-curve method). Idea: use the k corresponding to the "elbow" of the $W_k$ curve.
Problem: there is no reference clustering to compare against, and the differences $W_k - W_{k-1}$ are not normalized for comparison.
9. BACKGROUND: REFERENCE DISTRIBUTIONS
• The cubic clustering criterion (CCC), the gap statistic, and ABC amplify the elbow phenomenon by using the difference between the within-cluster sum of squares of a clustering solution on the training data ($W_k$) and that of a clustering solution on a reference distribution ($W_k^*$).
• Ranked by reference-distribution complexity (highest to lowest): aligned box criterion (ABC), gap statistic, cubic clustering criterion (CCC).
Cubic Clustering Criterion (CCC): SAS Technical Report A-108, 1983. Gap statistic: Tibshirani et al., J. R. Statist. Soc., 2001.
10. CCC METHOD
Instead of using $W_k$ directly, CCC uses $R^2$:
$$R^2 = 1 - \frac{\mathrm{Trace}(W)}{\mathrm{Trace}(T)}, \qquad \mathrm{Trace}(W) = W_k$$
For the CCC calculation, $R^2$ and $E(R^2)$ are approximated by heuristic formulas:
$$\mathrm{CCC} = \log\left[\frac{1 - E(R^2)}{1 - R^2}\right] \cdot \frac{\sqrt{np^*/2}}{\left(0.001 + E(R^2)\right)^{1.2}}$$
The heuristics were derived from numerous Monte Carlo simulations, generating one hyper-cube reference distribution based on the dimensions of the given training dataset to test all k of interest. (Cubic Clustering Criterion: SAS Technical Report A-108, 1983.)
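Translating the displayed formula directly into code, with $R^2$ and $E(R^2)$ taken as inputs (the heuristic approximation of $E(R^2)$ lives in SAS Technical Report A-108 and is not reproduced here; the numeric values below are illustrative only):

```python
import numpy as np

def ccc(r2, expected_r2, n, p_star):
    """Cubic clustering criterion exactly as displayed on the slide."""
    return (np.log((1 - expected_r2) / (1 - r2))
            * np.sqrt(n * p_star / 2) / (0.001 + expected_r2) ** 1.2)

print(ccc(r2=0.85, expected_r2=0.70, n=7000, p_star=2))
```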
11. GAP STATISTIC METHOD
The gap statistic computes the log of the ratio $W_k^* / W_k$:
$$\mathrm{Gap}(k) = \log W_k^* - \log W_k$$
$W_k^*$ is calculated from a clustering solution on the reference distribution. Choose the k that maximizes $\mathrm{Gap}(k)$ (within some tolerance).
12. TWO TYPES OF UNIFORM REFERENCE DISTRIBUTIONS
1. Aligned with the feature axes (independent of the data geometry): Monte Carlo samples are drawn uniformly from the bounding box of the observations, aligned with the feature axes.
13. TWO TYPES OF UNIFORM REFERENCE DISTRIBUTIONS
2. Aligned with the principal axes (dependent on the data geometry): Monte Carlo samples are drawn uniformly from the bounding box of the observations, aligned with the principal axes.
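A sketch of both reference-sample generators in numpy; the uniform-box sampling follows the description above, and the principal-axis alignment uses an SVD of the centered data (the function names are ours):

```python
import numpy as np

def reference_feature_aligned(X, rng):
    """Uniform sample over the bounding box aligned with the feature axes."""
    return rng.uniform(X.min(axis=0), X.max(axis=0), size=X.shape)

def reference_pca_aligned(X, rng):
    """Uniform sample over the bounding box aligned with the principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Z = (X - mu) @ Vt.T                                   # rotate into PC space
    Z_ref = rng.uniform(Z.min(axis=0), Z.max(axis=0), size=Z.shape)
    return Z_ref @ Vt + mu                                # rotate back

rng = np.random.default_rng(0)
X = np.random.default_rng(1).normal(size=(300, 2))
X_ref = reference_pca_aligned(X, rng)
```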
14. COMPUTATION OF THE GAP STATISTIC
for b = 1 to B:
    Compute a Monte Carlo reference sample X_{1b}, X_{2b}, ..., X_{nb} (n is the number of observations)
for k = 1 to K:
    Cluster the observations into k groups and compute log W_k
    for b = 1 to B:
        Cluster the b-th Monte Carlo sample into k groups and compute log W*_{kb}
    Compute $\mathrm{Gap}(k) = \frac{1}{B} \sum_{b=1}^{B} \log W^*_{kb} - \log W_k$
    Compute sd(k), the standard deviation of $\{\log W^*_{kb}\}_{b=1,\dots,B}$, and set the total standard error $s_k = sd(k)\sqrt{1 + 1/B}$
Find the smallest k such that $\mathrm{Gap}(k) \ge \mathrm{Gap}(k+1) - s_{k+1}$
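A compact, runnable sketch of this procedure with feature-axis-aligned reference boxes and KMeans as the clusterer (both are assumptions; the slides do not fix the clustering algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

def log_wk(X, k, seed=0):
    """Log of the within-cluster sum of squares for a k-means solution."""
    return np.log(KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_)

def gap_statistic(X, k_max=10, B=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    refs = [rng.uniform(lo, hi, size=X.shape) for _ in range(B)]   # B reference samples
    gap, s = np.zeros(k_max + 1), np.zeros(k_max + 1)
    for k in range(1, k_max + 1):
        log_wk_star = np.array([log_wk(R, k) for R in refs])
        gap[k] = log_wk_star.mean() - log_wk(X, k)
        s[k] = log_wk_star.std() * np.sqrt(1 + 1 / B)
    for k in range(1, k_max):                                      # smallest admissible k
        if gap[k] >= gap[k + 1] - s[k + 1]:
            return k, gap
    return k_max, gap
```

Averaging the log W*_{kb} values (rather than taking the log of their mean) mirrors the formula on the slide.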
15. GAP STATISTIC (figure)
16. NO-CLUSTER EXAMPLE (JOURNAL VERSION) (figure)
17. ABC (ALIGNED BOX CRITERION)
18. ABC METHOD
ABC improves on CCC and the gap statistic by generating better estimates of $W_k^*$. ABC uses a separate reference distribution for each tested k (the candidate number of clusters):
• Data-driven Monte Carlo simulation of the reference distribution at each tested k.
• The reference distribution consists of k uniform hyper-boxes, each aligned with the principal components of one cluster from the clustering solution of the input data.
(Figures: gap statistic reference distribution vs. ABC reference distribution.)
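A sketch of an ABC-style reference generator as described on this slide: one uniform box per cluster, aligned with that cluster's principal axes. This is our reading of the slide in numpy/scikit-learn, not the PROC HPCLUS implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def abc_reference(X, labels, rng):
    """One reference sample: for each cluster, draw uniformly from a box
    aligned with that cluster's principal axes, keeping the cluster sizes."""
    ref = np.empty_like(X, dtype=float)
    for r in np.unique(labels):
        C = X[labels == r]
        mu = C.mean(axis=0)
        _, _, Vt = np.linalg.svd(C - mu, full_matrices=False)
        Z = (C - mu) @ Vt.T                                # rotate into this cluster's PC space
        Z_ref = rng.uniform(Z.min(axis=0), Z.max(axis=0), size=Z.shape)
        ref[labels == r] = Z_ref @ Vt + mu                 # rotate back
    return ref

rng = np.random.default_rng(0)
X = np.vstack([np.random.default_rng(i).normal(4 * i, 1, size=(200, 2)) for i in range(3)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_ref = abc_reference(X, labels, rng)
```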
19. ABC METHOD
Why multiple reference distributions? The gap statistic performs a hypothesis test of k clusters vs. no clusters over the whole input space.
• ABC is closer to a recursive hypothesis test of 1 cluster vs. 2 clusters within each of the k candidate clusters.
• This is a more stringent test: it is harder for larger k to pass, which is desirable.
(Figures: gap statistic reference distribution vs. ABC reference distribution.)
20. ESTIMATING k: REFERENCE DISTRIBUTIONS (figure: sample data)
21.-30. ESTIMATING k: REFERENCE DISTRIBUTIONS, Aligned Box Criterion (figure sequence; these slides carry no additional text)
31. ALIGNED BOX CRITERION (ABC)
for k = 1 to K:
    Cluster the observations into k groups and compute log W_k
    for b = 1 to B:
        Considering each of the k clusters separately, compute a Monte Carlo sample X_{1b}, X_{2b}, ..., X_{nb} from the aligned boxes (n is the number of observations)
        Cluster the Monte Carlo sample into k groups and compute log W+_{kb}
    Compute $\mathrm{ABC}(k) = \log W_k^+ - \log W_k$, where $\log W_k^+ = \frac{1}{B} \sum_{b=1}^{B} \log W^+_{kb}$
    Compute sd(k), the standard deviation of $\{\log W^+_{kb}\}_{b=1,\dots,B}$, and set the total standard error $s_k = sd(k)\sqrt{1 + 1/B}$
Find the smallest k such that $\mathrm{ABC}(k) \ge \mathrm{ABC}(k+1) - s_{k+1}$
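Putting the pieces together, a hedged end-to-end sketch of ABC(k). It assumes KMeans as the clusterer and reuses the abc_reference helper defined in the earlier snippet; it is a reading of the slides, not the PROC HPCLUS code:

```python
import numpy as np
from sklearn.cluster import KMeans

def abc_criterion(X, k_max=10, B=20, seed=0):
    """ABC(k) for k = 1..k_max and the smallest k satisfying the slide's rule."""
    rng = np.random.default_rng(seed)
    abc, s = np.zeros(k_max + 1), np.zeros(k_max + 1)
    for k in range(1, k_max + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
        log_wk = np.log(km.inertia_)
        log_wk_plus = np.empty(B)
        for b in range(B):
            ref = abc_reference(X, km.labels_, rng)        # k aligned boxes (helper above)
            km_ref = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(ref)
            log_wk_plus[b] = np.log(km_ref.inertia_)
        abc[k] = log_wk_plus.mean() - log_wk
        s[k] = log_wk_plus.std() * np.sqrt(1 + 1 / B)
    for k in range(1, k_max):
        if abc[k] >= abc[k + 1] - s[k + 1]:
            return k, abc
    return k_max, abc
```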
32. ABC METHOD: RESULTS
33. ESTIMATING k: REFERENCE DISTRIBUTIONS. With ABC, $W_k^*$ decreases faster. (Figures: gap statistic vs. aligned box criterion.)
34. ESTIMATING k: REFERENCE DISTRIBUTIONS. Clearer maxima. (Figures: gap statistic vs. aligned box criterion.)
35. RESULTS, SIMULATED: SEVEN OVERLAPPING CLUSTERS (figure)
36. RESULTS, SIMULATED: SEVEN OVERLAPPING CLUSTERS. Observations: 7,000; variables: 2; Monte Carlo replications: 20. (Figures: CCC method vs. ABC method.)
37. RESULTS, SIMULATED: SEVEN OVERLAPPING CLUSTERS (figure)
38. ESTIMATING k: CLAIMS PREDICTION CHALLENGE DATA
• Anonymized customer data
• 32 customer and product features
• 13,184,290 customer records
39. ESTIMATING k: EXECUTING THE CALCULATIONS
• Cubic clustering criterion: PROC FASTCLUS
• Gap statistic: R cluster package via the Open Source Integration node in SAS Enterprise Miner
• Aligned box criterion: PROC HPCLUS
40. ESTIMATING k: INTERPRETING RESULTS. Cubic clustering criterion (figure).
41. ESTIMATING k: INTERPRETING RESULTS. Gap statistic (figure).
42. ESTIMATING k: INTERPRETING RESULTS. Aligned box criterion (figure).
43. REFERENCE DISTRIBUTION: EFFECT OF CHANGING THE NUMBER OF OBSERVATIONS
• How does the number of observations in the reference distribution affect the result?
• Based on the number of observations n in the input dataset, we generated w·n observations in the reference distribution, where w is between 0 and 1.
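In code, this only changes the size of each reference draw; a one-parameter tweak to the earlier generators (the parameter w follows the slide, the function name is ours):

```python
import numpy as np

def reference_subsampled(X, w, rng):
    """Draw w*n reference observations from the feature-aligned bounding box."""
    n_ref = max(1, int(w * X.shape[0]))
    return rng.uniform(X.min(axis=0), X.max(axis=0), size=(n_ref, X.shape[1]))

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
ref = reference_subsampled(X, w=0.1, rng=rng)   # 1,000 reference observations
```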
44. RESULTS: SIMPLE CASE (figure)
45. RESULTS: DATA SET WITH MORE CLUSTERS (figure)
46. RESULTS: DATA SET WITH MORE OBSERVATIONS (figure)
47. RESULTS: REAL DATA. Kaggle Claims Prediction Challenge (n = 13,184,290, p = 35), 50 runs. (figure)
48. RESULTS: SCALABILITY (figure)
49. RESULTS: STABILITY (figure)
50. ABC METHOD FOR PARALLEL AND DISTRIBUTED ARCHITECTURES
51. PARALLEL ABC, PARTS 1-2 (root node plus worker nodes Node1, Node2, ..., NodeN)
Part 1: (1) Run k-means clustering in parallel for k clusters; (2) assign each observation to a cluster; (3) compute $W_k$.
Part 2: (1) Assign each cluster to a node; (2) collect the XX' matrix for each cluster on its assigned node using a tree-based algorithm; (3) do PCA using the XX' matrix.
52. PARALLEL ABC, PARTS 3-4
Part 3: (1) The eigenvectors are broadcast to every node; (2) based on their assigned clusters, the observations on each node are projected into the new space.
Part 4: (1) Bounding boxes are computed locally at each node for each cluster; (2) the bounding-box information from each node is collected at the root, which computes the bounding-box coordinates for each cluster; (3) this information is distributed to the nodes, and each node generates reference distributions.
53. PARALLEL ABC, PART 5
Run k-means clustering in parallel on the reference distribution and compute $W_k^+$; do this for B reference distributions; then compute ABC(k).
54. PARALLEL ABC, PART 6
What about the O(n^3) complexity of the SVD?
• The computation of XX' is parallelized.
• Or, do stochastic SVD.
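A single-machine sketch of both ideas: the per-node partial Gram sums are simulated with array splits, and the stochastic alternative uses scikit-learn's randomized_svd. The node layout and the matrix convention (X'X with rows as observations) are our assumptions, not the slides':

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 35))
Xc = X - X.mean(axis=0)

# Idea 1: each "node" computes a partial Gram matrix; the root sums them,
# and the eigendecomposition then runs on the small p x p matrix.
partial_grams = [chunk.T @ chunk for chunk in np.array_split(Xc, 4)]   # 4 simulated nodes
gram = np.sum(partial_grams, axis=0)
eigvals, eigvecs = np.linalg.eigh(gram)                                # principal axes

# Idea 2: randomized (stochastic) SVD applied directly to the tall data matrix.
U, S, Vt = randomized_svd(Xc, n_components=10, random_state=0)
```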
55. ABC METHOD: CONCLUSION
56. RESULTS. More accurate reference distributions lead to:
• Better-defined maxima.
• $W_k^*$ values that decrease rapidly, especially for K > k.
• Exposure of possible alternative solutions.
57. CONCLUSION. For large, high-dimensional, or noisy data, ABC is found to be:
• Stable
• Scalable
Moreover, it exhibits desirable properties:
• Clearer peaks
• A more stringent hypothesis test that promotes smaller k values
58. www.SAS.com Q&A THANK YOU
