
DMTM Lecture 13 Representative based clustering

Slides for the 2016/2017 edition of the Data Mining and Text Mining Course at the Politecnico di Milano. The course is also part of the joint program with the University of Illinois at Chicago.


  1. 1. Prof. Pier Luca Lanzi Representative-Based Clustering Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
  2. 2. Prof. Pier Luca Lanzi Readings • Mining of Massive Datasets (Chapter 7) • Data Mining and Analysis (Section 13.3) 2
  3. 3. Prof. Pier Luca Lanzi How can we represent clusters?
  4. 4. Prof. Pier Luca Lanzi Representation-Based Algorithms • Given a dataset of N instances and a desired number of clusters k, this class of algorithms generates a partition C of the N instances into k clusters {C1, C2, …, Ck} • For each cluster there is a point that summarizes the cluster • The most common choice is the mean of the points in the cluster, μi = (1/ni) Σx∈Ci x, where ni = |Ci| and μi is the centroid 4
  5. 5. Prof. Pier Luca Lanzi Representation-Based Algorithms • The goal of the clustering process is to select the best partition according to some scoring function • The sum of squared errors (SSE) is the most common scoring function, SSE(C) = Σi Σx∈Ci ||x − μi||² • The goal of the clustering process is thus to find C* = argminC SSE(C) • Brute-force Approach § Generate all the possible clusterings C = {C1, C2, …, Ck} and select the best one. Unfortunately, there are O(k^N/k!) possible partitions 5
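As a sketch of how the SSE scoring function above can be computed in R (the function name sse and the use of the built-in iris data are illustrative assumptions, not part of the slides):

    # Sum of squared errors (SSE) of a partition: for each cluster, add the
    # squared Euclidean distances of its points to the cluster centroid
    sse <- function(data, assignment) {
      total <- 0
      for (i in unique(assignment)) {
        pts <- data[assignment == i, , drop = FALSE]
        centroid <- colMeans(pts)
        total <- total + sum(sweep(pts, 2, centroid)^2)
      }
      total
    }
    # e.g. SSE of the 3-cluster partition found by kmeans on the iris measurements
    # sse(as.matrix(iris[, 1:4]), kmeans(iris[, 1:4], 3)$cluster)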
  6. 6. Prof. Pier Luca Lanzi k-Means Algorithm • Most widely known representative-based algorithm • Assumes a Euclidean space but can easily be extended to the non-Euclidean case • Employs a greedy iterative approach that minimizes the SSE objective; accordingly, it can converge to a local optimum instead of the globally optimal clustering. 6
  7. 7. Prof. Pier Luca Lanzi 1. Initially choose k points that are likely to be in different clusters; 2. Make these points the centroids of their clusters; 3. FOR each remaining point p DO Find the centroid to which p is closest; Add p to the cluster of that centroid; Adjust the centroid of that cluster to account for p; END;
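A minimal R sketch of the point-at-a-time scheme on slide 7 (the function name kmeans_sequential and the random choice of the initial k points are assumptions; the slide only asks for points "likely to be in different clusters"):

    # Sequential k-means: assign each remaining point to its closest centroid
    # and immediately adjust that centroid to account for the new point
    kmeans_sequential <- function(data, k) {
      data <- as.matrix(data)
      centroids <- data[sample(nrow(data), k), , drop = FALSE]  # naive initialization
      counts <- rep(1, k)
      assignment <- integer(nrow(data))
      for (p in seq_len(nrow(data))) {
        dists <- rowSums(sweep(centroids, 2, data[p, ])^2)   # squared distances
        j <- which.min(dists)
        counts[j] <- counts[j] + 1
        centroids[j, ] <- centroids[j, ] + (data[p, ] - centroids[j, ]) / counts[j]
        assignment[p] <- j
      }
      list(centers = centroids, cluster = assignment)
    }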
  8.–22. Prof. Pier Luca Lanzi [slides 8–22 are figure-only; no text content]
  23. 23. Prof. Pier Luca Lanzi Initializing Clusters • Solution 1 §Pick points that are as far away from one another as possible. • Variation of solution 1 Pick the first point at random; WHILE there are fewer than k points DO Add the point whose minimum distance from the selected points is as large as possible; END; • Solution 2 §Cluster a sample of the data, perhaps hierarchically, so there are k clusters. Pick a point from each cluster, perhaps that point closest to the centroid of the cluster. 23
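A possible R sketch of the "pick points as far away from one another as possible" initialization described in the variation of solution 1 (the function name farthest_first is an assumption):

    # Farthest-first initialization: start from a random point, then repeatedly
    # add the point whose minimum distance to the already selected points is largest
    farthest_first <- function(data, k) {
      data <- as.matrix(data)
      chosen <- sample(nrow(data), 1)
      while (length(chosen) < k) {
        d <- apply(data, 1, function(p)
          min(rowSums(sweep(data[chosen, , drop = FALSE], 2, p)^2)))
        d[chosen] <- -Inf               # never re-pick a selected point
        chosen <- c(chosen, which.max(d))
      }
      data[chosen, , drop = FALSE]      # the k initial centroids
    }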
  24. 24. Prof. Pier Luca Lanzi Two different K-means Clusterings 24 [figures: Original Points, Optimal Clustering, Sub-optimal Clustering]
  25. 25. Prof. Pier Luca Lanzi Importance of Choosing the Initial Centroids 25 [figures: Iterations 1–6]
  26. 26. Prof. Pier Luca Lanzi Importance of Choosing the Initial Centroids 26 [figures: Iterations 1–5]
  27. 27. Prof. Pier Luca Lanzi Why Is Selecting the Best Initial Centroids Difficult? 27 • If there are K ‘real’ clusters then the chance of selecting one centroid from each cluster is small • The chance is relatively small when K is large • If the clusters are all of the same size n, then the probability is (K! n^K)/(Kn)^K = K!/K^K • For example, if K = 10, then the probability is 10!/10^10 ≈ 0.00036 • Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t • Consider an example of five pairs of clusters
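The probability quoted on slide 27 can be checked directly in R:

    # Probability of drawing one initial centroid from each of K equally sized
    # clusters: K! * n^K / (K*n)^K = K!/K^K; for K = 10 this is about 0.00036
    K <- 10
    factorial(K) / K^K    # 0.00036288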
  28. 28. Prof. Pier Luca Lanzi Ten Clusters Example 28 [figures: Iterations 1–4] Starting with two initial centroids in one cluster of each pair of clusters
  29. 29. Prof. Pier Luca Lanzi 10 Clusters Example 29 Starting with some pairs of clusters having three initial centroids, while others have only one [figures: Iterations 1–4]
  30. 30. Prof. Pier Luca Lanzi Dealing with the Initial Centroids Issue 30 • Multiple runs help, but probability is not on your side • Sample and use another clustering method (hierarchical?) to determine the initial centroids • Select more than k initial centroids and then select among these initial centroids • Postprocessing • Bisecting K-means, which is not as susceptible to initialization issues
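In R, the "multiple runs" strategy is available directly through the nstart argument of kmeans, which keeps the run with the lowest total within-cluster sum of squares (the built-in iris data is used only as an example):

    # Run k-means 25 times from different random initializations and keep the best
    km <- kmeans(iris[, 1:4], centers = 3, nstart = 25)
    km$tot.withinss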
  31. 31. Prof. Pier Luca Lanzi Updating Centers Incrementally 31 • In the basic K-means algorithm, centroids are updated after all points have been assigned to a centroid • An alternative is to update the centroids after each assignment (incremental approach) §Each assignment updates zero or two centroids §More expensive §Introduces an order dependency §Never produces an empty cluster §Can use “weights” to change the impact
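As a worked example of the incremental update: after assigning a new point x to a cluster that currently has n points and centroid μ, the new centroid is μ + (x − μ)/(n + 1). A small illustration in R (the numbers are made up):

    # old centroid mu over n points, newly assigned point x
    mu <- c(1.0, 2.0); n <- 4; x <- c(2.0, 0.0)
    mu + (x - mu) / (n + 1)   # new centroid, same as averaging all n+1 points
    # [1] 1.2 1.6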
  32. 32. Prof. Pier Luca Lanzi Pre-processing and Post-processing 32 • Pre-processing §Normalize the data §Eliminate outliers • Post-processing §Eliminate small clusters that may represent outliers §Split ‘loose’ clusters, i.e., clusters with relatively high SSE §Merge clusters that are ‘close’ and that have relatively low SSE §These steps can be used during the clustering process
  33. 33. Prof. Pier Luca Lanzi Bisecting K-means • Variant of K-means that can produce a partitional or a hierarchical clustering 33
  34. 34. Prof. Pier Luca Lanzi Bisecting K-means Example 34
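A rough R sketch of bisecting k-means as described on slide 33: start with a single cluster and repeatedly split the cluster with the largest SSE using ordinary 2-means, until k clusters are obtained (the function name bisecting_kmeans is an assumption):

    bisecting_kmeans <- function(data, k, nstart = 10) {
      data <- as.matrix(data)
      assignment <- rep(1, nrow(data))
      while (length(unique(assignment)) < k) {
        # within-cluster SSE of every current cluster
        wss <- tapply(seq_len(nrow(data)), assignment, function(idx) {
          pts <- data[idx, , drop = FALSE]
          sum(sweep(pts, 2, colMeans(pts))^2)
        })
        worst <- as.integer(names(which.max(wss)))   # cluster to bisect
        idx <- which(assignment == worst)
        split <- kmeans(data[idx, , drop = FALSE], centers = 2, nstart = nstart)
        assignment[idx[split$cluster == 2]] <- max(assignment) + 1
      }
      assignment   # recording the successive splits also yields a hierarchy
    }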
  35. 35. Prof. Pier Luca Lanzi Limitation of k-Means 35
  36. 36. Prof. Pier Luca Lanzi Limitations of K-means 36 • K-means has problems when clusters are of differing §Sizes §Densities §Non-globular shapes • K-means also has problems when the data contains outliers.
  37. 37. Prof. Pier Luca Lanzi Limitations of K-means: Differing Sizes 37 Original Points K-means (3 Clusters)
  38. 38. Prof. Pier Luca Lanzi Limitations of K-means: Differing Density 38 Original Points K-means (3 Clusters)
  39. 39. Prof. Pier Luca Lanzi Limitations of K-means: Non-globular Shapes 39 Original Points K-means (2 Clusters)
  40. 40. Prof. Pier Luca Lanzi Overcoming K-means Limitations 40 Original Points K-means Clusters One solution is to use many clusters: k-means then finds parts of the natural clusters, which need to be put back together.
  41. 41. Prof. Pier Luca Lanzi Overcoming K-means Limitations 41 Original Points K-means Clusters
  42. 42. Prof. Pier Luca Lanzi Overcoming K-means Limitations 42 Original Points K-means Clusters
  43. 43. Prof. Pier Luca Lanzi K-Means Clustering Summary 43 • Strength §Relatively efficient §Often terminates at a local optimum §The global optimum may be found using techniques such as deterministic annealing and genetic algorithms • Weakness §Applicable only when the mean is defined, so what about categorical data? §Need to specify k, the number of clusters, in advance §Unable to handle noisy data and outliers §Not suitable for discovering clusters with non-convex shapes
  44. 44. Prof. Pier Luca Lanzi K-Means Clustering Summary 44 • Advantages §Simple, understandable §Items are automatically assigned to clusters • Disadvantages §Must pick the number of clusters beforehand §All items are forced into a cluster §Too sensitive to outliers
  45. 45. Prof. Pier Luca Lanzi Variations of the K-Means Method 45 • A few variants of k-means differ in §Selection of the initial k means §Dissimilarity calculations §Strategies to calculate cluster means • Handling categorical data: k-modes §Replacing means of clusters with modes §Using new dissimilarity measures to deal with categorical objects §Using a frequency-based method to update modes of clusters §A mixture of categorical and numerical data: the k-prototype method
  47. 47. Prof. Pier Luca Lanzi The BFR Algorithm
  48. 48. Prof. Pier Luca Lanzi The BFR Algorithm • BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to handle very large (disk-resident) data sets • Assumes that clusters are normally distributed around a centroid in a Euclidean space • Standard deviations in different dimensions may vary • Clusters are axis-aligned ellipses • Provides an efficient way to summarize clusters (the memory required should be O(clusters) and not O(data)) 48
  49. 49. Prof. Pier Luca Lanzi The BFR Algorithm • Points are read from disk one chunk at a time (so that each chunk fits into main memory) • Most points from previous memory loads are summarized by simple statistics • To begin, we select the initial k centroids from the first load by some sensible approach §Take k random points §Take a small random sample and cluster it optimally §Take a sample; pick a random point, and then k–1 more points, each as far from the previously selected points as possible 49
  50. 50. Prof. Pier Luca Lanzi Three Classes of Points • Discard set (DS) §Points close enough to a centroid to be summarized • Compression set (CS) §Groups of points that are close together but not close to any existing centroid §These points are summarized, but not assigned to a cluster • Retained set (RS) §Isolated points waiting to be assigned to a compression set 50
  51. 51. Prof. Pier Luca Lanzi The Status of the BFR Algorithm 51 [figure: a cluster whose points are in the DS with its centroid, compressed sets whose points are in the CS, and isolated points in the RS] • Discard set (DS): close enough to a centroid to be summarized • Compression set (CS): summarized, but not assigned to a cluster • Retained set (RS): isolated points
  52. 52. Prof. Pier Luca Lanzi Summarizing Sets of Points • For each cluster, the discard set (DS) is summarized by: • The number of points, N • The vector SUM, whose component SUM(i) is the sum of the coordinates of the points in the ith dimension • The vector SUMSQ whose component SUMSQ(i) is the sum of squares of coordinates in ith dimension 52 A cluster. All its points are in the DS. The centroid
  53. 53. Prof. Pier Luca Lanzi Summarizing Points: Comments • 2d + 1 values represent any size cluster (d is the number of dimensions) • Average in each dimension (the centroid) can be calculated as SUM(i)/N • Variance of a cluster’s discard set in dimension i is computed as SUMSQ(i)/N – (SUM(i)/N)² • And the standard deviation is the square root of that variance 53
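A small R sketch of the (N, SUM, SUMSQ) summaries and of the centroid and variance derived from them as on slides 52–53 (the function names are assumptions):

    # BFR-style cluster summary: only N, SUM and SUMSQ are kept per cluster
    summarize_cluster <- function(points) {
      points <- as.matrix(points)
      list(N = nrow(points), SUM = colSums(points), SUMSQ = colSums(points^2))
    }
    centroid_of <- function(s) s$SUM / s$N
    variance_of <- function(s) s$SUMSQ / s$N - (s$SUM / s$N)^2
    # two summaries can be merged without revisiting the points
    merge_summaries <- function(a, b)
      list(N = a$N + b$N, SUM = a$SUM + b$SUM, SUMSQ = a$SUMSQ + b$SUMSQ)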
  54. 54. Prof. Pier Luca Lanzi Processing Data in the BFR Algorithm 1. First, all points that are “sufficiently close” to the centroid of a cluster are added to that cluster (by updating its summary statistics) and then discarded 2. The points that are not “sufficiently close” to any centroid are clustered, along with the points in the retained set; any clustering algorithm, even a hierarchical one, can be used in this step 3. The miniclusters derived from the new points and the old retained set are merged (e.g., by using the same criteria used for hierarchical clustering) 4. Any point outside a cluster or a minicluster is dropped. When the last chunk of data has been processed, the remaining miniclusters and the points in the retained set can either be labeled as outliers or be assigned to one of the centroids (as k-means would do). Note that for miniclusters we only have N, SUM and SUMSQ, so it is easier to use merging criteria based on variance and similar statistics; for example, we might combine two clusters if their combined variance is below some threshold. 54
  55. 55. Prof. Pier Luca Lanzi “Sufficiently Close” • Two approaches have been proposed to determine whether a point is sufficiently close to a cluster • Add p to a cluster if § It has the centroid closest to p § It is also very unlikely that, after all the points have been processed, some other cluster centroid will be found to be nearer to p • We can measure the probability that, if p belongs to a cluster, it would be found as far as it is from the centroid of that cluster § This is where the assumption about the clusters containing normally distributed points aligned with the axes of the space is used 55
  56. 56. Prof. Pier Luca Lanzi Mahalanobis Distance • It is used to decide whether a point is close enough to a cluster • It is computed as the distance between a point and the centroid of a cluster, normalized by the standard deviation of the cluster in each dimension • Given p = (p1, …, pd) and c = (c1, …, cd), the Mahalanobis distance between p and c is computed as MD(p, c) = sqrt( Σi ((pi – ci)/σi)² ), where σi is the standard deviation of the cluster in dimension i • We assign p to the cluster with the least Mahalanobis distance from p, provided that the distance is below a certain threshold. A threshold of 4 means that we have only about a chance in a million of not including something that belongs to the cluster 56
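A minimal R sketch of this normalized distance computed from a (N, SUM, SUMSQ) summary (the function name mahalanobis_bfr is an assumption):

    # Mahalanobis distance of point p from a cluster summarized by s:
    # each coordinate difference is normalized by the cluster's standard deviation
    mahalanobis_bfr <- function(p, s) {
      centroid <- s$SUM / s$N
      sigma <- sqrt(s$SUMSQ / s$N - centroid^2)
      sqrt(sum(((p - centroid) / sigma)^2))
    }
    # assign p to the cluster with the smallest distance, provided it is below
    # the chosen threshold (e.g. 4 standard deviations)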
  57. 57. Prof. Pier Luca Lanzi k-Means for Arbitrary Shapes (the CURE algorithm)
  58. 58. Prof. Pier Luca Lanzi The CURE Algorithm • Problem with BFR/k-means: §Assumes clusters are normally distributed in each dimension §And axes are fixed – ellipses at an angle are not OK • CURE (Clustering Using REpresentatives): §Assumes a Euclidean distance §Allows clusters to assume any shape §Uses a collection of representative points to represent clusters 58
  59. 59. Prof. Pier Luca Lanzi k-means BFR and these?
  60. 60. Prof. Pier Luca Lanzi e e e e e e e e e e e h h h h h h h h h h h h h salary age salary of humanities vs engineering
  61. 61. Prof. Pier Luca Lanzi e e e e e e e e e e e h h h h h h h h h h h h h salary age salary of humanities vs engineering
  62. 62. Prof. Pier Luca Lanzi Starting CURE – Pass 1 of 2 • Pick a random sample of points that fit into main memory • Cluster the sample points to create the initial clusters (e.g. using hierarchical clustering) • Pick representative points §For each cluster pick k representative points (as dispersed as possible) §Create synthetic representative points by moving the k points toward the centroid of the cluster (e.g. 20%) 62
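A rough R sketch of the representative-point step of pass 1: pick well-scattered points within a cluster (farthest-first) and move them a fraction of the way toward the centroid (the function name and the default of 5 representatives are assumptions; the 20% shrink factor follows the slide):

    cure_representatives <- function(cluster_points, r = 5, shrink = 0.2) {
      pts <- as.matrix(cluster_points)
      centroid <- colMeans(pts)
      reps <- pts[sample(nrow(pts), 1), , drop = FALSE]
      while (nrow(reps) < min(r, nrow(pts))) {
        d <- apply(pts, 1, function(p) min(rowSums(sweep(reps, 2, p)^2)))
        reps <- rbind(reps, pts[which.max(d), ])
      }
      toward <- sweep(-reps, 2, centroid, FUN = "+")   # centroid - reps, row by row
      reps + shrink * toward                           # synthetic representatives
    }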
  63. 63. Prof. Pier Luca Lanzi e e e e e e e e e e e h h h h h h h h h h h h h salary age salary of humanities vs engineering
  64. 64. Prof. Pier Luca Lanzi e e e e e e e e e e e h h h h h h h h h h h h h salary age salary of humanities vs engineering synthetic representative points
  65. 65. Prof. Pier Luca Lanzi Starting CURE – Pass 2 of 2 • Rescan the whole dataset (from secondary memory) and, for each point p • Place p in the “closest cluster”, that is, the cluster with the representative that is closest to p 65
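Pass 2 can then be sketched as a single scan that places each point in the cluster owning its nearest representative (reps_by_cluster is an assumed list with one matrix of representative points per cluster):

    cure_assign <- function(data, reps_by_cluster) {
      data <- as.matrix(data)
      apply(data, 1, function(p) {
        # squared distance from p to the closest representative of each cluster
        d <- sapply(reps_by_cluster, function(reps)
          min(rowSums(sweep(as.matrix(reps), 2, p)^2)))
        which.min(d)   # index of the closest cluster
      })
    }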
  66. 66. Prof. Pier Luca Lanzi Expectation Maximization
  67. 67. Prof. Pier Luca Lanzi Expectation-Maximization (EM) Clustering • k-means assigns each point to only one cluster (hard assignment) • The approach can be extended to consider soft assignment of points to clusters, so that each point has a probability of belonging to each cluster • We assume that each cluster Ci is characterized by a multivariate normal distribution and thus identified by § The mean vector μi § The covariance matrix Σi • A clustering is identified by a parameter vector θ defined as θ = {μ1, Σ1, P(C1), …, μk, Σk, P(Ck)}, where the P(Ci) are the prior probabilities of the clusters and sum up to one 67
  68. 68. Prof. Pier Luca Lanzi Expectation-Maximization (EM) Clustering • The goal of maximum likelihood estimation (MLE) is to choose the parameters θ that maximize the likelihood of the data, that is, θ* = argmaxθ P(D | θ) = argmaxθ Πj Σi p(xj | Ci) P(Ci) • General idea § Start with an initial estimate of the parameter vector § Iteratively rescore the patterns against the mixture density produced by the parameter vector § Use the rescored patterns to update the parameter estimates § Patterns belong to the same cluster if their scores place them in the same mixture component 68
  69. 69. Prof. Pier Luca Lanzi The EM (Expectation Maximization) Algorithm • Initially, randomly assign the k cluster centers • Iteratively refine the clusters based on two steps • Expectation step § Assign each data point xi to cluster Ck with probability P(Ck | xi) = p(xi | Ck) P(Ck) / Σj p(xi | Cj) P(Cj), where p(xi | Ck) follows the normal distribution § This step calculates the probability of cluster membership of xi for each Ck • Maximization step § The model parameters are re-estimated from the updated probabilities § For instance, for the mean, μk = Σi P(Ck | xi) xi / Σi P(Ck | xi) 69
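A compact R sketch of the two steps for a univariate mixture of k normal components, a simplification of the multivariate case on the slides (all function and variable names are assumptions):

    em_gaussian_1d <- function(x, k, iterations = 50) {
      mu <- sample(x, k); sigma <- rep(sd(x), k); prior <- rep(1 / k, k)
      for (it in seq_len(iterations)) {
        # E-step: P(Ci | xj) proportional to p(xj | Ci) * P(Ci)
        resp <- sapply(seq_len(k), function(i) prior[i] * dnorm(x, mu[i], sigma[i]))
        resp <- resp / rowSums(resp)
        # M-step: weighted re-estimation of means, variances and priors
        nk    <- colSums(resp)
        mu    <- colSums(resp * x) / nk
        sigma <- sqrt(colSums(resp * (outer(x, mu, "-"))^2) / nk)
        prior <- nk / length(x)
      }
      list(mean = mu, sd = sigma, prior = prior)
    }
    # e.g. em_gaussian_1d(c(rnorm(100, 0), rnorm(100, 4)), k = 2)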
  70. 70. Prof. Pier Luca Lanzi Run the Python notebooks for the algorithms included in this lecture
  71. 71. Prof. Pier Luca Lanzi Examples using R
  72. 72. Prof. Pier Luca Lanzi k-Means Clustering in R 72
    set.seed(1234)
    # randomly generated points around three centers
    x <- rnorm(12, mean=rep(1:3, each=4), sd=0.2)
    y <- rnorm(12, mean=rep(c(1,2,1), each=4), sd=0.2)
    plot(x, y, pch=19, cex=2, col="blue")
    # put the points in a data frame and run k-means with k = 3
    d <- data.frame(x, y)
    km <- kmeans(d, 3)
    names(km)
    # replot the points and overlay the cluster centers in red
    plot(x, y, pch=19, cex=2, col="blue")
    points(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
  73. 73. Prof. Pier Luca Lanzi k-Means Clustering in R 73
    # start instead from explicitly generated random centroids
    km <- kmeans(d, centers=cbind(runif(3,0,3), runif(3,0,2)))
    plot(x, y, pch=19, cex=2, col="blue")
    points(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
  74. 74. Prof. Pier Luca Lanzi Evaluation of k-Means & Number of Clusters 74
    ### Evaluate clustering in kmeans using elbow/knee analysis
    library(foreign)
    library(GMD)
    iris <- read.arff("iris.arff")
    # init two vectors that will contain the evaluation
    # in terms of within and between sum of squares
    plot_wss <- rep(0, 12)
    plot_bss <- rep(0, 12)
    # evaluate a clustering for every number of clusters from 1 to 12
    for (i in 1:12) {
      cl <- kmeans(iris[, 1:4], i)
      plot_wss[i] <- cl$tot.withinss
      plot_bss[i] <- cl$betweenss
    }
  75. 75. Prof. Pier Luca Lanzi Evaluation of k-Means & Number of Clusters 75
    # plot the results: between-cluster SS in blue, within-cluster SS in red
    x <- 1:12
    plot(x, plot_bss, main="Within/Between Cluster Sum-of-squares",
         cex=2, pch=18, col="blue",
         xlab="Number of Clusters", ylab="Evaluation", ylim=c(0, 700))
    lines(x, plot_bss, col="blue")
    points(x, plot_wss, cex=2, pch=19, col="red")
    lines(x, plot_wss, col="red")
  76. 76. Prof. Pier Luca Lanzi Elbow & Knee Analysis 76
  77. 77. Prof. Pier Luca Lanzi http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/K-Means http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/Expectation_Maximization_(EM) Software Packages
