Cure, Clustering Algorithm


1. Cure: An Efficient Clustering Algorithm for Large Databases
   Possamai Lino, 800509
   Department of Computer Science, University of Venice
   www.possamai.it/lino
   Data Mining Lecture - September 13th, 2006
2. Introduction
   - The main clustering algorithms are those that use partitioning or hierarchical agglomerative techniques.
   - The two differ in direction: the former starts with one big cluster and, step by step, splits the existing clusters downward until the desired number of clusters is reached.
   - The latter starts with single-point clusters and, step by step, merges clusters upward until the desired number of clusters is reached.
   - The latter approach is used in this work (a minimal sketch of the merge loop follows below).
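To make the bottom-up (agglomerative) loop concrete, here is a minimal Python/NumPy sketch; the single-link distance and the function name are illustrative assumptions, not code from the slides or the paper.

```python
import numpy as np

def agglomerative(points, k):
    """Bottom-up clustering: start with one cluster per point and repeatedly
    merge the two closest clusters until only k clusters remain."""
    clusters = [[i] for i in range(len(points))]          # singleton clusters
    while len(clusters) > k:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single-link distance between clusters a and b (an assumption;
                # CURE itself measures distances between representative points)
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters.pop(b)                    # merge b into a
    return clusters

# usage: six 2-D points grouped into 2 clusters
pts = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
print(agglomerative(pts, 2))
```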
3. Drawbacks of Traditional Clustering Algorithms
   - The result of the clustering process depends on the approach used to represent each cluster.
   - The centroid-based approach (using d_mean) considers only one point, the cluster centroid, as representative of a cluster.
   - Another approach, for example all-points (based on d_min), uses all the points inside the cluster as its representation.
   - The all-points choice is extremely sensitive to outliers and to slight changes in the position of data points, while the centroid-based approach cannot work well for non-spherical or arbitrarily shaped clusters (the two distance measures are sketched below).
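As an illustration of the two representations just contrasted, here is a minimal sketch of the centroid distance d_mean versus the all-points (closest-pair) distance d_min; the variable names are mine.

```python
import numpy as np

def d_mean(cluster_a, cluster_b):
    """Centroid-based distance: each cluster is reduced to its mean,
    so only the two centroids are compared."""
    return np.linalg.norm(cluster_a.mean(axis=0) - cluster_b.mean(axis=0))

def d_min(cluster_a, cluster_b):
    """All-points distance: every point represents its cluster and the
    distance is that of the closest pair across the two clusters."""
    return min(np.linalg.norm(p - q) for p in cluster_a for q in cluster_b)

a = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0]])
b = np.array([[5.0, 5.0], [5.0, 7.0], [2.5, 0.5]])   # the last point sits close to cluster a
print(d_mean(a, b))   # large: the centroids are still far apart
print(d_min(a, b))    # small: a single nearby point (e.g. an outlier) dominates
```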
4. Contribution of CURE, ideas
   - CURE employs a new hierarchical algorithm that adopts a middle ground between the centroid-based and all-points approaches.
   - A constant number c of well-scattered points in a cluster is chosen as representatives. These points capture all the possible shapes the cluster could have.
   - The clusters with the closest pair of representative points are the clusters that are merged at each step of CURE.
   - Random sampling and partitioning are used to reduce the size of the input data set.
5. CURE architecture
6. Random Sampling
   - When the whole data set is used as input to the algorithm, the execution time can be high due to I/O costs.
   - Random sampling is the answer to this problem. It is shown that with only 2.5% of the original data set the algorithm's results are better than those of traditional algorithms, execution times are lower, and the geometry of the clusters is preserved.
   - To speed up the algorithm's operations, the random sample is kept in main memory.
   - The overhead of generating the random sample is very small compared to the time needed to cluster the sample (a minimal sampling sketch follows below).
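A minimal sketch of the sampling step, assuming the data fit in a NumPy array and using the 2.5% figure quoted on the slide; the function name and parameters are illustrative.

```python
import numpy as np

def draw_sample(data, fraction=0.025, seed=0):
    """Draw a uniform random sample (without replacement) that is small
    enough to be clustered entirely in main memory; 2.5% mirrors the
    fraction quoted on the slide."""
    rng = np.random.default_rng(seed)
    n_sample = max(1, int(len(data) * fraction))
    idx = rng.choice(len(data), size=n_sample, replace=False)
    return data[idx]

# usage: a stand-in for a large data set
data = np.random.default_rng(1).normal(size=(100_000, 2))
sample = draw_sample(data)
print(sample.shape)   # (2500, 2): the in-memory sample that is actually clustered
```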
7. Partitioning sample
   - When the clusters in the data set become less dense, random sampling with a limited number of points becomes useless because it implies poor clustering quality, so the random sample has to grow.
   - The authors propose a simple partitioning scheme to speed up the CURE algorithm.
   - The scheme follows these steps (sketched in the code below):
     - Partition the n sample points into p partitions (n/p points each).
     - Partially cluster each partition until the number of clusters created in it reduces to n/(p*q), with q > 1.
     - Cluster the partially clustered partitions, starting from the n/q clusters created.
   - The advantage of partitioning the input is the reduced execution time.
   - Each group of n/p points must fit in main memory to speed up the partial clustering.
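The three steps above can be sketched as follows. `partial_cluster` is a stand-in for CURE's hierarchical pass run until a target number of clusters remains; it is assumed here, not taken from the paper.

```python
import numpy as np

def partial_cluster(points, target):
    """Stand-in for CURE's hierarchical pass: cut the points into `target`
    groups (a real implementation would merge nearest clusters instead)."""
    return np.array_split(points, target)

def partitioned_clustering(sample, p=4, q=3, final_k=5):
    """Pre-cluster each of the p partitions down to n/(p*q) clusters,
    then run clustering again on the n/q pre-clusters that result."""
    n = len(sample)
    partitions = np.array_split(sample, p)               # p partitions of about n/p points
    preclusters = []
    for part in partitions:
        preclusters += partial_cluster(part, max(1, n // (p * q)))
    # second pass over roughly n/q pre-clusters (here summarised by their means)
    summaries = np.vstack([c.mean(axis=0) for c in preclusters])
    return partial_cluster(summaries, final_k)

sample = np.random.default_rng(0).normal(size=(2400, 2))
print(len(partitioned_clustering(sample)))               # 5 final clusters
```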
8. Hierarchical Clustering Algorithm
   - A constant number c of well-scattered points in a cluster is chosen as representatives. These points capture all the possible shapes the cluster could have.
   - The points are shrunk toward the mean of the cluster by a fraction α. If α = 0 the behavior of the algorithm becomes similar to the all-points representation; at the other extreme (α = 1) CURE reduces to the centroid-based approach.
   - Outliers are typically further away from the mean of the cluster, so the effect of shrinking is to dampen their influence (a sketch follows below).
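A minimal sketch of choosing c well-scattered representatives and shrinking them toward the cluster mean by α; the farthest-point heuristic used to scatter the points is an assumption consistent with the description above.

```python
import numpy as np

def representatives(cluster, c=4, alpha=0.5):
    """Pick c well-scattered points (farthest-point heuristic) and shrink them
    toward the cluster mean by alpha: alpha = 0 leaves the scattered points in
    place (all-points-like), alpha = 1 collapses them onto the centroid."""
    mean = cluster.mean(axis=0)
    # start with the point farthest from the mean, then greedily add the
    # point farthest from the representatives chosen so far
    reps = [cluster[np.argmax(np.linalg.norm(cluster - mean, axis=1))]]
    while len(reps) < min(c, len(cluster)):
        dist_to_reps = np.min([np.linalg.norm(cluster - r, axis=1) for r in reps], axis=0)
        reps.append(cluster[np.argmax(dist_to_reps)])
    reps = np.array(reps)
    return reps + alpha * (mean - reps)                   # shrink toward the mean

cluster = np.random.default_rng(2).normal(size=(50, 2))
print(representatives(cluster, c=4, alpha=0.5))
```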
9. Hierarchical Clustering Algorithm
   - The clusters with the closest pair of representative points are the clusters that are merged at each step of CURE.
   - As the number of points inside each cluster increases, the process of choosing c new representative points can become very slow.
   - For this reason a faster procedure is proposed: instead of choosing the c new points from among all the points in the merged cluster, we select c points from the 2c scattered points of the two clusters being merged.
   - The new points are still fairly well scattered (see the sketch below).
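A sketch of that shortcut: when two clusters merge, the new representatives are drawn only from their 2c existing representatives; the selection heuristic mirrors the farthest-point idea, and using the pooled mean of the representatives is my simplification.

```python
import numpy as np

def merge_representatives(reps_u, reps_v, c=4, alpha=0.5):
    """Merge two clusters while touching only their 2c existing representatives:
    choose c well-scattered points among them and shrink toward the pooled mean."""
    pool = np.vstack([reps_u, reps_v])                    # the 2c candidate points
    mean = pool.mean(axis=0)                              # approximation of the merged mean
    chosen = [pool[np.argmax(np.linalg.norm(pool - mean, axis=1))]]
    while len(chosen) < min(c, len(pool)):
        dist_to_chosen = np.min([np.linalg.norm(pool - r, axis=1) for r in chosen], axis=0)
        chosen.append(pool[np.argmax(dist_to_chosen)])
    chosen = np.array(chosen)
    return chosen + alpha * (mean - chosen)

u = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0], [2.0, 2.0]])
v = np.array([[3.0, 3.0], [3.0, 5.0], [5.0, 3.0], [5.0, 5.0]])
print(merge_representatives(u, v))
```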
10. Example
11. Handling Outliers
   - CURE deals with outliers at different moments.
   - Random sampling filters out the majority of the outliers.
   - Outliers, due to their larger distance from other points, tend to merge with other points less and typically grow at a much slower rate than actual clusters. Thus, the number of points in a collection of outliers is typically much smaller than the number in a cluster.
   - So, first, the clusters that are growing very slowly are identified and eliminated.
   - Second, at the end of the growing process, very small clusters are eliminated (a sketch of both stages follows below).
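A minimal sketch of the two-stage elimination, assuming clusters are plain lists of member indices; the size thresholds are illustrative parameters, not values from the paper.

```python
def drop_outlier_clusters(clusters, stage, min_size_mid=3, min_size_final=5):
    """Two-stage outlier handling: partway through the merging process drop
    clusters that have grown very slowly (still only a couple of points);
    at the end of the process drop any remaining very small clusters."""
    threshold = min_size_mid if stage == "mid" else min_size_final
    return [c for c in clusters if len(c) >= threshold]

clusters = [[1, 2, 3, 4, 5, 6], [7], [8, 9], [10, 11, 12, 13]]
print(drop_outlier_clusters(clusters, stage="mid"))      # drops the 1- and 2-point groups
print(drop_outlier_clusters(clusters, stage="final"))    # stricter cut at the very end
```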
12. Labeling Data on Disk
   - The process of sampling the initial data set excludes the majority of the data points. These points must be assigned to one of the clusters created in the previous phases.
   - Each cluster is represented by a fraction of randomly selected representative points, and each point excluded in the first phase is assigned to the cluster whose representative point is closest (see the sketch below).
   - This method differs from BIRCH, which employs only the centroids of the clusters to "partition" the remaining points.
   - Since the space defined by a single centroid is a sphere, the BIRCH labeling phase has a tendency to split clusters when they have non-spherical shapes or non-uniform sizes.
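A minimal sketch of the labeling pass: every point left on disk is assigned to the cluster owning the nearest representative point; iterating over chunks is my way of suggesting the scan of the data that stayed on disk.

```python
import numpy as np

def label_points(chunks, cluster_reps):
    """Assign every point outside the sample to the cluster with the nearest
    representative point (rather than the nearest centroid, as BIRCH does)."""
    labels = []
    for chunk in chunks:                                  # chunks stand in for reads from disk
        for p in chunk:
            dists = [np.min(np.linalg.norm(reps - p, axis=1)) for reps in cluster_reps]
            labels.append(int(np.argmin(dists)))
    return labels

# two clusters, each described by a few shrunken representative points
cluster_reps = [np.array([[0.0, 0.0], [1.0, 1.0]]),
                np.array([[8.0, 8.0], [9.0, 9.0]])]
chunks = [np.array([[0.5, 0.2], [8.5, 8.7]]), np.array([[1.2, 0.9]])]
print(label_points(chunks, cluster_reps))                 # [0, 1, 0]
```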
13. Experimental Results
   - During the experimental phase CURE was compared with other clustering algorithms on the same data sets, and the results are plotted.
   - The algorithms used for comparison are BIRCH and MST (Minimum Spanning Tree, equivalent to CURE when the shrink factor is 0).
   - Data set 1 is formed by one big circular cluster, two small circular clusters and two ellipsoids connected by a dense chain of outliers.
   - Data set 2 is used for the execution-time comparison.
14. Experimental Results - Quality of Clustering
   - As we can see from the picture, BIRCH and MST compute a wrong result. BIRCH cannot distinguish between the big and the small clusters, and as a consequence it splits the big one. MST merges the two ellipsoids because it cannot handle the chain of outliers connecting them.
15. Experimental Results - Sensitivity to Parameters
   - Another parameter to take into account is the shrink factor α. Changing it leads to good or poor clustering quality, as we can see from the picture below.
16. Experimental Results - Execution Time
   - To compare the execution times of the two algorithms, data set 2 was chosen because BIRCH and CURE produce the same results on it.
   - Execution time is reported for an increasing number of data points: each cluster becomes denser as the points increase, but the geometry stays the same.
   - CURE is more than 50% less expensive because BIRCH scans the entire data set, whereas the CURE sample always counts 2500 units. For the CURE algorithm we must only add a very small contribution for sampling from a large data set.
17. Conclusion
   - We have seen that CURE can detect clusters with non-spherical shapes and wide variance in size by using a set of representative points for each cluster.
   - CURE also achieves good execution times on large databases by using random sampling and partitioning methods.
   - CURE works well when the database contains outliers: these are detected and eliminated.
18. Index
   - Introduction
   - Drawbacks of Traditional Clustering Algorithms
   - CURE algorithm
     - Contribution of CURE, ideas
     - CURE architecture
     - Random Sampling
     - Partitioning sample
     - Hierarchical Clustering Algorithm
     - Labeling Data on Disk
     - Handling Outliers
   - Example
   - Experimental Results
19. References
   - Sudipto Guha, Rajeev Rastogi, Kyuseok Shim. Cure: An Efficient Clustering Algorithm for Large Databases. Information Systems, Volume 26, Number 1, March 2001.
