# Enhance The K Means Algorithm On Spatial Dataset



1. ENHANCED K-MEANS ALGORITHM ON SPATIAL DATASET
2. OVERVIEW
   - The k-means algorithm, introduced by J. B. MacQueen in 1967, is one of the most common clustering algorithms and one of the simplest unsupervised learning algorithms. It partitions feature vectors into k clusters so that the within-group sum of squares is minimized.
3. VARIATIONS OF THE K-MEANS ALGORITHM
   - There are several variants of the k-means clustering algorithm, but most involve an iterative scheme that operates over a fixed number of clusters while attempting to satisfy two properties:
   - Each cluster has a center, which is the mean position of all the samples in that cluster.
   - Each sample belongs to the cluster whose center it is closest to.
4. PROCEDURE OF THE K-MEANS ALGORITHM
   - Step 1: Place the initial group centroids randomly in the 2-D space.
   - Step 2: Assign each object to the group with the closest centroid.
   - Step 3: Recalculate the positions of the centroids.
   - Step 4: If the positions of the centroids did not change, go to the next step; otherwise go back to Step 2.
   - Step 5: End.
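The five steps above can be sketched in a few lines of Python. This is a minimal illustration, not the deck's own code; the sample points and `k = 2` are made-up values:

```python
import random

def kmeans(points, k, max_iter=100):
    # Step 1: choose k random points as the initial centroids
    centroids = random.sample(points, k)
    for _ in range(max_iter):
        # Step 2: assign each point to the group with the closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Step 3: recompute each centroid as the mean of its cluster
        new_centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        # Step 4: stop when no centroid moved; otherwise repeat from Step 2
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# two well-separated blobs of three points each
pts = [(1, 1), (1.5, 2), (8, 8), (9, 9), (1, 0.5), (8.5, 9.5)]
centroids, clusters = kmeans(pts, 2)
```

For this well-separated toy data the loop settles on the two three-point blobs regardless of which rows the random initialization picks.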
5. FLOW CHART (figure: flow chart of the k-means algorithm; image not reproduced in the transcript)
6. HOW THE K-MEANS ALGORITHM WORKS
   - It accepts two inputs: the number of clusters to group the data into, and the dataset to cluster.
   - It then creates the first K initial clusters (K = number of clusters needed) by choosing K rows of data at random from the dataset. For example, if there are 10,000 rows in the dataset and 3 clusters are needed, the first K = 3 initial clusters are created by selecting 3 records at random. Each of the 3 initial clusters starts with just one row of data.
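The random initialization described above is a one-liner in Python. A small sketch, with the 10,000 rows from the slide's example stood in by integer row ids:

```python
import random

rows = list(range(10_000))   # stand-in ids for the 10,000 data-set rows
k = 3                        # number of clusters needed
# choose K distinct rows at random; each becomes a one-record initial cluster
initial_clusters = random.sample(rows, k)
```

`random.sample` guarantees the K chosen rows are distinct, so no two initial clusters start from the same record.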
7. HOW THE K-MEANS ALGORITHM WORKS
   - K-means calculates the arithmetic mean of each cluster. The arithmetic mean of a cluster is the mean of all the individual records in it; each of the first K initial clusters holds only one record, so its mean is simply that record's values. For example, if the dataset is a set of Age, Height and Weight measurements for students at a university, a record P is represented as P = {Age, Height, Weight}. A record for a student John would be John = {20, 170, 80}, where John's age is 20 years, height 170 centimeters (1.70 meters) and weight 80 pounds. The arithmetic mean of an initial cluster whose only member is John is therefore {20, 170, 80}.
8. HOW THE K-MEANS ALGORITHM WORKS
   - Next, k-means assigns each record in the dataset to exactly one of the initial clusters: the nearest cluster (the one it is most similar to), using a distance or similarity measure such as the Euclidean distance or the Manhattan (city-block) distance.
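The two distance measures named above are easy to state concretely. A sketch, reusing the John/Henry records from the neighboring slides:

```python
def euclidean(p, q):
    # square root of the sum of squared coordinate differences
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    # sum of absolute coordinate differences (city-block distance)
    return sum(abs(a - b) for a, b in zip(p, q))

john = (20, 170, 80)
henry = (30, 160, 120)
d_euc = euclidean(john, henry)   # sqrt(100 + 100 + 1600)
d_man = manhattan(john, henry)   # 10 + 10 + 40
```

Either measure can drive the nearest-cluster assignment; Euclidean distance is the one the later pseudocode slides use.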
9. HOW THE K-MEANS ALGORITHM WORKS
   - Next, k-means re-assigns each record to the most similar cluster and recalculates the arithmetic mean of every cluster, i.e. the component-wise mean of all records in that cluster. For example, if a cluster contains the two records John = {20, 170, 80} and Henry = {30, 160, 120}, its mean is P_mean = {Age_mean, Height_mean, Weight_mean}, where Age_mean = (20 + 30)/2 = 25, Height_mean = (170 + 160)/2 = 165 and Weight_mean = (80 + 120)/2 = 100, so the cluster mean is {25, 165, 100}. This new mean becomes the center of the cluster, and new centers are computed in the same way for all existing clusters.
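The component-wise mean described above can be checked directly against the slide's John/Henry example (a minimal sketch, not part of the original deck):

```python
def cluster_mean(records):
    # component-wise arithmetic mean of all records in the cluster
    n = len(records)
    return tuple(sum(r[i] for r in records) / n for i in range(len(records[0])))

john = (20, 170, 80)
henry = (30, 160, 120)
center = cluster_mean([john, henry])   # {25, 165, 100} per the slide
```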
10. HOW THE K-MEANS ALGORITHM WORKS
    - K-means then re-assigns each record (data point) to exactly one of the newly formed clusters: the nearest cluster, again using a distance or similarity measure such as the Euclidean or Manhattan (city-block) distance.
    - The preceding steps are repeated until stable clusters are formed, at which point the procedure is complete. Clusters are stable when a new iteration no longer changes them, i.e. the center (arithmetic mean) of each cluster is the same as in the previous iteration. There are different techniques for deciding when stable clusters have formed and the procedure should terminate.
11. COMPUTATIONAL COMPLEXITY
    - NP-hard in a general d-dimensional Euclidean space, even for 2 clusters.
    - NP-hard for a general number of clusters k, even in the plane.
    - If k and d are fixed, the problem can be solved exactly in time O(n^(dk+1) log n), where n is the number of entities to be clustered.
12. ADVANTAGES
    - Relatively efficient: O(tkn), where n is the number of instances, k is the number of clusters and t is the number of iterations. Normally k, t << n.
    - Often terminates at a local optimum; the global optimum may be sought with techniques such as simulated annealing or genetic algorithms.
13. DISADVANTAGES
    - Applicable only when a mean is defined.
    - The number of clusters, k, must be specified in advance.
    - Unable to handle noisy data and outliers.
    - Not suited to discovering clusters with non-convex shapes.
14. K-MEANS FOR SPHERICAL CLUSTERS
    - Clustering is used for exploratory data analysis, for summary generation, and as a preprocessing step for other data-mining tasks.
    - Real clusters, however, may have arbitrary shapes and can be nested within one another.
15. EXAMPLES OF SUCH SHAPES
    - Chain-like patterns (representing active and inactive volcanoes).
    - (Figure: (a) chain-like patterns; (b) clusters detected by k-means.)
16. The k-means algorithm discovers spherical clusters whose center is the center of gravity of the points in that cluster. The center moves as new points are added to or removed from the cluster.
17. This motion brings the center closer to some points and farther from others; the points that become closer to the center stay in the cluster, while the points far from the center may change cluster.
18. SPHERICAL CLUSTERS WITH LARGE VARIANCE IN SIZE
    - The algorithm is suited to spherical clusters of similar sizes and densities.
    - The quality of the resulting clusters decreases when the dataset contains spherical clusters with large variance in size.
19. CONT.
    - The proposed method is based on shifting the center of the large cluster toward the small cluster and re-computing the membership of the small cluster's points.
20. SPATIAL AUTOCORRELATION
    - Spatial autocorrelation is determined both by similarities in position and by similarities in attributes.
21. ENHANCED K-MEANS ALGORITHM
    - Spatial databases hold huge amounts of collected and stored data, which increases the need for effective analysis methods.
    - Cluster analysis is one of the primary data-analysis tasks.
22. Goal of the enhancement: improve the computational speed of the k-means algorithm by using a simple data structure (e.g. arrays) to keep some information from each iteration for use in the next iteration.
23. Standard k-means computes the distance between every data point and all centers, which is computationally very expensive. Why not benefit from the previous iteration of the algorithm?
24. The auxiliary structure keeps one row per point:

    | Point_ID | K_ID | Distance |
    |----------|------|----------|
25. For each data point we keep the distance to its nearest cluster. This saves the time otherwise spent computing distances to the other k − 1 cluster centers:

        if (new distance <= previous distance)
            the point stays in its cluster;
        else
            run the usual k-means assignment;
26. FUNCTION "DISTANCE": keeps the number of the closest cluster and the distance to it.

        function distance()            // assign each point to its nearest cluster
            for i = 1 to n
                for j = 1 to k
                    compute squared Euclidean distance d2(xi, mj)
                endfor
                find the closest centroid mj to xi
                mj = mj + xi;  nj = nj + 1
                MSE = MSE + d2(xi, mj)
                Clusterid[i] = number of the closest centroid
                Pointdis[i] = distance to the closest centroid
            endfor
            for j = 1 to k
                mj = mj / nj
            endfor
27. FUNCTION "DISTANCE_NEW": no need to compute the distances to the other k − 1 centers.

        function distance_new()        // assign each point to its nearest cluster
            for i = 1 to n
                compute squared Euclidean distance d2(xi, m(Clusterid[i]))
                if d2(xi, m(Clusterid[i])) <= Pointdis[i]
                    the point stays in its cluster
                else
                    for j = 1 to k
                        compute squared Euclidean distance d2(xi, mj)
                    endfor
                    find the closest centroid mj to xi
                    mj = mj + xi;  nj = nj + 1
                    MSE = MSE + d2(xi, mj)
                    Clusterid[i] = number of the closest centroid
                    Pointdis[i] = distance to the closest centroid
                endif
            endfor
            for j = 1 to k
                mj = mj / nj
            endfor
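The enhanced assignment step in the pseudocode above can be sketched as runnable Python. This is an illustrative reconstruction, not the paper's implementation; the function name `assign_enhanced` and the returned `full_scans` counter are my own additions for demonstration:

```python
import math

def assign_enhanced(points, centroids, cluster_id, point_dis):
    """One enhanced assignment pass.  cluster_id[i] and point_dis[i] hold the
    previous iteration's nearest-cluster id and distance for point i."""
    full_scans = 0
    for i, p in enumerate(points):
        d = math.dist(p, centroids[cluster_id[i]])
        if d <= point_dis[i]:
            # the point is no farther from its old center: it stays put,
            # and no distances to the other k-1 centers are computed
            point_dis[i] = d
        else:
            # otherwise fall back to the ordinary scan over all k centers
            full_scans += 1
            j = min(range(len(centroids)), key=lambda c: math.dist(p, centroids[c]))
            cluster_id[i] = j
            point_dis[i] = math.dist(p, centroids[j])
    return full_scans
```

The counter makes the saving visible: on iterations where most centers barely move, most points take the O(1) branch and `full_scans` stays small.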
28. COMPLEXITY
    - K-means complexity: O(nkl), where n is the number of points, k is the number of clusters and l is the number of iterations.
29. If a point stays in its cluster, the check requires O(1); otherwise the full assignment requires O(k). If we suppose that half of the points move between clusters, an iteration requires O(nk/2); and since the algorithm converges to a local minimum, the number of points that move between clusters decreases with each iteration.
30. So the expected total cost is nk Σ_{i=1..l} 1/i. Even for a large number of iterations, nk Σ_{i=1..l} 1/i is much less than nkl. Enhanced k-means algorithm complexity: O(nk).
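The gap between the two cost formulas above is easy to check numerically; the partial sum Σ 1/i grows only like ln(l). A quick sanity check with made-up sizes (n, k and l here are illustrative, not from the paper):

```python
# compare the claimed cost n*k*sum(1/i) with plain n*k*l
n, k, l = 10_000, 10, 50                          # made-up sizes for illustration
harmonic = sum(1 / i for i in range(1, l + 1))    # H_l ~ ln(l) + 0.577, about 4.5 here
enhanced_cost = n * k * harmonic                  # expected cost of the enhanced version
standard_cost = n * k * l                         # cost of standard k-means
```

With these numbers the enhanced estimate is roughly a tenth of the standard one, and the ratio keeps improving as l grows, since H_l grows logarithmically while l grows linearly.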