K-Means Initialization
Transcript

  • 1. K-Means Clustering Problem. Ahmad Sabiq, Febri Maspiyanti, Indah Kuntum Khairina, Wiwin Farhania, Yonatan
  • 2. What is k-means? A method to partition n objects into k clusters based on their attributes. – Objects in the same cluster are close: their attributes are similar to each other. – Objects in different clusters are far apart: their attributes are very dissimilar.
  • 3. Algorithm. Input: n objects and an integer k (k ≤ n). Output: k clusters. Steps: 1. Select k initial centroids. 2. Calculate the distance between each object and each centroid. 3. Assign each object to the cluster with the nearest centroid. 4. Recalculate each centroid. 5. If the centroids don’t change, stop (convergence); otherwise, go back to step 2. Complexity: O(k · n · d · iterations), where d is the number of attributes.
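The steps above can be sketched in Python with NumPy. This is a minimal illustration, not a production implementation; it assumes the data X is an n × d array and that the initial centroids (the subject of the rest of the deck) are supplied by the caller:

```python
import numpy as np

def kmeans(X, centroids, max_iter=100):
    """Steps 2-5 of the slide: assign each object to its nearest
    centroid, recompute the centroids, repeat until they stop moving."""
    centroids = centroids.astype(float).copy()
    for _ in range(max_iter):
        # Step 2: n x k matrix of object-to-centroid distances.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        # Step 3: nearest-centroid assignment.
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid (keep the old one if a cluster empties).
        new_centroids = centroids.copy()
        for j in range(len(centroids)):
            members = X[labels == j]
            if len(members):
                new_centroids[j] = members.mean(axis=0)
        # Step 5: stop on convergence, otherwise iterate.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```

Each iteration costs O(k · n · d) for the distance matrix, which is where the stated complexity comes from.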
  • 4. Initialization. Why is it important? What does it affect? – The clustering result: a bad start can trap k-means in a poor local optimum. – The total number of iterations, and hence the running time.
  • 5. Good Initialization: 3 clusters, convergence in 2 iterations… (figure)
  • 6. Bad Initialization: 3 clusters, convergence in 4 iterations… (figure)
  • 7. Initialization Methods: 1. Random, 2. Forgy, 3. MacQueen, 4. Kaufman
  • 8. Random. Algorithm: 1. Assign each object to a random cluster. 2. Compute the initial centroid of each cluster.
  • 9.–11. Random (figures: random cluster assignment on the example data set; scatter plots with x axis 0–35, y axis 0–9)
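As a sketch (Python/NumPy; X is assumed to be an n × d array), the Random method might look like:

```python
import numpy as np

def random_init(X, k, rng=None):
    """Random method: give every object a random cluster label,
    then use each cluster's mean as its initial centroid."""
    if rng is None:
        rng = np.random.default_rng()
    while True:
        labels = rng.integers(k, size=len(X))
        # Re-draw if any cluster came up empty (possible for small n).
        if len(np.unique(labels)) == k:
            return np.array([X[labels == j].mean(axis=0) for j in range(k)])
```

Because the labels are uniform, each initial centroid is an average over a random sample of the data, so all k centroids tend to start near the overall data mean.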
  • 12. Forgy. Algorithm: 1. Choose k objects at random and use them as the initial centroids.
  • 13. Forgy (figure: scatter plot of the example data set, x axis 0–35, y axis 0–9)
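The Forgy method is a one-liner in spirit; a sketch under the same assumptions (Python/NumPy, X an n × d array):

```python
import numpy as np

def forgy_init(X, k, rng=None):
    """Forgy method: pick k distinct objects uniformly at random
    and use them directly as the initial centroids."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(X), size=k, replace=False)  # k distinct indices
    return X[idx]
```

Unlike the Random method, the initial centroids here are actual data points, so they are spread across the data rather than clustered near its mean.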
  • 14. MacQueen. Algorithm: 1. Choose k objects at random and use them as the initial centroids. 2. Assign each object to the cluster with the nearest centroid. 3. After each assignment, recalculate that cluster’s centroid.
  • 15.–24. MacQueen (figures: object-by-object assignment on the example data set, with the nearest centroid recomputed after each single assignment; scatter plots with x axis 0–35, y axis 0–9)
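The step-by-step figures above can be condensed into a short sketch (Python/NumPy; X assumed to be an n × d array). The key difference from Forgy is that the centroid update happens after every single assignment, via a running mean:

```python
import numpy as np

def macqueen_init(X, k, rng=None):
    """MacQueen method: seed with k random objects, then stream the
    remaining objects once, pulling the nearest centroid toward each
    object with a running-mean update immediately after assignment."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(X), size=k, replace=False)
    centroids = X[idx].astype(float)
    counts = np.ones(k)            # each seed object already counts once
    seeds = set(idx.tolist())
    for i in range(len(X)):
        if i in seeds:
            continue               # seeds are already assigned to themselves
        j = int(np.linalg.norm(centroids - X[i], axis=1).argmin())
        counts[j] += 1
        centroids[j] += (X[i] - centroids[j]) / counts[j]  # incremental mean
    return centroids
```

Because the centroids drift during the pass, the result depends on the order in which objects are visited, which is what the slide sequence illustrates.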
  • 25.–37. Kaufman (figures: step-by-step greedy selection on the example data set, showing for each object its distances d and D, e.g. d = 24.33 and D = 15.52, its contribution C starting at C = 0, and candidate totals ∑C ranging from 2.74 to 55.88; the candidate with the largest ∑C is selected)
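The figures trace Kaufman's greedy selection numerically. As described in reference [1], the rule behind the d, D and ∑C values is: the first centroid is the most centrally located object; then, with D_j the distance from object j to its nearest already-selected centroid and d_ji the distance from j to candidate i, each candidate gets the score ∑_j C_ji with C_ji = max(D_j − d_ji, 0), and the highest-scoring candidate is selected next. A sketch in Python/NumPy (X assumed to be an n × d array):

```python
import numpy as np

def kaufman_init(X, k):
    """Kaufman method: start from the most centrally located object,
    then greedily add the object whose selection most reduces the
    remaining objects' distances to their nearest chosen centroid."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # d_ji
    selected = [int(dist.sum(axis=1).argmin())]   # most central object first
    while len(selected) < k:
        D = dist[:, selected].min(axis=1)         # D_j: nearest selected centroid
        # Candidate i's total contribution: sum over j of max(D_j - d_ji, 0).
        gains = [np.maximum(D - dist[:, i], 0.0).sum()
                 if i not in selected else -1.0
                 for i in range(n)]
        selected.append(int(np.argmax(gains)))    # largest total C wins
    return X[selected]
```

This is the method with the greedy choice structure asked about in the questions slide: each centroid is chosen to maximize the immediate gain ∑C, with no backtracking.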
  • 38. References:
      1. J.M. Peña, J.A. Lozano, and P. Larrañaga. An Empirical Comparison of Four Initialization Methods for the K-Means Algorithm. Pattern Recognition Letters, vol. 20, pp. 1027–1040, 1999.
      2. J.R. Cano, O. Cordón, F. Herrera, and L. Sánchez. A Greedy Randomized Adaptive Search Procedure Applied to the Clustering Problem as an Initialization Process Using K-Means as a Local Search Procedure. Journal of Intelligent and Fuzzy Systems, vol. 12, pp. 235–242, 2002.
      3. L. Kaufman and P.J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. Wiley, 1990.
  • 39. Questions: 1. Why is initialization important in k-means? 2. Which initialization method has the greedy choice property? 3. Explain the O(nkd) complexity of the Random method.
