Data Mining:
5. Clustering Algorithms
Romi Satria Wahono
romi@romisatriawahono.net
http://romisatriawahono.net/dm
WA/SMS: +6281586220090
Romi Satria Wahono
• SMA Taruna Nusantara Magelang (1993)
• B.Eng, M.Eng and Ph.D in Software Engineering
Saitama University Japan (1994-2004)
Universiti Teknikal Malaysia Melaka (2014)
• Research Interests in Software Engineering and
Machine Learning
• LIPI Researcher (2004-2007)
• Founder and CEO:
• PT IlmuKomputerCom Braindevs Sistema
• PT Brainmatics Cipta Informatika
• Professional Member of IEEE, ACM and PMI
• IT Award Winner: WSIS (United Nations), Kemdikbud,
LIPI, etc
• Industrial IT Certifications: TOGAF, ITIL, CCNA, etc
• Enterprise Architecture Consultant: KPK, UNSRI, LIPI, Ristek
Dikti, DJPK Kemenkeu, Kemsos, INSW, etc
1. Introduction to Data Mining
2. Data Mining Process
3. Data Preparation
4. Classification Algorithms
5. Clustering Algorithms
6. Association Algorithms
7. Estimation and Forecasting Algorithms
8. Text Mining
Course Outline
5. Clustering Algorithms
5.1 Cluster Analysis: Basic Concepts
5.2 Partitioning Methods
5.3 Hierarchical Methods
5.4 Density-Based Methods
5.5 Grid-Based Methods
5.6 Evaluation of Clustering
5.1 Cluster Analysis: Basic Concepts
• Cluster: A collection of data objects
• similar (or related) to one another within the same
group
• dissimilar (or unrelated) to the objects in other groups
• Cluster analysis (or clustering, data segmentation, …)
• Finding similarities between data according to the
characteristics found in the data and grouping similar
data objects into clusters
• Unsupervised learning: no predefined classes (i.e., learning
by observations vs. learning by examples: supervised)
• Typical applications
• As a stand-alone tool to get insight into data distribution
• As a preprocessing step for other algorithms
What is Cluster Analysis?
• Data reduction
• Summarization: Preprocessing for regression, PCA,
classification, and association analysis
• Compression: Image processing: vector quantization
• Hypothesis generation and testing
• Prediction based on groups
• Cluster & find characteristics/patterns for each group
• Finding K-nearest Neighbors
• Localizing search to one or a small number of clusters
• Outlier detection: Outliers are often viewed as those “far
away” from any cluster
Applications of Cluster Analysis
• Biology: taxonomy of living things: kingdom, phylum,
class, order, family, genus and species
• Information retrieval: document clustering
• Land use: Identification of areas of similar land use in an
earth observation database
• Marketing: Help marketers discover distinct groups in
their customer bases, and then use this knowledge to
develop targeted marketing programs
• City-planning: Identifying groups of houses according to
their house type, value, and geographical location
• Earthquake studies: Observed earthquake epicenters
should be clustered along continent faults
• Climate: understanding the earth's climate; finding
patterns in atmospheric and ocean data
• Economic science: market research
Clustering: Application Examples
• Feature selection
• Select info concerning the task of interest
• Minimal information redundancy
• Proximity measure
• Similarity of two feature vectors
• Clustering criterion
• Expressed via a cost function or some rules
• Clustering algorithms
• Choice of algorithms
• Validation of the results
• Validation test (also, clustering tendency test)
• Interpretation of the results
• Integration with applications
Basic Steps to Develop a Clustering Task
•A good clustering method will produce high quality
clusters
• high intra-class similarity: cohesive within clusters
• low inter-class similarity: distinctive between clusters
•The quality of a clustering method depends on
• the similarity measure used by the method
• its implementation, and
• its ability to discover some or all of the hidden patterns
Quality: What Is Good Clustering?
• Dissimilarity/Similarity metric
• Similarity is expressed in terms of a distance function,
typically metric: d(i, j)
• The definitions of distance functions are usually rather
different for interval-scaled, boolean, categorical,
ordinal, ratio, and vector variables
• Weights should be associated with different variables
based on applications and data semantics
• Quality of clustering:
• There is usually a separate “quality” function that
measures the “goodness” of a cluster.
• It is hard to define “similar enough” or “good enough”
• The answer is typically highly subjective
Measure the Quality of Clustering
• Partitioning criteria
• Single level vs. hierarchical partitioning (often, multi-level
hierarchical partitioning is desirable)
• Separation of clusters
• Exclusive (e.g., one customer belongs to only one region) vs.
non-exclusive (e.g., one document may belong to more than one
class)
• Similarity measure
• Distance-based (e.g., Euclidean, road network, vector) vs.
connectivity-based (e.g., density or contiguity)
• Clustering space
• Full space (often when low dimensional) vs. subspaces (often in
high-dimensional clustering)
Considerations for Cluster Analysis
• Scalability
• Clustering all the data instead of only on samples
• Ability to deal with different types of attributes
• Numerical, binary, categorical, ordinal, linked, and mixture of
these
• Constraint-based clustering
• User may give inputs on constraints
• Use domain knowledge to determine input parameters
• Interpretability and usability
• Others
• Discovery of clusters with arbitrary shape
• Ability to deal with noisy data
• Incremental clustering and insensitivity to input order
• High dimensionality
Requirements and Challenges
• Partitioning approach:
• Construct various partitions and then evaluate them by some
criterion, e.g., minimizing the sum of square errors
• Typical methods: k-means, k-medoids, CLARANS
• Hierarchical approach:
• Create a hierarchical decomposition of the set of data (or
objects) using some criterion
• Typical methods: DIANA, AGNES, BIRCH, CHAMELEON
• Density-based approach:
• Based on connectivity and density functions
• Typical methods: DBSCAN, OPTICS, DENCLUE
• Grid-based approach:
• based on a multiple-level granularity structure
• Typical methods: STING, WaveCluster, CLIQUE
Major Clustering Approaches 1
• Model-based:
• A model is hypothesized for each of the clusters, and the
idea is to find the best fit of the data to the given model
• Typical methods: EM, SOM, COBWEB
• Frequent pattern-based:
• Based on the analysis of frequent patterns
• Typical methods: p-Cluster
• User-guided or constraint-based:
• Clustering by considering user-specified or
application-specific constraints
• Typical methods: COD (obstacles), constrained clustering
• Link-based clustering:
• Objects are often linked together in various ways
• Massive links can be used to cluster objects: SimRank,
LinkClus
Major Clustering Approaches 2
5.2 Partitioning Methods
• Partitioning method: Partitioning a database D of n objects
into a set of k clusters, such that the sum of squared
distances is minimized (where ci is the centroid or medoid
of cluster Ci)
• Given k, find a partition of k clusters that optimizes the
chosen partitioning criterion
• Global optimal: exhaustively enumerate all partitions
• Heuristic methods: k-means and k-medoids algorithms
• k-means (MacQueen’67, Lloyd’57/’82): Each cluster is represented by the
center of the cluster
• k-medoids or PAM (Partition around medoids) (Kaufman &
Rousseeuw’87): Each cluster is represented by one of the objects in the
cluster
Partitioning Algorithms: Basic Concept
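The "sum of squared distances" criterion above is commonly written as the within-cluster SSE; one standard formulation, using the ci/Ci notation of this slide, is:

```latex
E = \sum_{i=1}^{k} \sum_{p \in C_i} d(p, c_i)^2
```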
• Given k, the k-means algorithm is implemented in four
steps:
1. Partition objects into k nonempty subsets
2. Compute seed points as the centroids of the clusters
of the current partitioning (the centroid is the center,
i.e., mean point, of the cluster)
3. Assign each object to the cluster with the nearest seed
point
4. Go back to Step 2, stop when the assignment does not
change
The K-Means Clustering Method
An Example of K-Means Clustering (K = 2)
(figure: the initial data set is arbitrarily partitioned into k groups; the cluster centroids are updated, objects are reassigned, and the loop repeats as needed)
■ Partition objects into k nonempty subsets
■ Repeat
■ Compute the centroid (i.e., mean point) of each partition
■ Assign each object to the cluster of its nearest centroid
■ Until no change
1. Choose the desired number of clusters, k
2. Initialize the k cluster centers (centroids) at random
3. Assign each data object to the nearest cluster. The proximity of
two objects is determined by their distance. The distance used in
the k-Means algorithm is the Euclidean distance:
d(x, y) = √((x1 − y1)² + (x2 − y2)² + … + (xn − yn)²)
where x = x1, x2, . . . , xn and y = y1, y2, . . . , yn are two records
with n attributes (columns)
4. Recompute the cluster centers using the current cluster
memberships. The cluster center is the mean of all data objects
in that cluster
5. Reassign each object using the new cluster centers. If the cluster
centers no longer change, the clustering process is finished.
Otherwise, return to step 3 until the cluster centers no longer
change (are stable) or there is no significant decrease in the SSE
(Sum of Squared Errors)
Steps of the k-Means Algorithm
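As an illustration only (not the RapidMiner workflow used in the exercises below), the five steps can be sketched in a few lines of NumPy; the data points here are hypothetical:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: initialize k centroids by picking k random data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign each object to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid as the mean of its members
        # (a fuller version would guard against empty clusters)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 5: stop when the centroids no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    labels = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2).argmin(axis=1)
    sse = ((X - centroids[labels]) ** 2).sum()  # Sum of Squared Errors
    return labels, centroids, sse

# Hypothetical 2-D data, k = 2
X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [1, 2], [4, 2], [1, 3], [2, 3]], dtype=float)
print(kmeans(X, k=2))
```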
1. Set the number of clusters to k = 2
2. Choose the initial centroids at random, e.g., from the data shown:
m1 = (1, 1), m2 = (2, 1)
3. Assign each object to the cluster whose centroid is closest
(smallest distance). The result: members of
cluster1 = {A, E, G}, cluster2 = {B, C, D, F, H}
The SSE value is computed as shown in the figure
Example Case – Iteration 1
4. Compute the new centroid values
5. Reassign each object using the new cluster centers
The new SSE value is computed as shown in the figure
Iteration 2
4. The cluster memberships have changed: cluster1 = {A, E, G, H},
cluster2 = {B, C, D, F}, so compute the new centroids:
m1 = (1.25, 1.75) and m2 = (4, 2.75)
5. Reassign each object using the new cluster centers
The new SSE value is computed as shown in the figure
Iteration 3
• As can be seen in the table, there are no further membership
changes in either cluster
• The final result:
cluster1 = {A, E, G, H} and
cluster2 = {B, C, D, F},
with SSE = 6.25 and 3 iterations
Final Result
•Run the experiment following the book: Matthew North,
Data Mining for the Masses, 2012, Chapter 6 k-Means
Clustering, pp. 91-103 (CoronaryHeartDisease.csv)
•Draw a chart, choosing Scatter 3D Color, to visualize the
clustering results
•Analyze what Sonia has done, and what are the benefits
of k-Means clustering for her work?
Exercise
•Measure performance using Cluster Distance
Performance to obtain the Davies-Bouldin Index (DBI)
•A lower DBI value means the clusters we have formed
are better
Exercise
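Outside RapidMiner, the same check can be sketched with scikit-learn, whose davies_bouldin_score computes the DBI; the blob data here is synthetic, standing in for the exercise dataset:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

# Synthetic stand-in for the exercise data
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=42).fit_predict(X)
    # Lower DBI = more compact, better-separated clusters
    print(k, davies_bouldin_score(X, labels))
```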
•Cluster the data in IMFdata.csv
(http://romisatriawahono.net/lecture/dm/dataset)
Exercise
• Strength:
• Efficient: O(tkn), where n is # objects, k is # clusters, and t is #
iterations. Normally, k, t << n
• Comparing: PAM: O(k(n−k)²), CLARA: O(ks² + k(n−k))
• Comment: often terminates at a local optimum
• Weakness:
• Applicable only to objects in a continuous n-dimensional space
• Use the k-modes method for categorical data
• In comparison, k-medoids can be applied to a wide range of data
• Need to specify k, the number of clusters, in advance (there are
ways to automatically determine the best k; see Hastie et al., 2009)
• Sensitive to noisy data and outliers
• Not suitable for discovering clusters with non-convex shapes
Comments on the K-Means Method
• Most of the variants of k-means differ in
• Selection of the initial k means
• Dissimilarity calculations
• Strategies to calculate cluster means
• Handling categorical data: k-modes
• Replacing means of clusters with modes
• Using new dissimilarity measures to deal with
categorical objects
• Using a frequency-based method to update modes of
clusters
• A mixture of categorical and numerical data: k-prototype
method
Variations of the K-Means Method
• The k-means algorithm is sensitive to outliers!
• Since an object with an extremely large value may substantially
distort the distribution of the data
• K-Medoids:
• Instead of taking the mean value of the objects in a cluster as a
reference point, medoids can be used: the medoid is the most
centrally located object in a cluster
What Is the Problem of the K-Means Method?
PAM: A Typical K-Medoids Algorithm (K = 2)
(figure: 10×10 scatter plots annotated with total cost = 20 and total cost = 26)
1. Arbitrarily choose k objects as the initial medoids
2. Assign each remaining object to the nearest medoid
3. Randomly select a non-medoid object, Orandom
4. Compute the total cost of swapping a medoid O with Orandom
5. Swap O and Orandom if the quality is improved
6. Loop until no change
•K-Medoids Clustering: Find representative objects
(medoids) in clusters
•PAM (Partitioning Around Medoids, Kaufmann &
Rousseeuw 1987)
• Starts from an initial set of medoids and iteratively
replaces one of the medoids by one of the non-medoids
if it improves the total distance of the resulting
clustering
• PAM works effectively for small data sets, but does not
scale well for large data sets (due to the computational
complexity)
•Efficiency improvement on PAM
• CLARA (Kaufmann & Rousseeuw, 1990): PAM on
samples
• CLARANS (Ng & Han, 1994): Randomized re-sampling
The K-Medoid Clustering Method
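A minimal sketch of the k-medoids idea, using an alternating-update (Voronoi-iteration) scheme rather than the full PAM swap search; the data is hypothetical and includes an outlier that a mean-based method would be pulled toward:

```python
import numpy as np

def k_medoids(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(max_iter):
        labels = D[:, medoids].argmin(axis=1)   # assign to the nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            # each medoid assigns to itself, so no cluster is empty
            members = np.where(labels == j)[0]
            # the medoid is the member minimizing total distance to its cluster
            new_medoids[j] = members[D[np.ix_(members, members)].sum(axis=1).argmin()]
        if np.array_equal(np.sort(new_medoids), np.sort(medoids)):
            break
        medoids = new_medoids
    return labels, medoids

X = np.array([[1, 1], [1, 2], [2, 2], [8, 8], [8, 9], [25, 25]], dtype=float)
labels, medoids = k_medoids(X, k=2)
print(labels, X[medoids])  # medoids stay on actual data points, resisting the outlier
```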
5.3 Hierarchical Methods
• Use distance matrix as clustering criteria
• This method does not require the number of clusters k as an
input, but needs a termination condition
Hierarchical Clustering
Step 0 → Step 4: agglomerative (AGNES)
Step 4 → Step 0: divisive (DIANA)
(figure: objects a, b, c, d, e are successively merged into {a, b}, {d, e}, {c, d, e}, and finally {a, b, c, d, e})
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical packages, e.g., Splus
• Use the single-link method and the dissimilarity matrix
• Merge nodes that have the least dissimilarity
• Go on in a non-descending fashion
• Eventually all nodes belong to the same cluster
AGNES (Agglomerative Nesting)
Dendrogram: Shows How Clusters are Merged
Decompose data objects into several levels of nested partitioning (a tree of
clusters), called a dendrogram
A clustering of the data objects is obtained by cutting the dendrogram at the
desired level; each connected component then forms a cluster
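As a small illustration, SciPy's hierarchy module builds such a dendrogram and cuts it at a desired level; the points and the single-link choice are illustrative:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

X = np.array([[1, 1], [1.5, 1], [5, 5], [5.5, 5], [9, 1]])  # toy points
Z = linkage(X, method='single')   # agglomerative merges, single-link (as in AGNES)
clusters = fcluster(Z, t=2, criterion='maxclust')  # cut the tree into 2 clusters
print(clusters)
# dendrogram(Z)  # with matplotlib, draws the merge tree itself
```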
•Introduced in Kaufmann and Rousseeuw (1990)
•Implemented in statistical analysis packages, e.g.,
Splus
•Inverse order of AGNES
•Eventually each node forms a cluster on its own
DIANA (Divisive Analysis)
• Single link: smallest distance between an element in one cluster and an
element in the other, i.e., dist(Ki, Kj) = min(tip, tjq)
• Complete link: largest distance between an element in one cluster and an
element in the other, i.e., dist(Ki, Kj) = max(tip, tjq)
• Average: average distance between an element in one cluster and an element
in the other, i.e., dist(Ki, Kj) = avg(tip, tjq)
• Centroid: distance between the centroids of two clusters, i.e.,
dist(Ki, Kj) = dist(Ci, Cj)
• Medoid: distance between the medoids of two clusters, i.e.,
dist(Ki, Kj) = dist(Mi, Mj)
• Medoid: a chosen, centrally located object in the cluster
Distance between Clusters
• Centroid: the “middle” of a
cluster
• Radius: square root of average
distance from any point of the
cluster to its centroid
• Diameter: square root of
average mean squared distance
between all pairs of points in the
cluster
Centroid, Radius and Diameter of a Cluster
(for numerical data sets)
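In the usual notation, for a cluster of N points ti with centroid cm, these three quantities can be written as:

```latex
c_m = \frac{\sum_{i=1}^{N} t_i}{N}, \qquad
R_m = \sqrt{\frac{\sum_{i=1}^{N} (t_i - c_m)^2}{N}}, \qquad
D_m = \sqrt{\frac{\sum_{i=1}^{N}\sum_{j=1}^{N} (t_i - t_j)^2}{N(N-1)}}
```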
• Major weakness of agglomerative clustering methods
• Can never undo what was done previously
• Do not scale well: time complexity of at least O(n²),
where n is the number of total objects
• Integration of hierarchical & distance-based clustering
• BIRCH (1996): uses CF-tree and incrementally adjusts
the quality of sub-clusters
• CHAMELEON (1999): hierarchical clustering using
dynamic modeling
Extensions to Hierarchical Clustering
• Zhang, Ramakrishnan & Livny, SIGMOD’96
• Incrementally construct a CF (Clustering Feature) tree, a hierarchical
data structure for multiphase clustering
• Phase 1: scan DB to build an initial in-memory CF tree (a multi-level
compression of the data that tries to preserve the inherent
clustering structure of the data)
• Phase 2: use an arbitrary clustering algorithm to cluster the leaf
nodes of the CF-tree
• Scales linearly: finds a good clustering with a single scan and improves
the quality with a few additional scans
• Weakness: handles only numeric data, and sensitive to the order of the
data record
BIRCH (Balanced Iterative Reducing and
Clustering Using Hierarchies)
• Clustering Feature (CF): CF = (N, LS, SS)
• N: number of data points
• LS: linear sum of the N points: LS = Σ xi
• SS: square sum of the N points: SS = Σ xi²
Clustering Feature Vector in BIRCH
CF = (5, (16,30),(54,190))
(3,4)
(2,6)
(4,5)
(4,7)
(3,8)
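A quick sketch confirming that CF triple for the five points above:

```python
import numpy as np

points = np.array([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)])
N = len(points)                 # 5
LS = points.sum(axis=0)         # linear sum  -> (16, 30)
SS = (points ** 2).sum(axis=0)  # square sum  -> (54, 190)
print(N, LS, SS)
```

CF vectors are additive, which is what lets BIRCH merge sub-clusters by simply adding their CF entries.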
•Clustering feature:
• Summary of the statistics for a given subcluster: the
0-th, 1st, and 2nd moments of the subcluster from the
statistical point of view
• Registers crucial measurements for computing clusters
and utilizes storage efficiently
•A CF tree is a height-balanced tree that stores the
clustering features for a hierarchical clustering
• A nonleaf node in a tree has descendants or “children”
• The nonleaf nodes store sums of the CFs of their
children
•A CF tree has two parameters
• Branching factor: max # of children
• Threshold: max diameter of sub-clusters stored at the
leaf nodes
CF-Tree in BIRCH
The CF Tree Structure
(figure: a root with entries CF1 … CF6 and child pointers; a non-leaf node with entries CF1 … CF5 and child pointers; leaf nodes holding CF entries, chained by prev/next pointers; branching factor B = 7, max leaf entries L = 6)
• Cluster diameter: √( Σi Σj (xi − xj)² / (n(n−1)) )
• For each point in the input:
• Find the closest leaf entry
• Add the point to the leaf entry and update the CF
• If the entry diameter > max_diameter, split the leaf, and
possibly the parents
• The algorithm is O(n)
• Concerns:
• Sensitive to the insertion order of data points
• Since the size of leaf nodes is fixed, clusters may not be so
natural
• Clusters tend to be spherical given the radius and diameter
measures
The Birch Algorithm
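scikit-learn ships a Birch implementation whose parameters roughly mirror the CF-tree parameters above (threshold ~ sub-cluster size limit at the leaves, branching_factor ~ max children); the values and data here are illustrative:

```python
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=5, random_state=3)

birch = Birch(threshold=0.5, branching_factor=50, n_clusters=5).fit(X)
print(birch.labels_[:10])
```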
• CHAMELEON: G. Karypis, E. H. Han, and V. Kumar, 1999
• Measures the similarity based on a dynamic model
• Two clusters are merged only if the interconnectivity
and closeness (proximity) between two clusters are
high relative to the internal interconnectivity of the
clusters and closeness of items within the clusters
• Graph-based, and a two-phase algorithm
1. Use a graph-partitioning algorithm: cluster objects into
a large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering
algorithm: find the genuine clusters by repeatedly
combining these sub-clusters
CHAMELEON: Hierarchical Clustering Using
Dynamic Modeling (1999)
• k-nearest-neighbor graphs built from the original data in 2D
• EC{Ci,Cj}: the absolute interconnectivity between Ci and Cj:
the sum of the weights of the edges that connect vertices in
Ci to vertices in Cj
• Internal interconnectivity of a cluster Ci: the size of its
min-cut bisector ECCi (i.e., the weighted sum of the edges that
partition the graph into two roughly equal parts)
• Relative interconnectivity (RI):
RI(Ci, Cj) = |EC{Ci,Cj}| / ((|ECCi| + |ECCj|) / 2)
KNN Graphs & Interconnectivity
• Relative closeness (RC) between a pair of clusters Ci and Cj:
the absolute closeness between Ci and Cj normalized
w.r.t. the internal closeness of the two clusters:
RC(Ci, Cj) = S̄EC{Ci,Cj} / ( (|Ci|/(|Ci|+|Cj|)) S̄ECCi + (|Cj|/(|Ci|+|Cj|)) S̄ECCj )
where S̄ECCi and S̄ECCj are the average weights of the edges
that belong to the min-cut bisectors of clusters Ci and Cj,
respectively, and S̄EC{Ci,Cj} is the average weight of the
edges that connect vertices in Ci to vertices in Cj
• Merge sub-clusters:
• Merge only those pairs of clusters whose RI and RC are both
above some user-specified thresholds
• Merge those maximizing the function that combines RI and RC
Relative Closeness & Merge of Sub-Clusters
Overall Framework of CHAMELEON
(figure: Data Set → construct a sparse K-NN graph → partition the graph → merge partitions → final clusters)
• K-NN graph: p and q are connected if q is among the top k
closest neighbors of p
• Relative interconnectivity: connectivity of c1 and c2 over
internal connectivity
• Relative closeness: closeness of c1 and c2 over internal
closeness
CHAMELEON (Clustering Complex Objects)
(figure: example CHAMELEON results on complex 2-D data sets)
• Algorithmic hierarchical clustering
• Nontrivial to choose a good distance measure
• Hard to handle missing attribute values
• Optimization goal not clear: heuristic, local search
• Probabilistic hierarchical clustering
• Use probabilistic models to measure distances between
clusters
• Generative model: Regard the set of data objects to be
clustered as a sample of the underlying data generation
mechanism to be analyzed
• Easy to understand, same efficiency as algorithmic
agglomerative clustering method, can handle partially
observed data
• In practice, assume the generative models adopt
common distribution functions, e.g., Gaussian
distribution or Bernoulli distribution, governed by
parameters
Probabilistic Hierarchical Clustering
• Given a set of 1-D points X = {x1, …, xn} for clustering
analysis, assume they are generated by a Gaussian
distribution N(μ, σ²)
• The probability that a point xi ∈ X is generated by the
model: P(xi | μ, σ²) = (1/√(2πσ²)) · e^(−(xi − μ)²/(2σ²))
• The likelihood that X is generated by the model:
L(N(μ, σ²) : X) = ∏i P(xi | μ, σ²)
• The task of learning the generative model: find the
parameters μ and σ² such that the likelihood is maximized
(the maximum likelihood)
Generative Model
Gaussian Distribution
(figures: a bean machine dropping balls through pins, a 1-d Gaussian, and a 2-d Gaussian; from Wikipedia and http://home.dei.polimi.it)
• For a set of objects partitioned into m clusters C1, . . . , Cm, the quality
can be measured by Q({C1, …, Cm}) = ∏i P(Ci),
where P() is the maximum likelihood
• If we merge two clusters Cj1 and Cj2 into a cluster Cj1 ∪ Cj2, the
change in quality of the overall clustering is
Q((C \ {Cj1, Cj2}) ∪ {Cj1 ∪ Cj2}) − Q(C)
• Distance between clusters C1 and C2:
dist(C1, C2) = −log( P(C1 ∪ C2) / (P(C1) · P(C2)) )
A Probabilistic Hierarchical Clustering Algorithm
5.4 Density-Based Methods
• Clustering based on density (local cluster criterion), such as
density-connected points
• Major features:
• Discover clusters of arbitrary shape
• Handle noise
• One scan
• Need density parameters as termination condition
• Several interesting studies:
• DBSCAN: Ester, et al. (KDD’96)
• OPTICS: Ankerst, et al (SIGMOD’99).
• DENCLUE: Hinneburg & D. Keim (KDD’98)
• CLIQUE: Agrawal, et al. (SIGMOD’98) (more grid-based)
Density-Based Clustering Methods
• Two parameters:
• Eps: maximum radius of the neighbourhood
• MinPts: minimum number of points in an
Eps-neighbourhood of that point
• NEps(q) = {p belongs to D | dist(p, q) ≤ Eps}
• Directly density-reachable: a point p is directly
density-reachable from a point q w.r.t. Eps, MinPts if
• p belongs to NEps(q)
• core point condition: |NEps(q)| ≥ MinPts
Density-Based Clustering: Basic Concepts
(figure: p in the Eps-neighbourhood of q; MinPts = 5, Eps = 1 cm)
• Density-reachable:
• A point p is density-reachable from a point q w.r.t.
Eps, MinPts if there is a chain of points p1, …, pn with
p1 = q and pn = p such that pi+1 is directly
density-reachable from pi
• Density-connected:
• A point p is density-connected to a point q w.r.t.
Eps, MinPts if there is a point o such that both p and q
are density-reachable from o w.r.t. Eps and MinPts
Density-Reachable and Density-Connected
(figures: a chain q = p1 → … → p; p and q both density-reachable from o)
•Relies on a density-based notion of cluster: a
cluster is defined as a maximal set of
density-connected points
•Discovers clusters of arbitrary shape in spatial
databases with noise
DBSCAN: Density-Based Spatial Clustering of
Applications with Noise
(figure: core, border, and outlier points; Eps = 1 cm, MinPts = 5)
1. Arbitrarily select a point p
2. Retrieve all points density-reachable from p w.r.t. Eps and
MinPts
3. If p is a core point, a cluster is formed
4. If p is a border point, no points are density-reachable from
p, and DBSCAN visits the next point of the database
5. Continue the process until all of the points have been
processed
If a spatial index is used, the computational complexity of DBSCAN is
O(n log n), where n is the number of database objects. Otherwise, the
complexity is O(n²)
DBSCAN: The Algorithm
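A short scikit-learn sketch of the procedure; the two-moons data and the eps/min_samples values are illustrative, chosen to show the arbitrary-shape strength:

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Non-convex data, where k-means struggles but DBSCAN does well
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

db = DBSCAN(eps=0.2, min_samples=5).fit(X)  # Eps and MinPts
print(set(db.labels_))  # -1 marks noise/outlier points
```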
DBSCAN: Sensitive to Parameters
http://webdocs.cs.ualberta.ca/~yaling/Cluster/Applet/Code/Cluster.html
• OPTICS: Ordering Points To Identify the Clustering Structure
• Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
• Produces a special order of the database wrt its
density-based clustering structure
• This cluster-ordering contains info equiv to the
density-based clusterings corresponding to a broad
range of parameter settings
• Good for both automatic and interactive cluster
analysis, including finding intrinsic clustering structure
• Can be represented graphically or using visualization
techniques
OPTICS: A Cluster-Ordering Method (1999)
• Index-based: k = # of dimensions, N = # of points
• Complexity: O(N log N)
• Core distance of an object p: the smallest value ε such that
the ε-neighborhood of p has at least MinPts objects
Let Nε(p) be the ε-neighborhood of p, where ε is a distance value:
core-distanceε,MinPts(p) = Undefined if card(Nε(p)) < MinPts;
MinPts-distance(p) otherwise
• Reachability distance of object p from core object q: the
minimum radius value that makes p density-reachable from q:
reachability-distanceε,MinPts(p, q) =
Undefined if q is not a core object;
max(core-distance(q), distance(q, p)) otherwise
OPTICS: Some Extension from DBSCAN
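scikit-learn's OPTICS exposes exactly these quantities; a minimal sketch (data and min_samples illustrative):

```python
from sklearn.cluster import OPTICS
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
opt = OPTICS(min_samples=5).fit(X)
# reachability_ in ordering_ order is the reachability plot shown next
print(opt.reachability_[opt.ordering_][:10])
```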
Core Distance & Reachability Distance
(figure: core distance of an object and reachability distances of its neighbors)
(figure: the reachability plot — reachability-distance versus the cluster-order of the objects, with undefined values separating the clusters)
Density-Based Clustering: OPTICS & Applications:
http://www.dbs.informatik.uni-muenchen.de/Forschung/KDD/Clustering/OPTICS/Demo
• DENsity-based CLUstEring by Hinneburg & Keim (KDD'98)
• Using statistical density functions:
• influence of y on x: fGauss(x, y) = e^(−d(x, y)²/(2σ²))
• total influence on x: fDGauss(x) = Σi e^(−d(x, xi)²/(2σ²))
• gradient of x in the direction of xi:
∇fDGauss(x, xi) = Σi (xi − x) · e^(−d(x, xi)²/(2σ²))
• Major features:
• Solid mathematical foundation
• Good for data sets with large amounts of noise
• Allows a compact mathematical description of arbitrarily shaped
clusters in high-dimensional data sets
• Significantly faster than existing algorithms (e.g., DBSCAN)
• But needs a large number of parameters
DENCLUE: Using Statistical Density Functions
• Uses grid cells but only keeps information about grid cells
that do actually contain data points and manages these cells
in a tree-based access structure
• Influence function: describes the impact of a data point
within its neighborhood
• Overall density of the data space can be calculated as the
sum of the influence function of all data points
• Clusters can be determined mathematically by identifying
density attractors
• Density attractors are local maxima of the overall density
function
• Center defined clusters: assign to each density attractor the
points density attracted to it
• Arbitrary shaped cluster: merge density attractors that are
connected through paths of high density (> threshold)
Denclue: Technical Essence
Density Attractor (figure: density attractors as local maxima of the overall density function)
Center-Defined and Arbitrary (figure: center-defined vs. arbitrary-shaped clusters)
5.5 Grid-Based Methods
•Using multi-resolution grid data structure
•Several interesting methods
• STING (a STatistical INformation Grid approach) by
Wang, Yang and Muntz (1997)
• WaveCluster by Sheikholeslami, Chatterjee, and Zhang
(VLDB’98)
•A multi-resolution clustering approach using
wavelet method
• CLIQUE: Agrawal, et al. (SIGMOD’98)
•Both grid-based and subspace clustering
Grid-Based Clustering Method
•Wang, Yang and Muntz (VLDB’97)
•The spatial area is divided into rectangular cells
•There are several levels of cells corresponding to
different levels of resolution
STING: A Statistical Information Grid
Approach
• Each cell at a high level is partitioned into a number of
smaller cells in the next lower level
• Statistical info of each cell is calculated and stored
beforehand and is used to answer queries
• Parameters of higher level cells can be easily calculated
from parameters of lower level cell
• count, mean, s, min, max
• type of distribution—normal, uniform, etc.
• Use a top-down approach to answer spatial data queries
• Start from a pre-selected layer—typically with a small
number of cells
• For each cell in the current level compute the confidence
interval
The STING Clustering Method
• Remove the irrelevant cells from further consideration
• When finished examining the current layer, proceed to the
next lower level
• Repeat this process until the bottom layer is reached
• Advantages:
• Query-independent, easy to parallelize, incremental
update
• O(K), where K is the number of grid cells at the lowest
level
• Disadvantages:
• All the cluster boundaries are either horizontal or
vertical, and no diagonal boundary is detected
STING Algorithm and Its Analysis
• Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98)
• Automatically identifying subspaces of a high
dimensional data space that allow better clustering
than original space
• CLIQUE can be considered as both density-based and
grid-based
• It partitions each dimension into the same number of
equal-length intervals
• It partitions an m-dimensional data space into
non-overlapping rectangular units
• A unit is dense if the fraction of total data points contained
in the unit exceeds the input model parameter
• A cluster is a maximal set of connected dense units within a
subspace
CLIQUE (Clustering In QUEst)
1. Partition the data space and find the number of
points that lie inside each cell of the partition.
2. Identify the subspaces that contain clusters using
the Apriori principle
3. Identify clusters
1. Determine dense units in all subspaces of interest
2. Determine connected dense units in all subspaces of
interest.
4. Generate minimal description for the clusters
1. Determine maximal regions that cover a cluster of
connected dense units for each cluster
2. Determination of minimal cover for each cluster
CLIQUE: The Major Steps
(figure: the (age, salary) and (age, vacation) planes, with age partitioned over 20–60, salary in units of 10,000, and vacation in weeks; dense units found with density threshold τ = 3 in each subspace are intersected to locate a candidate cluster in the 3-D (age, vacation, salary) space)
•Strength
• automatically finds subspaces of the highest
dimensionality such that high density clusters exist in
those subspaces
• insensitive to the order of records in input and does not
presume some canonical data distribution
• scales linearly with the size of input and has good
scalability as the number of dimensions in the data
increases
•Weakness
• The accuracy of the clustering result may be degraded at
the expense of simplicity of the method
Strength and Weakness of CLIQUE
5.6 Evaluation of Clustering
• Assess if non-random structure exists in the data by
measuring the probability that the data is generated by
a uniform data distribution
• Test spatial randomness with a statistical test: the Hopkins statistic
• Given a dataset D regarded as a sample of a random variable
o, determine how far away o is from being uniformly
distributed in the data space
• Sample n points, p1, …, pn, uniformly from the data space
containing D. For each pi, find its nearest neighbor in D:
xi = min{dist (pi, v)} where v in D
• Sample n points, q1, …, qn, uniformly from D. For each qi,
find its nearest neighbor in D – {qi}: yi = min{dist (qi, v)}
where v in D and v ≠ qi
• Calculate the Hopkins statistic: H = Σ xi / (Σ xi + Σ yi)
• If D is uniformly distributed, Σ xi and Σ yi will be close to each
other and H is close to 0.5. If D is clustered, H is close to 1
Assessing Clustering Tendency
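A minimal sketch of the Hopkins statistic as described above, assuming numeric data and using the bounding box of D as the data space:

```python
import numpy as np
from scipy.spatial import cKDTree

def hopkins(D, n=50, seed=0):
    rng = np.random.default_rng(seed)
    tree = cKDTree(D)
    # p_i: sampled uniformly from the data space (bounding box of D)
    P = rng.uniform(D.min(axis=0), D.max(axis=0), size=(n, D.shape[1]))
    x = tree.query(P, k=1)[0]                # nearest-neighbor distance in D
    # q_i: sampled uniformly from D itself
    idx = rng.choice(len(D), size=n, replace=False)
    y = tree.query(D[idx], k=2)[0][:, 1]     # nearest neighbor in D - {q_i}
    # ~0.5 for uniform data; close to 1 for clustered data
    return x.sum() / (x.sum() + y.sum())

D = np.random.default_rng(1).normal(size=(200, 2))  # illustrative data
print(hopkins(D))
```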
• Empirical method
• # of clusters ≈ √(n/2) for a dataset of n points
• Elbow method
• Use the turning point in the curve of sum of within cluster variance
w.r.t the # of clusters
• Cross validation method
• Divide a given data set into m parts
• Use m – 1 parts to obtain a clustering model
• Use the remaining part to test the quality of the clustering
• E.g., For each point in the test set, find the closest centroid,
and use the sum of squared distance between all points in the
test set and the closest centroids to measure how well the
model fits the test set
• For any k > 0, repeat it m times, compare the overall quality
measure w.r.t. different k’s, and find # of clusters that fits the data
the best
Determine the Number of Clusters
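The elbow method from the list above can be sketched by tracking the within-cluster SSE (scikit-learn's inertia_) as k grows; the data is illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=7)

# Look for the "elbow" where the SSE curve flattens out
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=7).fit(X)
    print(k, round(km.inertia_, 1))
```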
• Two methods: extrinsic vs. intrinsic
• Extrinsic: supervised, i.e., the ground truth is available
• Compare a clustering against the ground truth using
certain clustering quality measure
• Ex. BCubed precision and recall metrics
• Intrinsic: unsupervised, i.e., the ground truth is unavailable
• Evaluate the goodness of a clustering by considering
how well the clusters are separated, and how compact
the clusters are
• Ex. Silhouette coefficient
Measuring Clustering Quality
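For the intrinsic route, scikit-learn's silhouette_score returns the average silhouette coefficient; the blob data here is illustrative:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))  # in [-1, 1]; higher = better separated/compact
```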
• Clustering quality measure: Q(C, Cg), for a clustering C given
the ground truth Cg
• Q is good if it satisfies the following 4 essential criteria
• Cluster homogeneity: the purer, the better
• Cluster completeness: should assign objects belonging to
the same category in the ground truth to the same
cluster
• Rag bag: putting a heterogeneous object into a pure
cluster should be penalized more than putting it into a
rag bag (i.e., “miscellaneous” or “other” category)
• Small cluster preservation: splitting a small category into
pieces is more harmful than splitting a large category
into pieces
Measuring Clustering Quality:
Extrinsic Methods
• Cluster analysis groups objects based on their similarity and has
wide applications
• Measure of similarity can be computed for various types of data
• Clustering algorithms can be categorized into partitioning methods,
hierarchical methods, density-based methods, grid-based methods,
and model-based methods
• K-means and K-medoids algorithms are popular partitioning-based
clustering algorithms
• Birch and Chameleon are interesting hierarchical clustering
algorithms, and there are also probabilistic hierarchical clustering
algorithms
• DBSCAN, OPTICS, and DENCLUE are interesting density-based
algorithms
• STING and CLIQUE are grid-based methods, where CLIQUE is also a
subspace clustering algorithm
• Quality of clustering results can be evaluated in various ways
Summary
1. Jiawei Han and Micheline Kamber, Data Mining: Concepts and
Techniques Third Edition, Elsevier, 2012
2. Ian H. Witten, Eibe Frank, Mark A. Hall, Data Mining: Practical
Machine Learning Tools and Techniques 3rd Edition, Elsevier,
2011
3. Markus Hofmann and Ralf Klinkenberg, RapidMiner: Data Mining
Use Cases and Business Analytics Applications, CRC Press Taylor
& Francis Group, 2014
4. Daniel T. Larose, Discovering Knowledge in Data: an Introduction
to Data Mining, John Wiley & Sons, 2005
5. Ethem Alpaydin, Introduction to Machine Learning, 3rd ed., MIT
Press, 2014
6. Florin Gorunescu, Data Mining: Concepts, Models and
Techniques, Springer, 2011
7. Oded Maimon and Lior Rokach, Data Mining and Knowledge
Discovery Handbook Second Edition, Springer, 2010
8. Warren Liao and Evangelos Triantaphyllou (eds.), Recent
Advances in Data Mining of Enterprise Data: Algorithms and
Applications, World Scientific, 2007
References

More Related Content

Similar to algoritma klastering.pdf

Chapter 10.1,2,3 pdf.pdf
Chapter 10.1,2,3 pdf.pdfChapter 10.1,2,3 pdf.pdf
Chapter 10.1,2,3 pdf.pdfAmy Aung
 
Unsupervised learning Modi.pptx
Unsupervised learning Modi.pptxUnsupervised learning Modi.pptx
Unsupervised learning Modi.pptxssusere1fd42
 
Cluster Analysis.pptx
Cluster Analysis.pptxCluster Analysis.pptx
Cluster Analysis.pptxRvishnupriya2
 
Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...
Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...
Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...Maninda Edirisooriya
 
Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...
Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...
Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...Salah Amean
 
Clusters techniques
Clusters techniquesClusters techniques
Clusters techniquesrajshreemuthiah
 
Clustering[306] [Read-Only].pdf
Clustering[306] [Read-Only].pdfClustering[306] [Read-Only].pdf
Clustering[306] [Read-Only].pdfigeabroad
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3Nandhini S
 
Cluster_saumitra.ppt
Cluster_saumitra.pptCluster_saumitra.ppt
Cluster_saumitra.pptssuser6b3336
 
Data mining Techniques
Data mining TechniquesData mining Techniques
Data mining TechniquesSulman Ahmed
 
What is cluster analysis
What is cluster analysisWhat is cluster analysis
What is cluster analysisPrabhat gangwar
 
Training machine learning k means 2017
Training machine learning k means 2017Training machine learning k means 2017
Training machine learning k means 2017Iwan Sofana
 
Capter10 cluster basic
Capter10 cluster basicCapter10 cluster basic
Capter10 cluster basicHouw Liong The
 
Capter10 cluster basic : Han & Kamber
Capter10 cluster basic : Han & KamberCapter10 cluster basic : Han & Kamber
Capter10 cluster basic : Han & KamberHouw Liong The
 
Chapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.pptChapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.pptSubrata Kumer Paul
 
CLUSTERING
CLUSTERINGCLUSTERING
CLUSTERINGAman Jatain
 

Similar to algoritma klastering.pdf (20)

Clusteryanam
ClusteryanamClusteryanam
Clusteryanam
 
Chapter 10.1,2,3 pdf.pdf
Chapter 10.1,2,3 pdf.pdfChapter 10.1,2,3 pdf.pdf
Chapter 10.1,2,3 pdf.pdf
 
Unsupervised learning Modi.pptx
Unsupervised learning Modi.pptxUnsupervised learning Modi.pptx
Unsupervised learning Modi.pptx
 
Cluster Analysis.pptx
Cluster Analysis.pptxCluster Analysis.pptx
Cluster Analysis.pptx
 
Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...
Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...
Lecture 11 - KNN and Clustering, a lecture in subject module Statistical & Ma...
 
Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...
Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...
Data Mining Concepts and Techniques, Chapter 10. Cluster Analysis: Basic Conc...
 
Clusters techniques
Clusters techniquesClusters techniques
Clusters techniques
 
Clustering[306] [Read-Only].pdf
Clustering[306] [Read-Only].pdfClustering[306] [Read-Only].pdf
Clustering[306] [Read-Only].pdf
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3
 
Cluster_saumitra.ppt
Cluster_saumitra.pptCluster_saumitra.ppt
Cluster_saumitra.ppt
 
Data mining Techniques
Data mining TechniquesData mining Techniques
Data mining Techniques
 
Chapter 5.pdf
Chapter 5.pdfChapter 5.pdf
Chapter 5.pdf
 
What is cluster analysis
What is cluster analysisWhat is cluster analysis
What is cluster analysis
 
Training machine learning k means 2017
Training machine learning k means 2017Training machine learning k means 2017
Training machine learning k means 2017
 
Clustering
ClusteringClustering
Clustering
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
Capter10 cluster basic
Capter10 cluster basicCapter10 cluster basic
Capter10 cluster basic
 
Capter10 cluster basic : Han & Kamber
Capter10 cluster basic : Han & KamberCapter10 cluster basic : Han & Kamber
Capter10 cluster basic : Han & Kamber
 
Chapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.pptChapter 10. Cluster Analysis Basic Concepts and Methods.ppt
Chapter 10. Cluster Analysis Basic Concepts and Methods.ppt
 
CLUSTERING
CLUSTERINGCLUSTERING
CLUSTERING
 

Recently uploaded

Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)jennyeacort
 
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degreeyuu sss
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Sapana Sha
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.natarajan8993
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfSocial Samosa
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdfHuman37
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfBoston Institute of Analytics
 
Defining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data StoryDefining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data StoryJeremy Anderson
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Colleen Farrelly
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsappssapnasaifi408
 
Multiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdfMultiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdfchwongval
 
MK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxMK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxUnduhUnggah1
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxEmmanuel Dauda
 
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSINGmarianagonzalez07
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...dajasot375
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...Boston Institute of Analytics
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFAAndrei Kaleshka
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 

Recently uploaded (20)

Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
 
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf
 
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
 
Defining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data StoryDefining Constituents, Data Vizzes and Telling a Data Story
Defining Constituents, Data Vizzes and Telling a Data Story
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
 
Multiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdfMultiple time frame trading analysis -brianshannon.pdf
Multiple time frame trading analysis -brianshannon.pdf
 
MK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxMK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docx
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptx
 
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
2006_GasProcessing_HB (1).pdf HYDROCARBON PROCESSING
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFA
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 

algoritma klastering.pdf

  • 1. Data Mining: 5. Algoritma Klastering Romi Satria Wahono romi@romisatriawahono.net http://romisatriawahono.net/dm WA/SMS: +6281586220090 1
  • 2. Romi Satria Wahono • SMA Taruna Nusantara Magelang (1993) • B.Eng, M.Eng and Ph.D in Software Engineering Saitama University Japan (1994-2004) Universiti Teknikal Malaysia Melaka (2014) • Research Interests in Software Engineering and Machine Learning • LIPI Researcher (2004-2007) • Founder and CEO: • PT IlmuKomputerCom Braindevs Sistema • PT Brainmatics Cipta Informatika • Professional Member of IEEE, ACM and PMI • IT Award Winners from WSIS (United Nations), Kemdikbud, LIPI, etc • Industrial IT Certifications: TOGAF, ITIL, CCNA, etc • Enterprise Architecture Consultant: KPK, UNSRI, LIPI, Ristek Dikti, DJPK Kemenkeu, Kemsos, INSW, etc 2
  • 3. 8. Text Mining 7. Algoritma Estimasi dan Forecasting 6. Algoritma Asosiasi 5. Algoritma Klastering 4. Algoritma Klasifikasi 3. Persiapan Data 2. Proses Data Mining 1. Pengantar Data Mining 3 Course Outline
  • 4. 5. Algoritma Klastering 5.1 Cluster Analysis: Basic Concepts 5.2 Partitioning Methods 5.3 Hierarchical Methods 5.4 Density-Based Methods 5.5 Grid-Based Methods 5.6 Evaluation of Clustering 4
  • 5. 5.1 Cluster Analysis: Basic Concepts 5
  • 6. • Cluster: A collection of data objects • similar (or related) to one another within the same group • dissimilar (or unrelated) to the objects in other groups • Cluster analysis (or clustering, data segmentation, …) • Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters • Unsupervised learning: no predefined classes (i.e., learning by observations vs. learning by examples: supervised) • Typical applications • As a stand-alone tool to get insight into data distribution • As a preprocessing step for other algorithms 6 What is Cluster Analysis?
  • 7. • Data reduction • Summarization: Preprocessing for regression, PCA, classification, and association analysis • Compression: Image processing: vector quantization • Hypothesis generation and testing • Prediction based on groups • Cluster & find characteristics/patterns for each group • Finding K-nearest Neighbors • Localizing search to one or a small number of clusters • Outlier detection: Outliers are often viewed as those “far away” from any cluster 7 Applications of Cluster Analysis
  • 8. • Biology: taxonomy of living things: kingdom, phylum, class, order, family, genus and species • Information retrieval: document clustering • Land use: Identification of areas of similar land use in an earth observation database • Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs • City-planning: Identifying groups of houses according to their house type, value, and geographical location • Earth-quake studies: Observed earth quake epicenters should be clustered along continent faults • Climate: understanding earth climate, find patterns of atmospheric and ocean • Economic Science: market research 8 Clustering: Application Examples
  • 9. • Feature selection • Select info concerning the task of interest • Minimal information redundancy • Proximity measure • Similarity of two feature vectors • Clustering criterion • Expressed via a cost function or some rules • Clustering algorithms • Choice of algorithms • Validation of the results • Validation test (also, clustering tendency test) • Interpretation of the results • Integration with applications 9 Basic Steps to Develop a Clustering Task
  • 10. •A good clustering method will produce high quality clusters • high intra-class similarity: cohesive within clusters • low inter-class similarity: distinctive between clusters •The quality of a clustering method depends on • the similarity measure used by the method • its implementation, and • Its ability to discover some or all of the hidden patterns 10 Quality: What Is Good Clustering?
  • 11. • Dissimilarity/Similarity metric • Similarity is expressed in terms of a distance function, typically metric: d(i, j) • The definitions of distance functions are usually rather different for interval-scaled, boolean, categorical, ordinal ratio, and vector variables • Weights should be associated with different variables based on applications and data semantics • Quality of clustering: • There is usually a separate “quality” function that measures the “goodness” of a cluster. • It is hard to define “similar enough” or “good enough” • The answer is typically highly subjective 11 Measure the Quality of Clustering
  • 12. • Partitioning criteria • Single level vs. hierarchical partitioning (often, multi-level hierarchical partitioning is desirable) • Separation of clusters • Exclusive (e.g., one customer belongs to only one region) vs. non-exclusive (e.g., one document may belong to more than one class) • Similarity measure • Distance-based (e.g., Euclidian, road network, vector) vs. connectivity-based (e.g., density or contiguity) • Clustering space • Full space (often when low dimensional) vs. subspaces (often in high-dimensional clustering) 12 Considerations for Cluster Analysis
  • 13. • Scalability • Clustering all the data instead of only on samples • Ability to deal with different types of attributes • Numerical, binary, categorical, ordinal, linked, and mixture of these • Constraint-based clustering • User may give inputs on constraints • Use domain knowledge to determine input parameters • Interpretability and usability • Others • Discovery of clusters with arbitrary shape • Ability to deal with noisy data • Incremental clustering and insensitivity to input order • High dimensionality 13 Requirements and Challenges
  • 14. • Partitioning approach: • Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of square errors • Typical methods: k-means, k-medoids, CLARANS • Hierarchical approach: • Create a hierarchical decomposition of the set of data (or objects) using some criterion • Typical methods: Diana, Agnes, BIRCH, CAMELEON • Density-based approach: • Based on connectivity and density functions • Typical methods: DBSACN, OPTICS, DenClue • Grid-based approach: • based on a multiple-level granularity structure • Typical methods: STING, WaveCluster, CLIQUE 14 Major Clustering Approaches 1
  • 15. • Model-based: • A model is hypothesized for each of the clusters and tries to find the best fit of that model to each other • Typical methods: EM, SOM, COBWEB • Frequent pattern-based: • Based on the analysis of frequent patterns • Typical methods: p-Cluster • User-guided or constraint-based: • Clustering by considering user-specified or application-specific constraints • Typical methods: COD (obstacles), constrained clustering • Link-based clustering: • Objects are often linked together in various ways • Massive links can be used to cluster objects: SimRank, LinkClus 15 Major Clustering Approaches 2
• Partitioning method: Partition a database D of n objects into a set of k clusters, such that the sum of squared distances to the cluster representatives is minimized, i.e., E = Σi=1..k Σp∈Ci dist(p, ci)², where ci is the centroid or medoid of cluster Ci
• Given k, find a partition of k clusters that optimizes the chosen partitioning criterion
• Global optimum: exhaustively enumerate all partitions
• Heuristic methods: k-means and k-medoids algorithms
• k-means (MacQueen’67, Lloyd’57/’82): Each cluster is represented by the center of the cluster
• k-medoids or PAM (Partition Around Medoids) (Kaufman & Rousseeuw’87): Each cluster is represented by one of the objects in the cluster
17
Partitioning Algorithms: Basic Concept
• Given k, the k-means algorithm is implemented in four steps:
1. Partition objects into k nonempty subsets
2. Compute seed points as the centroids of the clusters of the current partitioning (the centroid is the center, i.e., mean point, of the cluster)
3. Assign each object to the cluster with the nearest seed point
4. Go back to Step 2; stop when the assignment no longer changes
18
The K-Means Clustering Method
• K = 2
• The initial data set: arbitrarily partition the objects into k nonempty subsets
• Repeat:
• Compute the centroid (i.e., mean point) of each partition and update the cluster centroids
• Reassign each object to the cluster of its nearest centroid
• Until no change (loop if needed)
19
An Example of K-Means Clustering
1. Choose the desired number of clusters k
2. Initialize the k cluster centers (centroids) at random
3. Assign each data object to the nearest cluster. Closeness between two objects is determined by their distance; k-Means uses the Euclidean distance d(x, y) = √(Σi=1..n (xi − yi)²), where x = x1, x2, …, xn and y = y1, y2, …, yn are two records with n attributes (columns)
4. Recompute the cluster centers from the current cluster memberships. A cluster center is the mean of all data objects in that cluster
5. Reassign every object using the new cluster centers. If the cluster centers no longer change, the clustering process is finished; otherwise, return to step 3 until the centers stabilize or there is no significant decrease in the SSE (Sum of Squared Errors). A minimal code sketch of these steps follows below
20
Steps of the k-Means Algorithm
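As a companion to the five steps above, here is a minimal k-means sketch in NumPy. The toy data, the value of k, and the seed are illustrative assumptions rather than values from the slides, and empty-cluster handling is deliberately omitted to keep the sketch short.

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: pick k distinct data points as the initial centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 3: assign each object to its nearest centroid (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid as the mean of its members
        # (assumes no cluster becomes empty; a robust version must handle that)
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 5: stop once the centroids no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    sse = ((X - centroids[labels]) ** 2).sum()   # Sum of Squared Errors
    return labels, centroids, sse

X = np.array([[1.0, 1.0], [2.0, 1.0], [4.0, 3.0], [5.0, 4.0], [1.0, 2.0]])
print(kmeans(X, k=2))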
1. Set the number of clusters k = 2
2. Choose the initial centroids at random, e.g., m1 = (1, 1) and m2 = (2, 1) from the data shown on the slide
3. Assign each object to the cluster whose centroid is closest (smallest distance). The result: cluster1 = {A, E, G}, cluster2 = {B, C, D, F, H}, with the SSE value as computed on the slide
21
Worked Example – Iteration 1
4. Compute the new centroid values
5. Reassign every object using the new cluster centers, and compute the new SSE value
22
Iteration 2
4. The cluster memberships changed: cluster1 = {A, E, G, H} and cluster2 = {B, C, D, F}, so compute the new centroids again: m1 = (1.25, 1.75) and m2 = (4, 2.75)
5. Reassign every object using the new cluster centers, and compute the new SSE value
23
Iteration 3
• As the table shows, no cluster memberships change anymore
• Final result: cluster1 = {A, E, G, H} and cluster2 = {B, C, D, F}, with SSE = 6.25 after 3 iterations
24
Final Result
• Run the experiment following Matthew North, Data Mining for the Masses, 2012, Chapter 6 k-Means Clustering, pp. 91-103 (CoronaryHeartDisease.csv)
• Draw a chart and choose Scatter 3D Color to visualize the clustering results
• Analyze what Sonia has done, and what benefits k-Means clustering brings to her work
25
Exercise
• Measure performance with Cluster Distance Performance to obtain the Davies-Bouldin Index (DBI)
• A lower DBI value means the clusters we formed are better (see the sketch below)
26
Exercise
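Outside RapidMiner, the same index can be computed with scikit-learn. A hedged sketch, where the random data and the choice of k = 2 are illustrative assumptions:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

X = np.random.default_rng(0).normal(size=(200, 3))            # illustrative data
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("DBI:", davies_bouldin_score(X, labels))                # lower is better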
• Cluster the data in IMFdata.csv (http://romisatriawahono.net/lecture/dm/dataset)
27
Exercise
• Strength:
• Efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n
• Compare: PAM: O(k(n-k)²), CLARA: O(ks² + k(n-k))
• Comment: often terminates at a local optimum
• Weakness:
• Applicable only to objects in a continuous n-dimensional space
• Use the k-modes method for categorical data; in comparison, k-medoids can be applied to a wide range of data
• Need to specify k, the number of clusters, in advance (there are ways to automatically determine the best k; see Hastie et al., 2009)
• Sensitive to noisy data and outliers
• Not suitable for discovering clusters with non-convex shapes
28
Comments on the K-Means Method
• Most variants of k-means differ in:
• Selection of the initial k means
• Dissimilarity calculations
• Strategies to calculate cluster means
• Handling categorical data: k-modes
• Replacing means of clusters with modes
• Using new dissimilarity measures to deal with categorical objects
• Using a frequency-based method to update modes of clusters
• A mixture of categorical and numerical data: the k-prototype method
29
Variations of the K-Means Method
• The k-means algorithm is sensitive to outliers!
• An object with an extremely large value may substantially distort the distribution of the data
• K-Medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in the cluster
(Figure: the same data clustered around means vs. around medoids)
30
What Is the Problem of the K-Means Method?
• PAM with K = 2; the example figure compares a configuration with total cost 20 against a candidate swap with total cost 26:
1. Arbitrarily choose k objects as the initial medoids
2. Assign each remaining object to the nearest medoid
3. Randomly select a non-medoid object, O_random
4. Compute the total cost of swapping a medoid with O_random
5. If the quality is improved, swap the medoid O with O_random
6. Loop until no change
31
PAM: A Typical K-Medoids Algorithm
• K-Medoids Clustering: find representative objects (medoids) in clusters
• PAM (Partitioning Around Medoids, Kaufmann & Rousseeuw 1987)
• Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering
• PAM works effectively for small data sets, but does not scale well to large data sets (due to its computational complexity)
• Efficiency improvements on PAM:
• CLARA (Kaufmann & Rousseeuw, 1990): PAM on samples
• CLARANS (Ng & Han, 1994): randomized re-sampling
32
The K-Medoid Clustering Method
• Uses a distance matrix as the clustering criterion
• This method does not require the number of clusters k as an input, but needs a termination condition
(Figure: five objects a–e; the agglomerative approach (AGNES) merges them bottom-up over steps 0–4, while the divisive approach (DIANA) splits them top-down in the reverse order)
34
Hierarchical Clustering
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical packages, e.g., Splus
• Uses the single-link method and the dissimilarity matrix
• Merges nodes that have the least dissimilarity
• Goes on in a non-descending fashion
• Eventually all nodes belong to the same cluster
35
AGNES (Agglomerative Nesting)
• Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram
• A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster
36
Dendrogram: Shows How Clusters are Merged
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., Splus
• Inverse order of AGNES
• Eventually each node forms a cluster on its own
37
DIANA (Divisive Analysis)
• Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = min(d(tip, tjq))
• Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = max(d(tip, tjq))
• Average: average distance between an element in one cluster and an element in the other, i.e., dist(Ki, Kj) = avg(d(tip, tjq))
• Centroid: distance between the centroids of two clusters, i.e., dist(Ki, Kj) = dist(Ci, Cj)
• Medoid: distance between the medoids of two clusters, i.e., dist(Ki, Kj) = dist(Mi, Mj), where a medoid is a chosen, centrally located object in the cluster
• A sketch comparing these linkage variants follows below
38
Distance between Clusters
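A hedged sketch comparing these linkage choices with SciPy's hierarchical clustering; the toy points are an illustrative assumption. Note how single link and complete link can disagree on borderline points.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4], [1, 2]], dtype=float)
for method in ("single", "complete", "average", "centroid"):
    Z = linkage(X, method=method)                     # agglomerative merge tree
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters
    print(method, labels)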
• Centroid: the “middle” of a cluster, Cm = (Σi=1..N xi) / N
• Radius: square root of the average distance from any point of the cluster to its centroid, R = √(Σi=1..N (xi − Cm)² / N)
• Diameter: square root of the average squared distance between all pairs of points in the cluster, D = √(Σi Σj (xi − xj)² / (N(N−1)))
39
Centroid, Radius and Diameter of a Cluster (for numerical data sets)
• Major weaknesses of agglomerative clustering methods:
• Can never undo what was done previously
• Do not scale well: time complexity of at least O(n²), where n is the number of total objects
• Integration of hierarchical and distance-based clustering:
• BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters
• CHAMELEON (1999): hierarchical clustering using dynamic modeling
40
Extensions to Hierarchical Clustering
• Zhang, Ramakrishnan & Livny, SIGMOD’96
• Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering
• Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure)
• Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree
• Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
• Weakness: handles only numeric data, and is sensitive to the order of the data records
41
BIRCH (Balanced Iterative Reducing and Clustering Using Hierarchies)
• Clustering Feature (CF): CF = (N, LS, SS)
• N: number of data points
• LS: linear sum of the N points, Σi=1..N Xi
• SS: square sum of the N points, Σi=1..N Xi²
• Example: the five points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16,30), (54,190)), as the sketch below verifies
42
Clustering Feature Vector in BIRCH
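A tiny sketch computing the CF triple for the five example points; the additivity note in the comments is a standard BIRCH property, stated here as background.

import numpy as np

points = np.array([[3, 4], [2, 6], [4, 5], [4, 7], [3, 8]], dtype=float)

N = len(points)                    # 5
LS = points.sum(axis=0)            # linear sum  -> [16. 30.]
SS = (points ** 2).sum(axis=0)     # square sum  -> [54. 190.]
print(N, LS, SS)

# CFs are additive: merging two sub-clusters just adds their triples,
# which is what lets BIRCH maintain the tree incrementally.
print("centroid:", LS / N)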
• Clustering feature:
• A summary of the statistics for a given subcluster: the 0th, 1st, and 2nd moments of the subcluster from the statistical point of view
• Registers crucial measurements for computing clusters and utilizes storage efficiently
• A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering
• A nonleaf node in the tree has descendants or “children”
• The nonleaf nodes store the sums of the CFs of their children
• A CF tree has two parameters:
• Branching factor: the maximum number of children
• Threshold: the maximum diameter of the sub-clusters stored at the leaf nodes
43
CF-Tree in BIRCH
(Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6; non-leaf nodes hold CF entries pointing to child nodes, and leaf nodes hold CF entries linked by prev/next pointers)
44
The CF Tree Structure
• For each point in the input:
• Find the closest leaf entry
• Add the point to the leaf entry and update the CF
• If the entry diameter > max_diameter, split the leaf, and possibly its parents
• The algorithm is O(n)
• Concerns:
• Sensitive to the insertion order of the data points
• Because leaf nodes have a fixed size, the resulting clusters may not be natural
• Clusters tend to be spherical, given the radius and diameter measures
45
The BIRCH Algorithm
• CHAMELEON: G. Karypis, E. H. Han, and V. Kumar, 1999
• Measures similarity based on a dynamic model
• Two clusters are merged only if the interconnectivity and closeness (proximity) between them are high relative to the internal interconnectivity of the clusters and the closeness of items within the clusters
• Graph-based, two-phase algorithm:
1. Use a graph-partitioning algorithm to cluster objects into a large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm to find the genuine clusters by repeatedly combining these sub-clusters
46
CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999)
• k-nearest-neighbor graphs are built from the original data in 2D
• EC{Ci,Cj}: the absolute interconnectivity between Ci and Cj, i.e., the sum of the weights of the edges that connect vertices in Ci to vertices in Cj
• Internal interconnectivity of a cluster Ci: the size of its min-cut bisector ECCi (i.e., the weighted sum of the edges that partition the graph into two roughly equal parts)
• Relative Interconnectivity (RI): RI(Ci, Cj) = |EC{Ci,Cj}| / ((|ECCi| + |ECCj|) / 2)
47
KNN Graphs & Interconnectivity
• Relative Closeness (RC) between a pair of clusters Ci and Cj: the absolute closeness between Ci and Cj normalized w.r.t. the internal closeness of the two clusters: RC(Ci, Cj) = S̄EC{Ci,Cj} / ((|Ci|/(|Ci|+|Cj|))·S̄ECCi + (|Cj|/(|Ci|+|Cj|))·S̄ECCj)
• S̄ECCi and S̄ECCj are the average weights of the edges in the min-cut bisectors of clusters Ci and Cj, respectively, and S̄EC{Ci,Cj} is the average weight of the edges that connect vertices in Ci to vertices in Cj
• Merge sub-clusters:
• Merge only those pairs of clusters whose RI and RC are both above some user-specified thresholds
• Or merge the pair maximizing a function that combines RI and RC
48
Relative Closeness & Merge of Sub-Clusters
• Overall framework: Data Set → construct a (K-NN) sparse graph → partition the graph → merge partitions → final clusters
• K-NN graph: p and q are connected if q is among the top k closest neighbors of p
• Relative interconnectivity: connectivity of c1 and c2 over their internal connectivity
• Relative closeness: closeness of c1 and c2 over their internal closeness
49
Overall Framework of CHAMELEON
• Algorithmic hierarchical clustering:
• Nontrivial to choose a good distance measure
• Hard to handle missing attribute values
• Optimization goal not clear: heuristic, local search
• Probabilistic hierarchical clustering:
• Uses probabilistic models to measure distances between clusters
• Generative model: regard the set of data objects to be clustered as a sample of the underlying data generation mechanism to be analyzed
• Easy to understand, same efficiency as the algorithmic agglomerative clustering method, can handle partially observed data
• In practice, assume the generative models adopt common distribution functions, e.g., the Gaussian or Bernoulli distribution, governed by parameters
51
Probabilistic Hierarchical Clustering
• Given a set of 1-D points X = {x1, …, xn} for clustering analysis, assume they are generated by a Gaussian distribution N(μ, σ²)
• The probability that a point xi ∈ X is generated by the model: P(xi) = (1 / (√(2π)·σ)) · exp(−(xi − μ)² / (2σ²))
• The likelihood that X is generated by the model: L(N(μ, σ²) : X) = ∏i=1..n P(xi)
• The task of learning the generative model: find the parameters μ and σ² such that the likelihood is maximized (the maximum likelihood estimate); a short sketch follows below
52
Generative Model
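A hedged sketch of the maximum-likelihood fit for this 1-D Gaussian model; the sample X is an illustrative assumption.

import numpy as np

X = np.array([1.2, 0.8, 1.1, 3.9, 4.2, 4.0])
mu = X.mean()                    # MLE of the mean
sigma2 = ((X - mu) ** 2).mean()  # MLE of the variance (divides by n, not n - 1)

# log-likelihood of X under N(mu, sigma2)
ll = -0.5 * len(X) * np.log(2 * np.pi * sigma2) - ((X - mu) ** 2).sum() / (2 * sigma2)
print(mu, sigma2, ll)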
(Figures: a bean machine, where balls dropped over rows of pins pile up into an approximately Gaussian heap, alongside 1-D and 2-D Gaussian density plots; sources: Wikipedia and http://home.dei.polimi.it)
53
Gaussian Distribution
• For a set of objects partitioned into m clusters C1, …, Cm, the quality can be measured by Q({C1, …, Cm}) = ∏i=1..m P(Ci), where P() is the maximum likelihood
• If we merge two clusters Cj1 and Cj2 into a cluster Cj1 ∪ Cj2, the change in quality of the overall clustering is the factor P(Cj1 ∪ Cj2) / (P(Cj1) · P(Cj2))
• Distance between clusters C1 and C2: dist(C1, C2) = −log [P(C1 ∪ C2) / (P(C1) · P(C2))]
54
A Probabilistic Hierarchical Clustering Algorithm
• Clustering based on density (a local cluster criterion), such as density-connected points
• Major features:
• Discover clusters of arbitrary shape
• Handle noise
• One scan
• Need density parameters as a termination condition
• Several interesting studies:
• DBSCAN: Ester, et al. (KDD’96)
• OPTICS: Ankerst, et al. (SIGMOD’99)
• DENCLUE: Hinneburg & D. Keim (KDD’98)
• CLIQUE: Agrawal, et al. (SIGMOD’98) (more grid-based)
56
Density-Based Clustering Methods
• Two parameters:
• Eps: maximum radius of the neighbourhood
• MinPts: minimum number of points in the Eps-neighbourhood of a point
• NEps(q) = {p belongs to D | dist(p, q) ≤ Eps}
• Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
• p belongs to NEps(q), and
• the core point condition holds: |NEps(q)| ≥ MinPts
(Figure: p directly density-reachable from q, with MinPts = 5 and Eps = 1 cm)
57
Density-Based Clustering: Basic Concepts
• Density-reachable:
• A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p1, …, pn, with p1 = q and pn = p, such that pi+1 is directly density-reachable from pi
• Density-connected:
• A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts
58
Density-Reachable and Density-Connected
• Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
• Discovers clusters of arbitrary shape in spatial databases with noise
(Figure: core, border, and outlier points with Eps = 1 cm, MinPts = 5)
59
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
1. Arbitrarily select a point p
2. Retrieve all points density-reachable from p w.r.t. Eps and MinPts
3. If p is a core point, a cluster is formed
4. If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
5. Continue the process until all of the points have been processed
• If a spatial index is used, the computational complexity of DBSCAN is O(n log n), where n is the number of database objects; otherwise, the complexity is O(n²)
• A usage sketch follows below
60
DBSCAN: The Algorithm
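A hedged sketch running DBSCAN via scikit-learn and separating core, border, and noise points; the synthetic data, eps, and min_samples values are illustrative assumptions.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(3, 0.3, (50, 2)),
               rng.uniform(-1, 4, (10, 2))])    # two dense blobs plus scattered noise

db = DBSCAN(eps=0.5, min_samples=5).fit(X)
labels = db.labels_                             # label -1 marks noise
core = np.zeros(len(X), dtype=bool)
core[db.core_sample_indices_] = True
border = (labels != -1) & ~core
print("clusters:", sorted(set(labels) - {-1}))
print("core:", core.sum(), "border:", border.sum(), "noise:", (labels == -1).sum())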
• DBSCAN is sensitive to its parameter settings; see the interactive demo at http://webdocs.cs.ualberta.ca/~yaling/Cluster/Applet/Code/Cluster.html
61
DBSCAN: Sensitive to Parameters
• OPTICS: Ordering Points To Identify the Clustering Structure
• Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
• Produces a special order of the database w.r.t. its density-based clustering structure
• This cluster-ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings
• Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure
• Can be represented graphically or using visualization techniques
62
OPTICS: A Cluster-Ordering Method (1999)
• Index-based: k = number of dimensions, N = number of points
• Complexity: O(N log N)
• Core distance of an object p: the smallest value ε such that the ε-neighborhood of p has at least MinPts objects. Let Nε(p) be the ε-neighborhood of p, where ε is a distance value:
• core-distanceε,MinPts(p) = Undefined if card(Nε(p)) < MinPts; otherwise MinPts-distance(p)
• Reachability distance of object p from core object q: the minimum radius value that makes p density-reachable from q:
• reachability-distanceε,MinPts(p, q) = Undefined if q is not a core object; otherwise max(core-distance(q), dist(q, p))
• A usage sketch follows below
63
OPTICS: Some Extension from DBSCAN
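A hedged sketch: scikit-learn's OPTICS exposes the cluster ordering, core distances, and reachability distances defined above; the data and min_samples are illustrative assumptions.

import numpy as np
from sklearn.cluster import OPTICS

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

opt = OPTICS(min_samples=5).fit(X)
order = opt.ordering_                 # the cluster-ordering of the points
reach = opt.reachability_[order]      # reachability-plot values in that order
core = opt.core_distances_[order]     # core distances in that order
print(np.round(reach[:10], 3))
print(np.round(core[:10], 3))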
(Figure: illustration of the core distance of an object and the reachability distances of its neighbors from it)
64
Core Distance & Reachability Distance
• Density-based clustering with OPTICS, demo and applications: http://www.dbs.informatik.uni-muenchen.de/Forschung/KDD/Clustering/OPTICS/Demo
66
Density-Based Clustering: OPTICS & Applications
• DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)
• Uses statistical density functions, e.g., a Gaussian influence function:
• Influence of y on x: fGauss(x, y) = e^(−d(x, y)² / (2σ²))
• Total influence on x: fGauss(x) = Σi=1..N e^(−d(x, xi)² / (2σ²))
• Gradient of the density in the direction of xi: ∇fGauss(x, xi) = Σi=1..N (xi − x) · e^(−d(x, xi)² / (2σ²))
• Major features:
• Solid mathematical foundation
• Good for data sets with large amounts of noise
• Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets
• Significantly faster than existing algorithms (e.g., DBSCAN)
• But needs a large number of parameters
67
DENCLUE: Using Statistical Density Functions
• Uses grid cells, but only keeps information about grid cells that actually contain data points, and manages these cells in a tree-based access structure
• Influence function: describes the impact of a data point within its neighborhood
• The overall density of the data space can be calculated as the sum of the influence functions of all data points
• Clusters can be determined mathematically by identifying density attractors, the local maxima of the overall density function (see the sketch below)
• Center-defined clusters: assign to each density attractor the points density-attracted to it
• Arbitrarily shaped clusters: merge density attractors that are connected through paths of high density (> threshold)
68
DENCLUE: Technical Essence
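A hedged sketch of the Gaussian influence/density function from the previous slide, evaluated over a toy 1-D dataset; sigma and the evaluation grid are illustrative assumptions.

import numpy as np

data = np.array([1.0, 1.2, 0.9, 4.0, 4.1])
sigma = 0.5

def density(x):
    # total influence on x = sum of the Gaussian influences of all data points
    return np.exp(-((x - data) ** 2) / (2 * sigma ** 2)).sum()

xs = np.linspace(0, 5, 501)
f = np.array([density(x) for x in xs])
print("density attractor near:", xs[f.argmax()])   # a local maximum of the density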
• Uses a multi-resolution grid data structure
• Several interesting methods:
• STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)
• WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB’98): a multi-resolution clustering approach using the wavelet method
• CLIQUE: Agrawal, et al. (SIGMOD’98): both grid-based and subspace clustering
72
Grid-Based Clustering Method
• Wang, Yang and Muntz (VLDB’97)
• The spatial area is divided into rectangular cells
• There are several levels of cells corresponding to different levels of resolution
73
STING: A Statistical Information Grid Approach
• Each cell at a high level is partitioned into a number of smaller cells at the next lower level
• Statistical information for each cell is calculated and stored beforehand and is used to answer queries
• Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells:
• count, mean, standard deviation (s), min, max
• type of distribution—normal, uniform, etc.
• Uses a top-down approach to answer spatial data queries
• Start from a pre-selected layer—typically one with a small number of cells
• For each cell in the current level, compute the confidence interval
74
The STING Clustering Method
• Remove the irrelevant cells from further consideration
• When finished examining the current layer, proceed to the next lower level
• Repeat this process until the bottom layer is reached
• Advantages:
• Query-independent, easy to parallelize, incremental update
• O(K), where K is the number of grid cells at the lowest level
• Disadvantages:
• All cluster boundaries are either horizontal or vertical; no diagonal boundary is detected
75
STING Algorithm and Its Analysis
• Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98)
• Automatically identifies the subspaces of a high-dimensional data space that allow better clustering than the original space
• CLIQUE can be considered both density-based and grid-based:
• It partitions each dimension into the same number of equal-length intervals
• It partitions an m-dimensional data space into non-overlapping rectangular units
• A unit is dense if the fraction of the total data points contained in the unit exceeds an input model parameter
• A cluster is a maximal set of connected dense units within a subspace
76
CLIQUE (Clustering In QUEst)
1. Partition the data space and find the number of points that lie inside each cell of the partition (see the sketch below)
2. Identify the subspaces that contain clusters using the Apriori principle
3. Identify clusters:
• Determine dense units in all subspaces of interest
• Determine connected dense units in all subspaces of interest
4. Generate a minimal description for the clusters:
• Determine the maximal regions that cover a cluster of connected dense units for each cluster
• Determine the minimal cover of each cluster
77
CLIQUE: The Major Steps
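A hedged sketch of step 1 for a 2-D space: lay a grid over the data, count the points per cell, and flag dense units. The grid resolution and the density threshold tau are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.2, (60, 2)),    # one dense region
               rng.uniform(0, 10, (40, 2))])     # background points

xi = 10     # number of equal-length intervals per dimension over [0, 10)
tau = 5     # a unit is dense if it contains more than tau points

# map each point to its grid cell and count the points per cell
cells = np.floor(X / 10 * xi).clip(0, xi - 1).astype(int)
counts = np.zeros((xi, xi), dtype=int)
for cx, cy in cells:
    counts[cx, cy] += 1

dense_units = np.argwhere(counts > tau)
print(dense_units)   # candidate dense cells; CLIQUE then joins connected ones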
(Figure: CLIQUE example with density threshold τ = 3; dense units are found in the (age, salary) and (age, vacation) subspaces, with axes age 20–60, Salary (10,000), and Vacation (weeks))
78
• Strengths:
• Automatically finds the subspaces of the highest dimensionality in which high-density clusters exist
• Insensitive to the order of records in the input, and does not presume any canonical data distribution
• Scales linearly with the size of the input, and has good scalability as the number of dimensions in the data increases
• Weakness:
• The accuracy of the clustering result may be degraded at the expense of the simplicity of the method
79
Strengths and Weaknesses of CLIQUE
5.6 Evaluation of Clustering
80
• Assess whether non-random structure exists in the data by measuring the probability that the data were generated by a uniform data distribution
• Test spatial randomness with a statistical test: the Hopkins Statistic
• Given a dataset D regarded as a sample of a random variable o, determine how far o is from being uniformly distributed in the data space
• Sample n points, p1, …, pn, uniformly from the data space of D. For each pi, find its nearest neighbor in D: xi = min{dist(pi, v)}, where v is in D
• Sample n points, q1, …, qn, uniformly from D. For each qi, find its nearest neighbor in D − {qi}: yi = min{dist(qi, v)}, where v is in D and v ≠ qi
• Calculate the Hopkins Statistic, here in the convention H = Σxi / (Σxi + Σyi)
• If D is uniformly distributed, Σxi and Σyi will be close to each other and H will be close to 0.5; if D is clustered, H is close to 1 (a code sketch follows below)
81
Assessing Clustering Tendency
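A hedged sketch of the Hopkins statistic under the convention used on this slide (H ≈ 0.5 for uniform data, H → 1 for clustered data); note that other texts use the complementary ratio. The sample size and toy data are illustrative assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins(D, n=50, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = D.min(axis=0), D.max(axis=0)
    # x_i: NN distance in D from n uniform random points in the data space
    P = rng.uniform(lo, hi, size=(n, D.shape[1]))
    x = NearestNeighbors(n_neighbors=1).fit(D).kneighbors(P)[0].ravel()
    # y_i: NN distance from n sampled data points to the nearest *other* point of D
    idx = rng.choice(len(D), size=n, replace=False)
    y = NearestNeighbors(n_neighbors=2).fit(D).kneighbors(D[idx])[0][:, 1]
    return x.sum() / (x.sum() + y.sum())

rng = np.random.default_rng(1)
uniform = rng.uniform(0, 1, (500, 2))
clustered = np.vstack([rng.normal(0.2, 0.02, (250, 2)),
                       rng.normal(0.8, 0.02, (250, 2))])
print(hopkins(uniform), hopkins(clustered))   # roughly 0.5 vs. close to 1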
• Empirical method:
• number of clusters ≈ √(n/2) for a dataset of n points
• Elbow method:
• Use the turning point in the curve of the sum of within-cluster variance w.r.t. the number of clusters (see the sketch below)
• Cross-validation method:
• Divide the given data set into m parts
• Use m − 1 parts to build a clustering model
• Use the remaining part to test the quality of the clustering
• E.g., for each point in the test set, find the closest centroid, and use the sum of the squared distances between all points in the test set and their closest centroids to measure how well the model fits the test set
• For any k > 0, repeat this m times, compare the overall quality measures w.r.t. different values of k, and pick the number of clusters that fits the data best
82
Determine the Number of Clusters
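A hedged sketch of the elbow method: compute the within-cluster SSE for a range of k and look for the turning point; the synthetic three-blob data is an illustrative assumption.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.4, (100, 2)) for c in (0, 4, 8)])

for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))   # inertia_ = sum of within-cluster SSE
# the drop in SSE flattens sharply after k = 3, which marks the "elbow"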
• Two kinds of methods: extrinsic vs. intrinsic
• Extrinsic: supervised, i.e., the ground truth is available
• Compare a clustering against the ground truth using a clustering quality measure
• E.g., the BCubed precision and recall metrics
• Intrinsic: unsupervised, i.e., the ground truth is unavailable
• Evaluate the goodness of a clustering by considering how well the clusters are separated and how compact they are
• E.g., the silhouette coefficient (a usage sketch follows below)
83
Measuring Clustering Quality
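A hedged sketch of intrinsic evaluation with the silhouette coefficient, which needs no ground truth; the data and candidate k values are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.4, (100, 2)), rng.normal(4, 0.4, (100, 2))])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))  # closer to 1 is better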
• Clustering quality measure: Q(C, Cg), for a clustering C given the ground truth Cg
• Q is good if it satisfies the following four essential criteria:
• Cluster homogeneity: the purer, the better
• Cluster completeness: objects belonging to the same category in the ground truth should be assigned to the same cluster
• Rag bag: putting a heterogeneous object into a pure cluster should be penalized more than putting it into a rag bag (i.e., a “miscellaneous” or “other” category)
• Small cluster preservation: splitting a small category into pieces is more harmful than splitting a large category into pieces
84
Measuring Clustering Quality: Extrinsic Methods
• Cluster analysis groups objects based on their similarity and has wide applications
• Measures of similarity can be computed for various types of data
• Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods
• K-means and k-medoids are popular partitioning-based clustering algorithms
• BIRCH and CHAMELEON are interesting hierarchical clustering algorithms, and there are also probabilistic hierarchical clustering algorithms
• DBSCAN, OPTICS, and DENCLUE are interesting density-based algorithms
• STING and CLIQUE are grid-based methods, where CLIQUE is also a subspace clustering algorithm
• The quality of clustering results can be evaluated in various ways
85
Summary
1. Jiawei Han and Micheline Kamber, Data Mining: Concepts and Techniques, Third Edition, Elsevier, 2012
2. Ian H. Witten, Eibe Frank, and Mark A. Hall, Data Mining: Practical Machine Learning Tools and Techniques, 3rd Edition, Elsevier, 2011
3. Markus Hofmann and Ralf Klinkenberg, RapidMiner: Data Mining Use Cases and Business Analytics Applications, CRC Press Taylor & Francis Group, 2014
4. Daniel T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, 2005
5. Ethem Alpaydin, Introduction to Machine Learning, 3rd Edition, MIT Press, 2014
6. Florin Gorunescu, Data Mining: Concepts, Models and Techniques, Springer, 2011
7. Oded Maimon and Lior Rokach, Data Mining and Knowledge Discovery Handbook, Second Edition, Springer, 2010
8. Warren Liao and Evangelos Triantaphyllou (eds.), Recent Advances in Data Mining of Enterprise Data: Algorithms and Applications, World Scientific, 2007
86
References