Clustering
2
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
3
What is Cluster Analysis?
• Finding groups of objects such that the objects in a
group will be similar (or related) to one another and
different from (or unrelated to) the objects in other
groups
Inter-cluster
distances are
maximized
Intra-cluster
distances are
minimized
4
What is Cluster Analysis?
• Cluster: a collection of data objects
– Similar to one another within the same cluster
– Dissimilar to the objects in other clusters
• Cluster analysis
– Grouping a set of data objects into clusters
• Clustering is unsupervised classification: no predefined
classes
• Typical applications
– As a stand-alone tool to get insight into data distribution
– As a preprocessing step for other algorithms
Why do we cluster?
• Clustering: given a collection of data objects, group
them so that
– Similar to one another within the same cluster
– Dissimilar to the objects in other clusters
• Clustering results are used:
– As a stand-alone tool to get insight into data
distribution
• Visualization of clusters may unveil important information
– As a preprocessing step for other algorithms
• Efficient indexing or compression often relies on
clustering
5
6
General Applications of Clustering
• Pattern Recognition (unsupervised)
• Spatial Data Analysis
– create thematic maps in GIS by clustering feature spaces
– detect spatial clusters and explain them in spatial data mining
• Image Processing
• Economic Science (especially market research)
• WWW
– Document classification
– Cluster Weblog data to discover groups of similar access
patterns
7
Examples of Clustering Applications
• Marketing: Help marketers discover distinct groups in their customer
bases, and then use this knowledge to develop targeted marketing
programs
• Land use: Identification of areas of similar land use in an earth
observation database
• Insurance: Identifying groups of motor insurance policy holders with
a high average claim cost
• City-planning: Identifying groups of houses according to their house
type, value, and geographical location
• Earthquake studies: Observed earthquake epicenters should be
clustered along continental faults
8
Notion of a Cluster can be Ambiguous
How many clusters?
Four Clusters
Two Clusters
Six Clusters
9
What Is Good Clustering?
• A good clustering method will produce high quality
clusters with
– high intra-class similarity
– low inter-class similarity
• The quality of a clustering result depends on both the
similarity measure used by the method and its
implementation.
• The quality of a clustering method is also measured by
its ability to discover some or all of the hidden patterns.
The clustering task
Group observations into groups so that observations belonging
to the same group are similar, whereas observations in
different groups are different
• Basic questions:
– What does “similar” mean?
– What is a good partition of the objects? i.e., how is the
quality of a solution measured?
– How do we find a good partition of the observations?
10
11
Types of Clusterings:
• A clustering is a set of clusters
• Important distinction between hierarchical and
partitional sets of clusters
• Partitional Clustering
– A division of data objects into non-overlapping subsets (clusters)
such that each data object is in exactly one subset
• Hierarchical clustering
– A set of nested clusters organized as a hierarchical tree
12
Partitional Clustering
Original Points A Partitional Clustering
13
Hierarchical Clustering
[Figure: points p1-p4 grouped by a traditional and a non-traditional hierarchical clustering, with the corresponding traditional and non-traditional dendrograms.]
14
Other Distinctions Between Sets of
Clusters
• Exclusive versus non-exclusive
– In non-exclusive clusterings, points may belong to multiple clusters.
– Can represent multiple classes or ‘border’ points
• Fuzzy versus non-fuzzy
– In fuzzy clustering, a point belongs to every cluster with some weight
between 0 and 1
– Weights must sum to 1
– Probabilistic clustering has similar characteristics
• Partial versus complete
– In some cases, we only want to cluster some of the data
• Heterogeneous versus homogeneous
– Clusters of widely different sizes, shapes, and densities
15
Types of Clusters
• Well-separated clusters
• Center-based clusters
• Contiguous clusters
• Density-based clusters
• Property or Conceptual
• Described by an Objective Function
16
Types of Clusters: Well-Separated
• Well-Separated Clusters:
– A cluster is a set of points such that any point in a cluster is
closer (or more similar) to every other point in the cluster than to
any point not in the cluster.
3 well-separated clusters
17
Types of Clusters: Center-Based
• Center-based
– A cluster is a set of objects such that an object in a cluster is
closer (more similar) to the “center” of a cluster than to the
center of any other cluster
– The center of a cluster is often a centroid, the average of all the
points in the cluster, or a medoid, the most “representative” point
of a cluster
4 center-based clusters
18
Types of Clusters: Contiguity-Based
• Contiguous Cluster (Nearest neighbor or
Transitive)
– A cluster is a set of points such that a point in a cluster is closer (or
more similar) to one or more other points in the cluster than to any
point not in the cluster.
8 contiguous clusters
19
Types of Clusters: Density-Based
• Density-based
– A cluster is a dense region of points that is separated from
other regions of high density by regions of low density.
– Used when the clusters are irregular or intertwined, and when
noise and outliers are present.
6 density-based clusters
20
Types of Clusters: Conceptual Clusters
• Shared Property or Conceptual Clusters
– Finds clusters that share some common property or represent a
particular concept.
2 Overlapping Circles
21
Types of Clusters: Objective Function
• Clusters Defined by an Objective Function
– Finds clusters that minimize or maximize an objective function.
– Enumerate all possible ways of dividing the points into clusters
and evaluate the `goodness' of each potential set of clusters by
using the given objective function. (NP Hard)
– Can have global or local objectives.
• Hierarchical clustering algorithms typically have local objectives
• Partitional algorithms typically have global objectives
– A variation of the global objective function approach is to fit the
data to a parameterized model.
• Parameters for the model are determined from the data.
• Mixture models assume that the data is a ‘mixture' of a number of
statistical distributions.
22
Types of Clusters: Objective Function …
• Map the clustering problem to a different domain
and solve a related problem in that domain
– Proximity matrix defines a weighted graph, where the
nodes are the points being clustered, and the
weighted edges represent the proximities between
points
– Clustering is equivalent to breaking the graph into
connected components, one for each cluster.
– Want to minimize the edge weight between clusters
and maximize the edge weight within clusters
23
Requirements of Clustering in Data Mining
• Scalability
• Ability to deal with different types of attributes
• Discovery of clusters with arbitrary shape
• Minimal requirements for domain knowledge to determine input
parameters
• Able to deal with noise and outliers
• Insensitive to order of input records
• High dimensionality
• Incorporation of user-specified constraints
• Interpretability and usability
24
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
Stages in clustering
26
Data Structures
• Data matrix (two modes):
$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$
• Dissimilarity matrix (one mode):
$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$
27
Measure the Quality of Clustering
• Dissimilarity/Similarity metric: Similarity is expressed in terms of a
distance function, which is typically metric:
d(i, j)
• There is a separate “quality” function that measures the “goodness”
of a cluster.
• The definitions of distance functions are usually very different for
interval-scaled, boolean, categorical, ordinal and ratio variables.
• Weights should be associated with different variables based on
applications and data semantics.
• It is hard to define “similar enough” or “good enough”
– the answer is typically highly subjective.
28
Type of data in clustering analysis
• Interval-scaled variables:
• Binary variables:
• Nominal, ordinal, and ratio variables:
• Variables of mixed types:
Interval-valued variables
• Interval-scaled (numeric) variables are continuous measurements
on a roughly linear scale.
Examples
– weight and height, latitude and longitude coordinates (e.g., when
clustering houses), and weather temperature. The measurement
unit used can affect the clustering
– For example, changing measurement units from meters to inches
for height, or from kilograms to pounds for weight, may lead to a
very different clustering structure.
29
Data Standardization
• Expressing a variable in smaller units will lead to a larger
range for that variable, and thus a larger effect on the
resulting clustering structure.
• To help avoid dependence on the choice of measurement
units, the data should be standardized.
• Standardizing measurements attempts to give all
variables an equal weight.
• To standardize measurements, one choice is to convert
the original measurements to unitless variables.
30
31
Interval-valued variables
• Standardize data
– Calculate the mean absolute deviation:
$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right)$
where $m_f = \frac{1}{n}\left(x_{1f} + x_{2f} + \cdots + x_{nf}\right)$
– Calculate the standardized measurement (z-score):
$z_{if} = \dfrac{x_{if} - m_f}{s_f}$
• Using the mean absolute deviation is more robust than using the
standard deviation
32
Data Standardization
33
Data Standardization
34
Similarity and Dissimilarity Between Objects
• Distances are normally used to measure the similarity or
dissimilarity between two data objects
• Some popular ones include: Minkowski distance:
where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-
dimensional data objects, and q is a positive integer
$d(i,j) = \left(|x_{i1}-x_{j1}|^q + |x_{i2}-x_{j2}|^q + \cdots + |x_{ip}-x_{jp}|^q\right)^{1/q}$
• If q = 1, d is the Manhattan distance:
$d(i,j) = |x_{i1}-x_{j1}| + |x_{i2}-x_{j2}| + \cdots + |x_{ip}-x_{jp}|$
35
Similarity and Dissimilarity Between Objects (Cont.)
• If q = 2, d is the Euclidean distance:
$d(i,j) = \sqrt{|x_{i1}-x_{j1}|^2 + |x_{i2}-x_{j2}|^2 + \cdots + |x_{ip}-x_{jp}|^2}$
– Properties
• d(i,j) ≥ 0
• d(i,i) = 0
• d(i,j) = d(j,i)
• d(i,j) ≤ d(i,k) + d(k,j)
• Also, one can use weighted distance, parametric Pearson
product-moment correlation, or other dissimilarity measures
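As a small illustration of these formulas, a Python sketch of the Minkowski distance and its q = 1 and q = 2 special cases (function names and sample points are illustrative):

```python
def minkowski(x, y, q):
    """Minkowski distance between two p-dimensional points x and y."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

def manhattan(x, y):
    return minkowski(x, y, 1)   # q = 1

def euclidean(x, y):
    return minkowski(x, y, 2)   # q = 2

i, j = (1.0, 2.0, 3.0), (4.0, 6.0, 3.0)
print(manhattan(i, j))   # 7.0
print(euclidean(i, j))   # 5.0
```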
Other Dissimilarity Measures
36
37
Similarity and Dissimilarity Between Objects (Cont.)
38
Binary Variables
39
Binary Variables
• A contingency table for binary data
            Object j
              1        0       sum
Object i  1   a        b       a + b
          0   c        d       c + d
        sum   a + c    b + d   p

• Simple matching coefficient (invariant if the binary variable is symmetric):
$d(i, j) = \dfrac{b + c}{a + b + c + d}$
• Jaccard coefficient (noninvariant if the binary variable is asymmetric):
$d(i, j) = \dfrac{b + c}{a + b + c}$
40
Dissimilarity between Binary Variables
• Example
– gender is a symmetric attribute
– the remaining attributes are asymmetric binary
– let the values Y and P be set to 1, and the value N be set to 0
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N
$d(\mathrm{jack}, \mathrm{mary}) = \dfrac{0 + 1}{2 + 0 + 1} = 0.33$
$d(\mathrm{jack}, \mathrm{jim}) = \dfrac{1 + 1}{1 + 1 + 1} = 0.67$
$d(\mathrm{jim}, \mathrm{mary}) = \dfrac{1 + 2}{1 + 1 + 2} = 0.75$
41
Nominal Variables
• A generalization of the binary variable in that it can take
more than 2 states, e.g., red, yellow, blue, green
• Method 1: Simple matching
– m: # of matches, p: total # of variables
$d(i, j) = \dfrac{p - m}{p}$
• Method 2: use a large number of binary variables
– creating a new binary variable for each of the M nominal states
42
43
Ordinal Variables
44
Ordinal Variables
45
Ordinal Variables
46
Ordinal Variables
Example: Dissimilarity between ordinal variables
47
48
Ratio-Scaled Variables
• Ratio-scaled variable: a positive measurement on a
nonlinear scale, approximately at exponential scale,
such as $Ae^{Bt}$ or $Ae^{-Bt}$
• Methods:
– treat them like interval-scaled variables—not a good choice!
(why?—the scale can be distorted)
– apply logarithmic transformation
yif = log(xif)
– treat them as continuous ordinal data treat their rank as interval-
scaled
49
Variables of Mixed Types
• A database may contain all the six types of variables
– symmetric binary, asymmetric binary, nominal, ordinal, interval
and ratio
• One may use a weighted formula to combine their effects:
$d(i, j) = \dfrac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}$
– f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, or $d_{ij}^{(f)} = 1$ otherwise
– f is interval-based: use the normalized distance
– f is ordinal or ratio-scaled
• compute ranks $r_{if}$ and $z_{if} = \dfrac{r_{if} - 1}{M_f - 1}$
• and treat $z_{if}$ as interval-scaled
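A hedged Python sketch of this weighted combination, in the spirit of a Gower-style dissimilarity; the attribute-type labels, helper names, and sample values are assumptions for illustration only:

```python
def mixed_dissimilarity(x, y, types, ranges, levels):
    """d(i,j) = sum_f delta_f * d_f / sum_f delta_f over attributes f.

    types[f] in {"nominal", "interval", "ordinal"};
    ranges[f]: max - min of interval attribute f (for normalization);
    levels[f]: number of ordered states M_f of ordinal attribute f.
    Missing values (None) get delta_f = 0, i.e. they are skipped.
    """
    num = den = 0.0
    for f, (a, b) in enumerate(zip(x, y)):
        if a is None or b is None:
            continue                        # delta_f = 0
        if types[f] == "nominal":
            d_f = 0.0 if a == b else 1.0
        elif types[f] == "interval":
            d_f = abs(a - b) / ranges[f]    # normalized distance
        else:                               # ordinal: ranks mapped to [0, 1]
            za = (a - 1) / (levels[f] - 1)
            zb = (b - 1) / (levels[f] - 1)
            d_f = abs(za - zb)
        num += d_f
        den += 1.0
    return num / den if den else 0.0

# One nominal, one interval attribute (range 50), one ordinal with M_f = 3 states
print(mixed_dissimilarity(["red", 30, 1], ["blue", 45, 3],
                          ["nominal", "interval", "ordinal"],
                          [None, 50, None], [None, None, 3]))
```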
50
Variables of Mixed Types
Example: Dissimilarity between variables of mixed
type
51
52
Example: Dissimilarity between variables of mixed
type
CS590D: Data Mining
Prof. Chris Clifton
February 23, 2006
Clustering
57
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
Taxonomy of Clustering Approaches
59
Partitioning Algorithms: Basic
Concept
• Partitioning method: Construct a partition of a database
D of n objects into a set of k clusters
• Given k, find a partition into k clusters that optimizes the
chosen partitioning criterion
– Global optimal: exhaustively enumerate all partitions
– Heuristic methods: k-means and k-medoids algorithms
– k-means (MacQueen’67): Each cluster is represented by the
center of the cluster
– k-medoids or PAM (Partition around medoids) (Kaufman &
Rousseeuw’87): Each cluster is represented by one of the
objects in the cluster
60
The K-Means Clustering
Method
• Given k, the k-means algorithm is implemented in four
steps:
– Partition objects into k nonempty subsets
– Compute seed points as the centroids of the clusters of the
current partition (the centroid is the center, i.e., mean point, of
the cluster)
– Assign each object to the cluster with the nearest seed point
– Go back to Step 2; stop when no new assignments are made
K-Means
• Step 0: Start with a random partition into K
clusters
• Step 1: Generate a new partition by
assigning each pattern to its closest cluster
center
• Step 2: Compute new cluster centers as the
centroids of the clusters.
• Step 3: Steps 1 and 2 are repeated until
there is no change in the membership (also
cluster centers remain the same)
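A compact Python sketch of these steps for points given as tuples (Euclidean distance; all names and sample data are illustrative):

```python
import random

def kmeans(points, k, max_iter=100):
    """Plain k-means on tuples of numbers: random initial centers,
    assign each point to the nearest center, recompute centroids, repeat."""
    centers = random.sample(points, k)
    for _ in range(max_iter):
        # Step 1: assign each point to its closest center (squared Euclidean)
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[idx].append(p)
        # Step 2: recompute each center as the centroid (mean) of its cluster
        new_centers = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        # Step 3: stop when the centers no longer change
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

pts = [(1, 1), (1.5, 2), (3, 4), (5, 7), (3.5, 5), (4.5, 5), (3.5, 4.5)]
print(kmeans(pts, 2))
```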
K-Means
K-Means – How many K’s ?
K-Means – How many K’s ?
Locating the ‘knee’
The knee of a curve is defined as the point of
maximum curvature.
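One possible way to locate that point numerically is a finite-difference estimate of curvature along the (K, SSE) curve after rescaling both axes to comparable units; this sketch and its sample values are illustrative, not prescribed by the slides:

```python
def knee_index(xs, ys):
    """Index of maximum curvature of y(x), estimated with central finite
    differences after rescaling both axes to [0, 1] so the units cancel."""
    def rescale(vs):
        lo, hi = min(vs), max(vs)
        return [(v - lo) / (hi - lo) for v in vs]
    x, y = rescale(xs), rescale(ys)
    best_i, best_kappa = 1, -1.0
    for i in range(1, len(x) - 1):
        dx, dy = (x[i + 1] - x[i - 1]) / 2, (y[i + 1] - y[i - 1]) / 2
        ddx, ddy = x[i + 1] - 2 * x[i] + x[i - 1], y[i + 1] - 2 * y[i] + y[i - 1]
        kappa = abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
        if kappa > best_kappa:
            best_i, best_kappa = i, kappa
    return best_i

ks  = [1, 2, 3, 4, 5, 6, 7, 8]
sse = [100, 60, 35, 20, 18, 17, 16.5, 16]   # made-up SSE-vs-K values
print(ks[knee_index(ks, sse)])              # prints 4, the bend of this curve
```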
66
The K-Means Clustering
Method
K=2
[Figure: k-means on a 2-D point set with K = 2: arbitrarily choose K objects as the initial cluster centers, assign each object to the most similar center, update the cluster means, and reassign; the last two steps repeat until no object changes cluster.]
67
Comments on the K-Means
Method
• Strength: Relatively efficient: O(tkn), where n is # objects, k is # clusters,
and t is # iterations. Normally, k, t << n.
• Comparison: PAM: O(k(n-k)²), CLARA: O(ks² + k(n-k))
• Comment: Often terminates at a local optimum. The global optimum may be
found using techniques such as: deterministic annealing and genetic
algorithms
• Weakness
– Applicable only when mean is defined, then what about categorical data?
– Need to specify k, the number of clusters, in advance
– Unable to handle noisy data and outliers
– Not suitable to discover clusters with non-convex shapes
68
Variations of the K-Means
Method
• A few variants of the k-means which differ in
– Selection of the initial k means
– Dissimilarity calculations
– Strategies to calculate cluster means
• Handling categorical data: k-modes (Huang’98)
– Replacing means of clusters with modes
– Using new dissimilarity measures to deal with categorical objects
– Using a frequency-based method to update modes of clusters
– A mixture of categorical and numerical data: k-prototype method
69
What is the problem with the
k-Means Method?
• The k-means algorithm is sensitive to outliers !
– Since an object with an extremely large value may substantially distort
the distribution of the data.
• K-Medoids: Instead of taking the mean value of the object in a
cluster as a reference point, medoids can be used, which is the
most centrally located object in a cluster.
70
Importance of Choosing Initial
Centroids
[Figure: k-means on the same 2-D data set over Iterations 1-6, showing how the clusters evolve from the initial centroids.]
71
Solutions to Initial Centroids
Problem
• Multiple runs
– Helps, but probability is not on your side
• Sample and use hierarchical clustering to
determine initial centroids
• Select more than k initial centroids and then
select among these initial centroids
– Select most widely separated
• Postprocessing
• Bisecting K-means
– Not as susceptible to initialization issues
72
Limitations of K-means:
Differing Density
Original Points K-means (3 Clusters)
73
Limitations of K-means: Non-
globular Shapes
Original Points K-means (2 Clusters)
74
The K-Medoids Clustering
Method
• Find representative objects, called medoids, in clusters
• PAM (Partitioning Around Medoids, 1987)
– starts from an initial set of medoids and iteratively replaces one of the
medoids by one of the non-medoids if it improves the total distance of
the resulting clustering
– PAM works effectively for small data sets, but does not scale well for
large data sets
• CLARA (Kaufmann & Rousseeuw, 1990)
• CLARANS (Ng & Han, 1994): Randomized sampling
• Focusing + spatial data structure (Ester et al., 1995)
75
Typical k-medoids algorithm
(PAM)
[Figure: PAM with K = 2 on a 2-D point set: arbitrarily choose k objects as the initial medoids, assign each remaining object to the nearest medoid (total cost = 20), randomly select a nonmedoid object O_random and compute the total cost of swapping (total cost = 26); swap O and O_random if quality is improved, and loop until no change.]
76
PAM (Partitioning Around
Medoids) (1987)
• PAM (Kaufman and Rousseeuw, 1987), built into S-PLUS
• Uses real objects to represent the clusters
– Select k representative objects arbitrarily
– For each pair of non-selected object h and selected object i,
calculate the total swapping cost TCih
– For each pair of i and h,
• If TCih < 0, i is replaced by h
• Then assign each non-selected object to the most similar
representative object
– repeat steps 2-3 until there is no change
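A small Python sketch of the swap evaluation at the heart of PAM: the total swapping cost TC_ih is the change in the sum of distances from non-medoid objects to their nearest medoid when medoid i is replaced by non-medoid h (helper names and sample points are illustrative):

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def total_cost(points, medoids, dist=euclid):
    """Sum of distances from every non-medoid object to its nearest medoid."""
    return sum(min(dist(p, m) for m in medoids)
               for p in points if p not in medoids)

def swap_cost(points, medoids, i, h, dist=euclid):
    """TC_ih: total cost after swapping medoid i for non-medoid h,
    minus the total cost before the swap (negative means the swap helps)."""
    swapped = [h if m == i else m for m in medoids]
    return total_cost(points, swapped, dist) - total_cost(points, medoids, dist)

pts = [(1, 1), (2, 1), (6, 5), (7, 6), (6, 7), (10, 1)]
medoids = [(1, 1), (10, 1)]
print(swap_cost(pts, medoids, i=(10, 1), h=(6, 5)))   # < 0, so PAM would accept it
```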
PAM Clustering: Total swapping
cost $TC_{ih} = \sum_j C_{jih}$
[Figure: four cases of the cost contribution C_jih of a non-selected object j when medoid i is swapped with nonmedoid h (t denotes another current medoid):
• $C_{jih} = 0$
• $C_{jih} = d(j, h) - d(j, i)$
• $C_{jih} = d(j, t) - d(j, i)$
• $C_{jih} = d(j, h) - d(j, t)$]
78
What is the problem with
PAM?
• PAM is more robust than k-means in the presence of
noise and outliers, because a medoid is less influenced
by outliers or other extreme values than a mean
• PAM works efficiently for small data sets but does not
scale well for large data sets.
– O(k(n-k)²) for each iteration, where n is the number of data
points and k is the number of clusters
Sampling-based method: CLARA (Clustering LARge Applications)
79
CLARA (Clustering Large
Applications) (1990)
• CLARA (Kaufmann and Rousseeuw in 1990)
– Built in statistical analysis packages, such as S+
• It draws multiple samples of the data set, applies PAM on each
sample, and gives the best clustering as the output
• Strength: deals with larger data sets than PAM
• Weakness:
– Efficiency depends on the sample size
– A good clustering based on samples will not necessarily represent a
good clustering of the whole data set if the sample is biased
80
CLARANS (“Randomized”
CLARA) (1994)
• CLARANS (A Clustering Algorithm based on Randomized Search)
(Ng and Han’94)
• CLARANS draws sample of neighbors dynamically
• The clustering process can be presented as searching a graph
where every node is a potential solution, that is, a set of k medoids
• If a local optimum is found, CLARANS starts with a new randomly
selected node in search of a new local optimum
• It is more efficient and scalable than both PAM and CLARA
• Focusing techniques and spatial access structures may further
improve its performance (Ester et al.’95)
81
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
Hierarchical Clustering
Agglomerative clustering treats each data point as a singleton cluster, and
then successively merges clusters until all points have been merged into a
single remaining cluster. Divisive clustering works the other way around.
83
Hierarchical Clustering
• Use distance matrix as clustering criteria. This
method does not require the number of clusters
k as an input, but needs a termination condition
[Figure: objects a, b, c, d, e. Agglomerative clustering (AGNES) merges them step by step (steps 0-4) into {a, b}, {d, e}, {c, d, e}, and finally {a, b, c, d, e}; divisive clustering (DIANA) performs the same steps in reverse order.]
84
AGNES (Agglomerative
Nesting)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., Splus
• Use the Single-Link method and the dissimilarity matrix.
• Merge nodes that have the least dissimilarity
• Go on in a non-descending fashion
• Eventually all nodes belong to the same cluster
85
Agglomerative Clustering
Algorithm
• More popular hierarchical clustering technique
• Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
• Key operation is the computation of the proximity of two
clusters
– Different approaches to defining the distance between clusters
distinguish the different algorithms
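A naive Python sketch of this loop using single-link (MIN) proximity between clusters; it is O(n³), stops when k clusters remain rather than running to a single cluster, and is meant only to make the steps concrete (names and data are illustrative):

```python
def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_link(ci, cj):
    """Proximity of two clusters = distance of their closest pair (MIN)."""
    return min(euclid(p, q) for p in ci for q in cj)

def agglomerative(points, k):
    """Start with singleton clusters and repeatedly merge the closest pair."""
    clusters = [[p] for p in points]              # each point is its own cluster
    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                            for j in range(i + 1, len(clusters))),
                   key=lambda ij: single_link(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]   # merge the two closest clusters
        del clusters[j]                           # proximities are recomputed lazily
    return clusters

pts = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
print(agglomerative(pts, 2))   # two clusters of three nearby points each
```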
86
Starting Situation
• Start with clusters of individual points and
a proximity matrix.
[Figure: individual points p1, ..., p12 and the initial point-to-point proximity matrix.]
87
Intermediate Situation
• After some merging steps, we have some clusters
[Figure: clusters C1-C5 and the cluster-level proximity matrix.]
88
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and
update the proximity matrix.
[Figure: clusters C1-C5; C2 and C5 are the closest pair in the proximity matrix.]
89
After Merging
• The question is “How do we update the proximity
matrix?”
[Figure: after merging C2 and C5, the row and column for C2 ∪ C5 in the proximity matrix are marked “?” and must be recomputed.]
90
How to Define Inter-Cluster
Similarity
[Figure: two groups of points p1-p5 and the proximity matrix; how should the similarity between the two clusters be defined?]
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
– Ward’s Method uses squared error
Proximity Matrix
91
How to Define Inter-Cluster
Similarity
p1
p3
p5
p4
p2
p1 p2 p3 p4 p5 . . .
.
.
.
Proximity Matrix
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
– Ward’s Method uses squared error
92
How to Define Inter-Cluster
Similarity
p1
p3
p5
p4
p2
p1 p2 p3 p4 p5 . . .
.
.
.
Proximity Matrix
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
– Ward’s Method uses squared error
93
How to Define Inter-Cluster
Similarity
p1
p3
p5
p4
p2
p1 p2 p3 p4 p5 . . .
.
.
.
Proximity Matrix
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
– Ward’s Method uses squared error
94
How to Define Inter-Cluster
Similarity
p1
p3
p5
p4
p2
p1 p2 p3 p4 p5 . . .
.
.
.
Proximity Matrix
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an
objective function
– Ward’s Method uses squared error
 
108
Hierarchical Clustering: Time and
Space requirements
• O(N²) space, since it uses the proximity matrix.
– N is the number of points.
• O(N³) time in many cases
– There are N steps, and at each step the proximity matrix of
size N² must be updated and searched
– Complexity can be reduced to O(N² log N) time for
some approaches
109
Hierarchical Clustering: Problems
and Limitations
• Once a decision is made to combine two
clusters, it cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one
or more of the following:
– Sensitivity to noise and outliers
– Difficulty handling different sized clusters and
convex shapes
– Breaking large clusters
110
A Dendrogram Shows How the
Clusters are Merged Hierarchically
• Decompose data objects into several levels of
nested partitioning (tree of clusters), called a
dendrogram.
• A clustering of the data objects is obtained by
cutting the dendrogram at the desired level, then
each connected component forms a cluster.
111
DIANA (Divisive Analysis)
• Introduced in Kaufmann and Rousseeuw (1990)
• Implemented in statistical analysis packages, e.g., Splus
• Inverse order of AGNES
• Eventually each node forms a cluster on its own
114
More on Hierarchical
Clustering Methods
• Major weakness of agglomerative clustering methods
– do not scale well: time complexity of at least O(n²), where n is
the number of total objects
– can never undo what was done previously
• Integration of hierarchical with distance-based clustering
– BIRCH (1996): uses CF-tree and incrementally adjusts the
quality of sub-clusters
– CURE (1998): selects well-scattered points from the cluster and
then shrinks them towards the center of the cluster by a specified
fraction
– CHAMELEON (1999): hierarchical clustering using dynamic
modeling
115
BIRCH (1996)
• Birch: Balanced Iterative Reducing and Clustering using Hierarchies,
by Zhang, Ramakrishnan, Livny (SIGMOD’96)
• Incrementally construct a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
– Phase 1: scan DB to build an initial in-memory CF tree (a multi-level
compression of the data that tries to preserve the inherent clustering
structure of the data)
– Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes
of the CF-tree
• Scales linearly: finds a good clustering with a single scan and
improves the quality with a few additional scans
• Weakness: handles only numeric data, and sensitive to the order of
the data record.
116
Clustering Feature: CF = (N, LS, SS)
N: number of data points
LS: $\sum_{i=1}^{N} X_i$, the linear sum of the N points
SS: $\sum_{i=1}^{N} X_i^2$, the square sum of the N points
Example: the five points (3,4), (2,6), (4,5), (4,7), (3,8) give the
clustering feature vector CF = (5, (16,30), (54,190))
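A small Python sketch of a clustering feature for 2-D points; it reproduces the example above and shows the additivity that lets a nonleaf node store the sum of its children's CFs (class and method names are illustrative):

```python
class CF:
    """Clustering feature (N, LS, SS) for 2-D points."""
    def __init__(self, n=0, ls=(0.0, 0.0), ss=(0.0, 0.0)):
        self.n, self.ls, self.ss = n, ls, ss

    def add_point(self, p):
        self.n += 1
        self.ls = (self.ls[0] + p[0], self.ls[1] + p[1])
        self.ss = (self.ss[0] + p[0] ** 2, self.ss[1] + p[1] ** 2)

    def merge(self, other):
        """CF of the union of two subclusters = componentwise sum."""
        return CF(self.n + other.n,
                  tuple(a + b for a, b in zip(self.ls, other.ls)),
                  tuple(a + b for a, b in zip(self.ss, other.ss)))

    def centroid(self):
        return tuple(s / self.n for s in self.ls)

cf = CF()
for p in [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]:
    cf.add_point(p)
print(cf.n, cf.ls, cf.ss)     # 5 (16, 30) (54, 190) -- matches the slide
print(cf.centroid())          # (3.2, 6.0)
```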
117
CF-Tree in BIRCH
• Clustering feature:
– summary of the statistics for a given subcluster: the 0-th, 1st and
2nd moments of the subcluster from the statistical point of view.
– registers crucial measurements for computing cluster and utilizes
storage efficiently
A CF tree is a height-balanced tree that stores the
clustering features for a hierarchical clustering
– A nonleaf node in a tree has descendants or “children”
– The nonleaf nodes store sums of the CFs of their children
• A CF tree has two parameters
– Branching factor: specify the maximum number of children.
– threshold: max diameter of sub-clusters stored at the leaf nodes
CF Tree
[Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6; the root and nonleaf nodes hold entries (CF_i, child_i), and leaf nodes hold CF entries plus prev/next pointers.]
119
CURE (Clustering Using
REpresentatives )
• CURE: proposed by Guha, Rastogi & Shim, 1998
– Stops the creation of a cluster hierarchy if a level consists of k
clusters
– Uses multiple representative points to evaluate the distance
between clusters, adjusts well to arbitrary shaped clusters and
avoids single-link effect
120
Drawbacks of Distance-
Based Method
• Drawbacks of square-error based clustering method
– Consider only one point as representative of a cluster
– Good only for convex shaped, similar size and density, and if k
can be reasonably estimated
121
Cure: The Algorithm
• Draw a random sample s.
• Partition the sample into p partitions, each of size s/p
• Partially cluster each partition into s/pq clusters
• Eliminate outliers
– By random sampling
– If a cluster grows too slowly, eliminate it.
• Cluster the partial clusters.
• Label the data on disk
122
Data Partitioning and
Clustering
– s = 50
– p = 2
– s/p = 25
s/pq = 5
123
Cure: Shrinking
Representative Points
• Shrink the multiple representative points towards the
gravity center by a fraction α.
• Multiple representatives capture the shape of the cluster
124
Clustering Categorical Data:
ROCK
• ROCK: Robust Clustering using linKs,
by S. Guha, R. Rastogi, K. Shim (ICDE’99).
– Use links to measure similarity/proximity
– Not distance based
– Computational complexity: $O(n^2 + n\,m_m m_a + n^2 \log n)$
• Basic ideas:
– Similarity function and neighbors:
$Sim(T_1, T_2) = \dfrac{|T_1 \cap T_2|}{|T_1 \cup T_2|}$
– Let $T_1$ = {1,2,3}, $T_2$ = {3,4,5}:
$Sim(T_1, T_2) = \dfrac{|\{3\}|}{|\{1,2,3,4,5\}|} = \dfrac{1}{5} = 0.2$
126
CHAMELEON (Hierarchical
clustering using dynamic modeling)
• CHAMELEON: by G. Karypis, E.H. Han, and V. Kumar’99
• Measures the similarity based on a dynamic model
– Two clusters are merged only if the interconnectivity and closeness
(proximity) between two clusters are high relative to the internal
interconnectivity of the clusters and closeness of items within the
clusters
– Cure ignores information about interconnectivity of the objects,
Rock ignores information about the closeness of two clusters
• A two-phase algorithm
1. Use a graph partitioning algorithm: cluster objects into a large number
of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm: find the
genuine clusters by repeatedly combining these sub-clusters
127
Overall Framework of
CHAMELEON
Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters
129
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
130
Density-Based Clustering
Methods
• Clustering based on density (local cluster criterion), such
as density-connected points
• Major features:
– Discover clusters of arbitrary shape
– Handle noise
– One scan
– Need density parameters as termination condition
• Several interesting studies:
– DBSCAN: Ester, et al. (KDD’96)
– OPTICS: Ankerst, et al (SIGMOD’99).
– DENCLUE: Hinneburg & D. Keim (KDD’98)
– CLIQUE: Agrawal, et al. (SIGMOD’98)
131
Density Concepts
• Core object (CO)–object with at least ‘M’ objects within a
radius ‘E-neighborhood’
• Directly density reachable (DDR)–x is CO, y is in x’s ‘E-
neighborhood’
• Density reachable–there exists a chain of DDR objects
from x to y
• Density based cluster–density connected objects
maximum w.r.t. reachability
132
Density-Based Clustering:
Background
• Two parameters:
– Eps: Maximum radius of the neighbourhood
– MinPts: Minimum number of points in an Eps-neighbourhood of
that point
• NEps(p): {q belongs to D | dist(p,q) <= Eps}
• Directly density-reachable: A point p is directly density-
reachable from a point q wrt. Eps, MinPts if
– 1) p belongs to NEps(q)
– 2) core point condition:
|NEps (q)| >= MinPts
p
q
MinPts = 5
Eps = 1 cm
133
Density-Based Clustering:
Background (II)
• Density-reachable:
– A point p is density-reachable from
a point q wrt. Eps, MinPts if there is
a chain of points p1, …, pn, p1 = q, pn
= p such that pi+1 is directly density-
reachable from pi
• Density-connected
– A point p is density-connected to a
point q wrt. Eps, MinPts if there is a
point o such that both, p and q are
density-reachable from o wrt. Eps
and MinPts.
134
DBSCAN: Density Based Spatial
Clustering of Applications with Noise
• Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
• Discovers clusters of arbitrary shape in spatial
databases with noise
[Figure: core, border, and outlier points for Eps = 1 cm, MinPts = 5.]
136
DBSCAN
• DBSCAN is a density-based algorithm.
– Density = number of points within a specified radius
(Eps)
– A point is a core point if it has more than a specified
number of points (MinPts) within Eps
• These are points that are at the interior of a
cluster
– A border point has fewer than MinPts within Eps,
but is in the neighborhood of a core point
– A noise point is any point that is not a core point or
a border point.
137
DBSCAN: Core, Border, and
Noise Points
138
DBSCAN Algorithm
• Eliminate noise points
• Perform clustering on the remaining points
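A compact Python sketch of this procedure using the Eps-neighborhood and core-point definitions above; it is quadratic and intended only for illustration (names and sample points are assumptions):

```python
def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:          # not a core point: tentatively noise
            labels[i] = -1
            continue
        labels[i] = cluster_id            # start a new cluster from this core point
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id    # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_neigh = neighbors(j)
            if len(j_neigh) >= min_pts:   # j is also a core point: expand through it
                queue.extend(j_neigh)
        cluster_id += 1
    return labels

pts = [(1, 1), (1.2, 1.1), (0.9, 1.0), (5, 5), (5.1, 5.2), (5.2, 4.9), (9, 1)]
print(dbscan(pts, eps=0.5, min_pts=2))   # two clusters plus one noise point
```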
139
DBSCAN: Core, Border and
Noise Points
Original Points Point types: core,
border and noise
Eps = 10, MinPts = 4
140
When DBSCAN Works Well
Original Points Clusters
• Resistant to Noise
• Can handle clusters of different shapes and sizes
141
When DBSCAN Does NOT
Work Well
Original Points
(MinPts=4, Eps=9.75).
(MinPts=4, Eps=9.92)
• Varying densities
• High-dimensional data
142
DBSCAN: Determining EPS
and MinPts
• Idea is that for points in a cluster, their kth nearest
neighbors are at roughly the same distance
• Noise points have the kth nearest neighbor at farther
distance
• So, plot sorted distance of every point to its kth nearest
neighbor
143
OPTICS: A Cluster-Ordering
Method (1999)
• OPTICS: Ordering Points To Identify the
Clustering Structure
– Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
– Produces a special order of the database w.r.t. its
density-based clustering structure
– This cluster ordering contains information equivalent to the
density-based clusterings corresponding to a broad
range of parameter settings
– Good for both automatic and interactive cluster
analysis, including finding intrinsic clustering structure
– Can be represented graphically or using visualization
techniques
153
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
154
Cluster Validity
• For supervised classification we have a variety of
measures to evaluate how good our model is
– Accuracy, precision, recall
• For cluster analysis, the analogous question is how to
evaluate the “goodness” of the resulting clusters?
• But “clusters are in the eye of the beholder”!
• Then why do we want to evaluate them?
– To avoid finding patterns in noise
– To compare clustering algorithms
– To compare two sets of clusters
– To compare two clusters
155
Clusters found in Random Data
[Figure: 100 random points in the unit square (“Random Points”) and the clusters imposed on them by DBSCAN, K-means, and Complete Link.]
156
Different Aspects of Cluster
Validation
1. Determining the clustering tendency of a set of data, i.e.,
distinguishing whether non-random structure actually exists in the
data.
2. Comparing the results of a cluster analysis to externally known
results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data
without reference to external information.
- Use only the data
4. Comparing the results of two different sets of cluster analyses to
determine which is better.
5. Determining the ‘correct’ number of clusters.
For 2, 3, and 4, we can further distinguish whether we want to
evaluate the entire clustering or just individual clusters.
157
Measures of Cluster Validity
• Numerical measures that are applied to judge various aspects
of cluster validity are classified into the following three types.
– External Index: Used to measure the extent to which cluster
labels match externally supplied class labels.
• Entropy
– Internal Index: Used to measure the goodness of a clustering
structure without respect to external information.
• Sum of Squared Error (SSE)
– Relative Index: Used to compare two different clusterings or
clusters.
• Often an external or internal index is used for this function, e.g., SSE or
entropy
• Sometimes these are referred to as criteria instead of indices
– However, sometimes criterion is the general strategy and index is the
numerical measure that implements the criterion.
158
Measuring Cluster Validity Via
Correlation
• Two matrices
– Proximity Matrix
– “Incidence” Matrix
• One row and one column for each data point
• An entry is 1 if the associated pair of points belong to the same cluster
• An entry is 0 if the associated pair of points belongs to different clusters
• Compute the correlation between the two matrices
– Since the matrices are symmetric, only the correlation between
n(n-1) / 2 entries needs to be calculated.
• High correlation indicates that points that belong to the
same cluster are close to each other.
• Not a good measure for some density or contiguity
based clusters.
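A Python sketch of this check: build the proximity (distance) and incidence values for all point pairs and compute the Pearson correlation over the n(n-1)/2 upper-triangle entries (names and sample data are illustrative; with distances as proximities, a good clustering gives a strongly negative correlation):

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

def cluster_correlation(points, labels):
    """Correlation between proximity (distance) and incidence matrices,
    using only the upper-triangle entries."""
    prox, incid = [], []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
            prox.append(d)
            incid.append(1.0 if labels[i] == labels[j] else 0.0)
    return pearson(prox, incid)

pts    = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
labels = [0, 0, 0, 1, 1, 1]
print(cluster_correlation(pts, labels))   # strongly negative for a good clustering
```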
159
Measuring Cluster Validity Via
Correlation
• Correlation of incidence and proximity matrices
for the K-means clusterings of the following two
data sets.
[Figure: two data sets in the unit square and their K-means clusterings; the correlations between the incidence and proximity matrices are Corr = -0.9235 and Corr = -0.5810, respectively.]
160
Using Similarity Matrix for
Cluster Validation
• Order the similarity matrix with respect to cluster
labels and inspect visually.
[Figure: a well-separated data set and its similarity matrix with points ordered by cluster label; clear blocks appear along the diagonal.]
161
Using Similarity Matrix for
Cluster Validation
• Clusters in random data are not so crisp
[Figure: the similarity matrix, ordered by DBSCAN cluster labels, for random data; the block structure is much weaker.]
162
Using Similarity Matrix for Cluster Validation
• Clusters in random data are not so crisp
[Figure: the similarity matrix, ordered by K-means cluster labels, for random data.]
164
Using Similarity Matrix for
Cluster Validation
[Figure: DBSCAN clusters labeled 1-7 in a more complicated data set, with the corresponding similarity matrix ordered by cluster label.]
165
Internal Measures: SSE
• Clusters in more complicated figures aren’t well separated
• Internal Index: Used to measure the goodness of a clustering
structure without respect to external information
– SSE
• SSE is good for comparing two clusterings or two clusters (average
SSE).
• Can also be used to estimate the number of clusters
[Figure: a sample 2-D data set and its SSE plotted against the number of clusters K (K = 2 to 30); knees in the curve suggest candidate values of K.]
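A short sketch of using SSE this way: compute the SSE of a candidate clustering and compare it across different numbers of clusters. The clustering routine is left abstract (the commented cluster_with_k is a hypothetical placeholder, and the data are made up):

```python
def sse(clusters):
    """Sum of squared distances of points to their cluster centroid."""
    total = 0.0
    for cl in clusters:
        centroid = tuple(sum(c) / len(cl) for c in zip(*cl))
        total += sum(sum((a - b) ** 2 for a, b in zip(p, centroid)) for p in cl)
    return total

# Hypothetical workflow: cluster the data for each K and compare SSE values.
# `cluster_with_k` stands for any partitioning algorithm (e.g., the k-means
# sketch earlier in these notes); it is not a library function.
# for k in range(2, 11):
#     clusters = cluster_with_k(points, k)
#     print(k, sse(clusters))

two_clusters = [[(1, 1), (1, 2), (2, 1)], [(8, 8), (9, 8), (8, 9)]]
one_cluster  = [[(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]]
print(sse(two_clusters), sse(one_cluster))   # the 2-cluster split has much lower SSE
```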
166
Internal Measures: SSE
• SSE curve for a more complicated data set
[Figure: a more complicated data set with regions labeled 1-7, and the SSE of the clusters found using K-means.]
167
Framework for Cluster Validity
• Need a framework to interpret any measure.
– For example, if our measure of evaluation has the value, 10, is that
good, fair, or poor?
• Statistics provide a framework for cluster validity
– The more “atypical” a clustering result is, the more likely it represents
valid structure in the data
– Can compare the values of an index that result from random data or
clusterings to those of a clustering result.
• If the value of the index is unlikely, then the cluster results are valid
– These approaches are more complicated and harder to understand.
• For comparing the results of two different sets of cluster
analyses, a framework is less necessary.
– However, there is the question of whether the difference between two
index values is significant
168
Statistical Framework for SSE
• Example
– Compare SSE of 0.005 against three clusters in random data
– Histogram shows SSE of three clusters in 500 sets of random
data points of size 100 distributed over the range 0.2 – 0.8 for x
and y values
[Figure: histogram of the SSE of three clusters found in 500 random data sets (SSE values roughly 0.016-0.034), shown next to the clustered data set whose SSE is 0.005.]
169
Statistical Framework for
Correlation
• Correlation of incidence and proximity matrices for
the K-means clusterings of the following two data
sets.
[Figure: the same two data sets and K-means clusterings as before, with Corr = -0.9235 and Corr = -0.5810, respectively.]
170
Internal Measures: Cohesion and
Separation
• Cluster Cohesion: Measures how closely related
are objects in a cluster
– Example: SSE
• Cluster Separation: Measure how distinct or
well-separated a cluster is from other clusters
• Example: Squared Error
– Cohesion is measured by the within-cluster sum of squares (SSE):
$WSS = \sum_i \sum_{x \in C_i} (x - m_i)^2$
– Separation is measured by the between-cluster sum of squares:
$BSS = \sum_i |C_i| \, (m - m_i)^2$
– where |Cᵢ| is the size of cluster i, mᵢ is its centroid, and m is the overall mean
171
Internal Measures: Cohesion
and Separation
• Example: SSE
– BSS + WSS = constant
Example: data points 1, 2, 4, 5 on a line; overall mean m = 3, cluster centroids m1 = 1.5 and m2 = 4.5.

K=2 clusters ({1, 2} and {4, 5}):
WSS = (1 - 1.5)² + (2 - 1.5)² + (4 - 4.5)² + (5 - 4.5)² = 1
BSS = 2 × (3 - 1.5)² + 2 × (4.5 - 3)² = 9
Total = 1 + 9 = 10

K=1 cluster:
WSS = (1 - 3)² + (2 - 3)² + (4 - 3)² + (5 - 3)² = 10
BSS = 4 × (3 - 3)² = 0
Total = 10 + 0 = 10
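The arithmetic above can be checked with a few lines of Python (the 1-D data points are the ones from the example):

```python
def wss_bss(values, clusters):
    """Within- and between-cluster sums of squares for 1-D data."""
    m = sum(values) / len(values)                 # overall mean
    wss = bss = 0.0
    for cl in clusters:
        mi = sum(cl) / len(cl)                    # cluster centroid
        wss += sum((x - mi) ** 2 for x in cl)
        bss += len(cl) * (m - mi) ** 2
    return wss, bss

data = [1, 2, 4, 5]
print(wss_bss(data, [data]))                  # K=1: (10.0, 0.0)
print(wss_bss(data, [[1, 2], [4, 5]]))        # K=2: (1.0, 9.0)
```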
172
Internal Measures: Cohesion and
Separation
• A proximity graph based approach can also be used for
cohesion and separation.
– Cluster cohesion is the sum of the weight of all links within a cluster.
– Cluster separation is the sum of the weights between nodes in the
cluster and nodes outside the cluster.
cohesion separation
173
Internal Measures: Silhouette
Coefficient
• Silhouette Coefficient combine ideas of both cohesion and
separation, but for individual points, as well as clusters and
clusterings
• For an individual point, i
– Calculate a = average distance of i to the points in its cluster
– Calculate b = min (average distance of i to points in another cluster)
– The silhouette coefficient for a point is then given by
s = 1 - a/b if a < b (or s = b/a - 1 if a ≥ b, which is not the usual case)
– Typically between 0 and 1.
– The closer to 1, the better.
• Can calculate the average silhouette width for a cluster or a
clustering
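A direct Python sketch of the point-level silhouette coefficient as defined above (distance function, names, and sample data are illustrative):

```python
def silhouette(point_idx, points, labels):
    """Silhouette coefficient of one point: a = average distance to its own
    cluster, b = min over other clusters of the average distance to that cluster."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    own = labels[point_idx]
    by_cluster = {}
    for i, lab in enumerate(labels):
        if i != point_idx:
            by_cluster.setdefault(lab, []).append(dist(points[point_idx], points[i]))
    a = sum(by_cluster[own]) / len(by_cluster[own])
    b = min(sum(d) / len(d) for lab, d in by_cluster.items() if lab != own)
    return 1 - a / b if a < b else b / a - 1

pts    = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
labels = [0, 0, 0, 1, 1, 1]
print(silhouette(0, pts, labels))          # close to 1 for a well-placed point
avg = sum(silhouette(i, pts, labels) for i in range(len(pts))) / len(pts)
print(avg)                                 # average silhouette width of the clustering
```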
174
External Measures of Cluster Validity:
Entropy and Purity
175
Final Comment on Cluster
Validity
“The validation of clustering structures is the
most difficult and frustrating part of cluster
analysis.
Without a strong effort in this direction, cluster
analysis will remain a black art accessible only
to those true believers who have experience and
great courage.”
Algorithms for Clustering Data, Jain and Dubes
CS590D: Data Mining
Prof. Chris Clifton
March 2, 2006
Clustering
177
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
178
Grid-Based Clustering
Method
• Using multi-resolution grid data structure
• Several interesting methods
– STING (a STatistical INformation Grid approach) by Wang, Yang
and Muntz (1997)
– WaveCluster by Sheikholeslami, Chatterjee, and Zhang
(VLDB’98)
• A multi-resolution clustering approach using wavelet method
– CLIQUE: Agrawal, et al. (SIGMOD’98)
179
STING: A Statistical
Information Grid Approach
• Wang, Yang and Muntz (VLDB’97)
• The spatial area is divided into rectangular cells
• There are several levels of cells corresponding to
different levels of resolution
180
STING: A Statistical
Information Grid Approach (2)
– Each cell at a high level is partitioned into a number of smaller
cells in the next lower level
– Statistical info of each cell is calculated and stored beforehand
and is used to answer queries
– Parameters of higher level cells can be easily calculated from
parameters of lower level cell
• count, mean, s, min, max
• type of distribution—normal, uniform, etc.
– Use a top-down approach to answer spatial data queries
– Start from a pre-selected layer—typically with a small number of
cells
– For each cell in the current level compute the confidence interval
181
STING: A Statistical
Information Grid Approach (3)
– Remove the irrelevant cells from further consideration
– When finish examining the current layer, proceed to the next
lower level
– Repeat this process until the bottom layer is reached
– Advantages:
• Query-independent, easy to parallelize, incremental update
• O(K), where K is the number of grid cells at the lowest level
– Disadvantages:
• All the cluster boundaries are either horizontal or vertical, and
no diagonal boundary is detected
182
WaveCluster (1998)
• Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
• A multi-resolution clustering approach which applies
wavelet transform to the feature space
– A wavelet transform is a signal processing technique that
decomposes a signal into different frequency sub-bands.
• Both grid-based and density-based
• Input parameters:
– # of grid cells for each dimension
– the wavelet, and the # of applications of wavelet transform.
183
What is Wavelet (1)?
184
WaveCluster (1998)
• How to apply wavelet transform to find clusters
– Summarizes the data by imposing a multidimensional
grid structure onto the data space
– These multidimensional spatial data objects are
represented in an n-dimensional feature space
– Apply the wavelet transform on the feature space to find the
dense regions in the feature space
– Apply the wavelet transform multiple times, which results in
clusters at different scales from fine to coarse
185
Wavelet Transform
• Decomposes a signal into different
frequency subbands. (can be applied to n-
dimensional signals)
• Data are transformed to preserve relative
distance between objects at different
levels of resolution.
• Allows natural clusters to become more
distinguishable
186
What Is Wavelet (2)?
187
Quantization
188
Transformation
189
WaveCluster (1998)
• Why is wavelet transformation useful for clustering
– Unsupervised clustering
It uses hat-shaped filters to emphasize regions where points
cluster, while simultaneously suppressing weaker information at
their boundaries
– Effective removal of outliers
– Multi-resolution
– Cost efficiency
• Major features:
– Complexity O(N)
– Detect arbitrary shaped clusters at different scales
– Not sensitive to noise, not sensitive to input order
– Only applicable to low dimensional data
190
CLIQUE (Clustering In QUEst)
• Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98).
• Automatically identifying subspaces of a high dimensional data
space that allow better clustering than original space
• CLIQUE can be considered as both density-based and grid-based
– It partitions each dimension into the same number of
equal-length intervals
– It partitions an m-dimensional data space into non-overlapping
rectangular units
– A unit is dense if the fraction of total data points contained in the unit
exceeds the input model parameter
– A cluster is a maximal set of connected dense units within a subspace
191
CLIQUE: The Major Steps
• Partition the data space and find the number of points
that lie inside each cell of the partition.
• Identify the subspaces that contain clusters using the
Apriori principle
• Identify clusters:
– Determine dense units in all subspaces of interests
– Determine connected dense units in all subspaces of interests.
• Generate minimal description for the clusters
– Determine maximal regions that cover a cluster of connected
dense units for each cluster
– Determination of minimal cover for each cluster
[Figure: CLIQUE example with grids over Salary (10,000) vs. age (20-60) and Vacation (week) vs. age, using density threshold τ = 3; dense units found in the two 2-D subspaces are intersected over the age range 30-50 to locate a candidate dense region in the higher-dimensional subspace.]
193
Strength and Weakness of
CLIQUE
• Strength
– It automatically finds subspaces of the highest
dimensionality such that high density clusters exist in
those subspaces
– It is insensitive to the order of records in input and
does not presume some canonical data distribution
– It scales linearly with the size of input and has good
scalability as the number of dimensions in the data
increases
• Weakness
– The accuracy of the clustering result may be
degraded at the expense of simplicity of the method
194
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
195
Model-Based Clustering
Methods
• Attempt to optimize the fit between the data and some
mathematical model
• Statistical and AI approach
– Conceptual clustering
• A form of clustering in machine learning
• Produces a classification scheme for a set of unlabeled objects
• Finds characteristic description for each concept (class)
– COBWEB (Fisher’87)
• A popular and simple method of incremental conceptual learning
• Creates a hierarchical clustering in the form of a classification tree
• Each node refers to a concept and contains a probabilistic
description of that concept
196
COBWEB Clustering
Method
A classification tree
197
More on Statistical-Based
Clustering
• Limitations of COBWEB
– The assumption that the attributes are independent of each
other is often too strong because correlation may exist
– Not suitable for clustering large database data – skewed tree
and expensive probability distributions
• CLASSIT
– an extension of COBWEB for incremental clustering of
continuous data
– suffers similar problems as COBWEB
• AutoClass (Cheeseman and Stutz, 1996)
– Uses Bayesian statistical analysis to estimate the number of
clusters
– Popular in industry
198
Other Model-Based
Clustering Methods
• Neural network approaches
– Represent each cluster as an exemplar, acting as a
“prototype” of the cluster
– New objects are distributed to the cluster whose
exemplar is the most similar according to some
distance measure
• Competitive learning
– Involves a hierarchical architecture of several units
(neurons)
– Neurons compete in a “winner-takes-all” fashion for
the object currently being presented
199
Model-Based Clustering
Methods
200
Self-organizing feature
maps (SOMs)
• Clustering is also performed by having several
units competing for the current object
• The unit whose weight vector is closest to the
current object wins
• The winner and its neighbors learn by having
their weights adjusted
• SOMs are believed to resemble processing that
can occur in the brain
• Useful for visualizing high-dimensional data in 2-
or 3-D space
201
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
202
What Is Outlier Discovery?
• What are outliers?
– A set of objects that are considerably dissimilar from the
remainder of the data
– Example: Sports: Michael Jordan, Wayne Gretzky, ...
• Problem
– Find top n outlier points
• Applications:
– Credit card fraud detection
– Telecom fraud detection
– Customer segmentation
– Medical analysis
Outlier Discovery:
Statistical
Approaches
• Assume a model of the underlying distribution that
generates the data set (e.g., a normal distribution)
• Use discordancy tests depending on
– data distribution
– distribution parameter (e.g., mean, variance)
– number of expected outliers
• Drawbacks
– most tests are for single attribute
– In many cases, data distribution may not be known
CS590D: Data Mining
Prof. Chris Clifton
March 4, 2006
Clustering
205
Outlier Discovery: Distance-
Based Approach
• Introduced to counter the main limitations
imposed by statistical methods
– We need multi-dimensional analysis without knowing
data distribution.
• Distance-based outlier: A DB(p, D)-outlier is an
object O in a dataset T such that at least a
fraction p of the objects in T lies at a distance
greater than D from O
• Algorithms for mining distance-based outliers
– Index-based algorithm
– Nested-loop algorithm
– Cell-based algorithm
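A naive nested-loop Python sketch of this definition: flag O as a DB(p, D)-outlier when at least a fraction p of the objects in T lies at a distance greater than D from O (the parameter values and sample points are illustrative):

```python
def db_outliers(points, p, d):
    """Return the points O such that at least a fraction p of the data set
    lies at distance greater than d from O (naive O(n^2) nested loop)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    outliers = []
    n = len(points)
    for o in points:
        far = sum(1 for q in points if q is not o and dist(o, q) > d)
        if far >= p * n:                  # fraction p of the objects in T
            outliers.append(o)
    return outliers

pts = [(1, 1), (1, 2), (2, 1), (2, 2), (1.5, 1.5), (9, 9)]
print(db_outliers(pts, p=0.8, d=3.0))    # [(9, 9)]
```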
206
Outlier Discovery:
Deviation-Based Approach
• Identifies outliers by examining the main characteristics
of objects in a group
• Objects that “deviate” from this description are
considered outliers
• sequential exception technique
– simulates the way in which humans can distinguish unusual
objects from among a series of supposedly like objects
• OLAP data cube technique
– uses data cubes to identify regions of anomalies in large
multidimensional data
207
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
208
Problems and Challenges
• Considerable progress has been made in scalable clustering
methods
– Partitioning: k-means, k-medoids, CLARANS
– Hierarchical: BIRCH, CURE
– Density-based: DBSCAN, CLIQUE, OPTICS
– Grid-based: STING, WaveCluster
– Model-based: Autoclass, Denclue, Cobweb
• Current clustering techniques do not address all the requirements
adequately
• Constraint-based clustering analysis: Constraints exist in data space
(bridges and highways) or in user queries
209
Constraint-Based Clustering
Analysis
• Clustering analysis: less parameters but more user-
desired constraints, e.g., an ATM allocation problem
210
Clustering With Obstacle
Objects
Taking obstacles into account
Not Taking obstacles into account
211
Summary
• Cluster analysis groups objects based on their similarity
and has wide applications
• Measure of similarity can be computed for various types
of data
• Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
• Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
• There are still lots of research issues on cluster analysis,
such as constraint-based clustering
212
References (1)
• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high
dimensional data for data mining applications. SIGMOD'98
• M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
• M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. Optics: Ordering points to identify the
clustering structure, SIGMOD’99.
• P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
• M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in
large spatial databases. KDD'96.
• M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing
techniques for efficient class identification. SSD'95.
• D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-
172, 1987.
• D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on
dynamic systems. In Proc. VLDB’98.
• S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large databases.
SIGMOD'98.
• A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
213
References (2)
• L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis.
John Wiley & Sons, 1990.
• E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB’98.
• G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering.
John Wiley and Sons, 1988.
• P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
• R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94.
• E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets.
Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
• G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering
approach for very large spatial databases. VLDB’98.
• W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid Approach to Spatial Data Mining. VLDB'97.
• T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: an efficient data clustering method for very
large databases. SIGMOD'96.

Cluster_saumitra.ppt

  • 1.
  • 2.
    2 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 3.
    3 What is ClusterAnalysis? • Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups Inter-cluster distances are maximized Intra-cluster distances are minimized
  • 4.
    4 What is ClusterAnalysis? • Cluster: a collection of data objects – Similar to one another within the same cluster – Dissimilar to the objects in other clusters • Cluster analysis – Grouping a set of data objects into clusters • Clustering is unsupervised classification: no predefined classes • Typical applications – As a stand-alone tool to get insight into data distribution – As a preprocessing step for other algorithms
  • 5.
    Why do wecluster? • Clustering : given a collection of data objects group them so that – Similar to one another within the same cluster – Dissimilar to the objects in other clusters • Clustering results are used: – As a stand-alone tool to get insight into data distribution • Visualization of clusters may unveil important information – As a preprocessing step for other algorithms • Efficient indexing or compression often relies on clustering 5
  • 6.
    6 General Applications ofClustering • Pattern Recognition (unsupervised) • Spatial Data Analysis – create thematic maps in GIS by clustering feature spaces – detect spatial clusters and explain them in spatial data mining • Image Processing • Economic Science (especially market research) • WWW – Document classification – Cluster Weblog data to discover groups of similar access patterns
  • 7.
    7 Examples of ClusteringApplications • Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs • Land use: Identification of areas of similar land use in an earth observation database • Insurance: Identifying groups of motor insurance policy holders with a high average claim cost • City-planning: Identifying groups of houses according to their house type, value, and geographical location • Earth-quake studies: Observed earth quake epicenters should be clustered along continent faults
  • 8.
    8 Notion of aCluster can be Ambiguous How many clusters? Four Clusters Two Clusters Six Clusters
  • 9.
    9 What Is GoodClustering? • A good clustering method will produce high quality clusters with – high intra-class similarity – low inter-class similarity • The quality of a clustering result depends on both the similarity measure used by the method and its implementation. • The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.
  • 10.
    The clustering task Groupobservations into groups so that the observations belonging in the same group are similar, whereas observations in different groups are different • Basic questions: – What does “similar” mean – What is a good partition of the objects? i.e., how is the quality of a solution measured – How to find a good partition of the observations 10
  • 11.
    11 Types of Clusterings: •A clustering is a set of clusters • Important distinction between hierarchical and partitional sets of clusters • Partitional Clustering – A division data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset • Hierarchical clustering – A set of nested clusters organized as a hierarchical tree
  • 12.
  • 13.
    13 Hierarchical Clustering p4 p1 p3 p2 p4 p1 p3 p2 p4 p1 p2p3 p4 p1 p2 p3 Traditional Hierarchical Clustering Non-traditional Hierarchical Clustering Non-traditional Dendrogram Traditional Dendrogram
  • 14.
    14 Other Distinctions BetweenSets of Clusters • Exclusive versus non-exclusive – In non-exclusive clusterings, points may belong to multiple clusters. – Can represent multiple classes or ‘border’ points • Fuzzy versus non-fuzzy – In fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1 – Weights must sum to 1 – Probabilistic clustering has similar characteristics • Partial versus complete – In some cases, we only want to cluster some of the data • Heterogeneous versus homogeneous – Cluster of widely different sizes, shapes, and densities
  • 15.
    15 Types of Clusters •Well-separated clusters • Center-based clusters • Contiguous clusters • Density-based clusters • Property or Conceptual • Described by an Objective Function
  • 16.
    16 Types of Clusters:Well-Separated • Well-Separated Clusters: – A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster. 3 well-separated clusters
  • 17.
    17 Types of Clusters:Center-Based • Center-based – A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of a cluster, than to the center of any other cluster – The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster 4 center-based clusters
  • 18.
    18 Types of Clusters:Contiguity-Based • Contiguous Cluster (Nearest neighbor or Transitive) – A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster. 8 contiguous clusters
  • 19.
    19 Types of Clusters:Density-Based • Density-based – A cluster is a dense region of points, which is separated by low- density regions, from other regions of high density. – Used when the clusters are irregular or intertwined, and when noise and outliers are present. 6 density-based clusters
  • 20.
    20 Types of Clusters:Conceptual Clusters • Shared Property or Conceptual Clusters – Finds clusters that share some common property or represent a particular concept. 2 Overlapping Circles
  • 21.
    21 Types of Clusters:Objective Function • Clusters Defined by an Objective Function – Finds clusters that minimize or maximize an objective function. – Enumerate all possible ways of dividing the points into clusters and evaluate the `goodness' of each potential set of clusters by using the given objective function. (NP Hard) – Can have global or local objectives. • Hierarchical clustering algorithms typically have local objectives • Partitional algorithms typically have global objectives – A variation of the global objective function approach is to fit the data to a parameterized model. • Parameters for the model are determined from the data. • Mixture models assume that the data is a ‘mixture' of a number of statistical distributions.
  • 22.
    22 Types of Clusters:Objective Function … • Map the clustering problem to a different domain and solve a related problem in that domain – Proximity matrix defines a weighted graph, where the nodes are the points being clustered, and the weighted edges represent the proximities between points – Clustering is equivalent to breaking the graph into connected components, one for each cluster. – Want to minimize the edge weight between clusters and maximize the edge weight within clusters
  • 23.
    23 Requirements of Clusteringin Data Mining • Scalability • Ability to deal with different types of attributes • Discovery of clusters with arbitrary shape • Minimal requirements for domain knowledge to determine input parameters • Able to deal with noise and outliers • Insensitive to order of input records • High dimensionality • Incorporation of user-specified constraints • Interpretability and usability
  • 24.
    24 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 25.
  • 26.
    26 Data Structures • Datamatrix – (two modes) • Dissimilarity matrix – (one mode)                   np x ... nf x ... n1 x ... ... ... ... ... ip x ... if x ... i1 x ... ... ... ... ... 1p x ... 1f x ... 11 x                 0 ... ) 2 , ( ) 1 , ( : : : ) 2 , 3 ( ) ... n d n d 0 d d(3,1 0 d(2,1) 0
  • 27.
    27 Measure the Qualityof Clustering • Dissimilarity/Similarity metric: Similarity is expressed in terms of a distance function, which is typically metric: d(i, j) • There is a separate “quality” function that measures the “goodness” of a cluster. • The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal and ratio variables. • Weights should be associated with different variables based on applications and data semantics. • It is hard to define “similar enough” or “good enough” – the answer is typically highly subjective.
  • 28.
    28 Type of datain clustering analysis • Interval-scaled variables: • Binary variables: • Nominal, ordinal, and ratio variables: • Variables of mixed types:
  • 29.
    Interval-valued variables • Interval-scaled(numeric) variables are continuous measurements of a roughly linear scale. Examples – weight and height, latitude and longitude coordinates (e.g., when clustering houses), and weather temperature. The measurement unit used can affect the clustering – For example, changing measurement units from meters to inches for height, or from kilograms to pounds for weight, may lead to a very different clustering structure. 29
  • 30.
    Data Standardization • Expressinga variable in smaller units will lead to a larger range for that variable, and thus a larger effect on the resulting clustering structure. • To help avoid dependence on the choice of measurement units, the data should be standardized. • Standardizing measurements attempts to give all variables an equal weight. • To standardize measurements, one choice is to convert the original measurements to unitless variables. 30
  • 31.
    31 Interval-valued variables • Standardizedata – Calculate the mean absolute deviation: where – Calculate the standardized measurement (z-score) • Using mean absolute deviation is more robust than using standard deviation . ) ... 2 1 1 nf f f f x x (x n m     |) | ... | | | (| 1 2 1 f nf f f f f f m x m x m x n s        f f if if s m x z  
  • 32.
  • 33.
  • 34.
    34 Similarity and DissimilarityBetween Objects • Distances are normally used to measure the similarity or dissimilarity between two data objects • Some popular ones include: Minkowski distance: where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p- dimensional data objects, and q is a positive integer • If q = 1, d is Manhattan distance q q p p q q j x i x j x i x j x i x j i d ) | | ... | | | (| ) , ( 2 2 1 1        | | ... | | | | ) , ( 2 2 1 1 p p j x i x j x i x j x i x j i d       
  • 35.
    35 Similarity and DissimilarityBetween Objects (Cont.) • If q = 2, d is Euclidean distance: – Properties • d(i,j)  0 • d(i,i) = 0 • d(i,j) = d(j,i) • d(i,j)  d(i,k) + d(k,j) • Also, one can use weighted distance, parametric Pearson product moment correlation, or other disimilarity measures ) | | ... | | | (| ) , ( 2 2 2 2 2 1 1 p p j x i x j x i x j x i x j i d       
  • 36.
  • 37.
    37 Similarity and DissimilarityBetween Objects (Cont.)
  • 38.
  • 39.
    39 Binary Variables • Acontingency table for binary data • Simple matching coefficient (invariant, if the binary variable is symmetric): • Jaccard coefficient (noninvariant if the binary variable is asymmetric): d c b a c b j i d      ) , ( c b a c b j i d     ) , ( p d b c a sum d c d c b a b a sum     0 1 0 1 Object i Object j
  • 40.
    40 Dissimilarity between BinaryVariables • Example – gender is a symmetric attribute – the remaining attributes are asymmetric binary – let the values Y and P be set to 1, and the value N be set to 0 Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4 Jack M Y N P N N N Mary F Y N P N P N Jim M Y P N N N N 75 . 0 2 1 1 2 1 ) , ( 67 . 0 1 1 1 1 1 ) , ( 33 . 0 1 0 2 1 0 ) , (                m ary jim d jim jack d m ary jack d
  • 41.
    41 Nominal Variables • Ageneralization of the binary variable in that it can take more than 2 states, e.g., red, yellow, blue, green • Method 1: Simple matching – m: # of matches, p: total # of variables • Method 2: use a large number of binary variables – creating a new binary variable for each of the M nominal states p m p j i d   ) , (
  • 42.
  • 43.
  • 44.
  • 45.
  • 46.
  • 47.
    Example: Dissimilarity betweenordinal variables 47
  • 48.
    48 Ratio-Scaled Variables • Ratio-scaledvariable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as AeBt or Ae-Bt • Methods: – treat them like interval-scaled variables—not a good choice! (why?—the scale can be distorted) – apply logarithmic transformation yif = log(xif) – treat them as continuous ordinal data treat their rank as interval- scaled
  • 49.
    49 Variables of MixedTypes • A database may contain all the six types of variables – symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio • One may use a weighted formula to combine their effects – f is binary or nominal: dij (f) = 0 if xif = xjf , or dij (f) = 1 otherwise – f is interval-based: use the normalized distance – f is ordinal or ratio-scaled • compute ranks rif and • and treat zif as interval-scaled ) ( 1 ) ( ) ( 1 ) , ( f ij p f f ij f ij p f d j i d        1 1    f if M r zif
  • 50.
  • 51.
    Example: Dissimilarity betweenvariables of mixed type 51
  • 52.
    52 Example: Dissimilarity betweenvariables of mixed type
  • 53.
    CS590D: Data Mining Prof.Chris Clifton February 23, 2006 Clustering
  • 54.
    57 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 55.
  • 56.
    59 Partitioning Algorithms: Basic Concept •Partitioning method: Construct a partition of a database D of n objects into a set of k clusters • Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion – Global optimal: exhaustively enumerate all partitions – Heuristic methods: k-means and k-medoids algorithms – k-means (MacQueen’67): Each cluster is represented by the center of the cluster – k-medoids or PAM (Partition around medoids) (Kaufman & Rousseeuw’87): Each cluster is represented by one of the objects in the cluster
  • 57.
    60 The K-Means Clustering Method •Given k, the k-means algorithm is implemented in four steps: – Partition objects into k nonempty subsets – Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster) – Assign each object to the cluster with the nearest seed point – Go back to Step 2, stop when no more new assignment
  • 58.
    K-Means • Step 0:Start with a random partition into K clusters • Step 1: Generate a new partition by assigning each pattern to its closest cluster center • Step 2: Compute new cluster centers as the centroids of the clusters. • Step 3: Steps 1 and 2 are repeated until there is no change in the membership (also cluster centers remain the same)
  • 59.
  • 60.
    K-Means – Howmany K’s ?
  • 61.
    K-Means – Howmany K’s ?
  • 62.
    Locating the ‘knee’ Theknee of a curve is defined as the point of maximum curvature.
  • 63.
    66 The K-Means Clustering Method 0 1 2 3 4 5 6 7 8 9 10 01 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 K=2 Arbitrarily choose K object as initial cluster center Assign each objects to most similar center Update the cluster means Update the cluster means reassign reassign
  • 64.
    67 Comments on theK-Means Method • Strength: Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n. • Comparing: PAM: O(k(n-k)2 ), CLARA: O(ks2 + k(n-k)) • Comment: Often terminates at a local optimum. The global optimum may be found using techniques such as: deterministic annealing and genetic algorithms • Weakness – Applicable only when mean is defined, then what about categorical data? – Need to specify k, the number of clusters, in advance – Unable to handle noisy data and outliers – Not suitable to discover clusters with non-convex shapes
  • 65.
    68 Variations of theK-Means Method • A few variants of the k-means which differ in – Selection of the initial k means – Dissimilarity calculations – Strategies to calculate cluster means • Handling categorical data: k-modes (Huang’98) – Replacing means of clusters with modes – Using new dissimilarity measures to deal with categorical objects – Using a frequency-based method to update modes of clusters – A mixture of categorical and numerical data: k-prototype method
  • 66.
    69 What is theproblem of k- Means Method? • The k-means algorithm is sensitive to outliers ! – Since an object with an extremely large value may substantially distort the distribution of the data. • K-Medoids: Instead of taking the mean value of the object in a cluster as a reference point, medoids can be used, which is the most centrally located object in a cluster. 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10
  • 67.
    70 Importance of ChoosingInitial Centroids -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 3 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 4 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 5 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 6
  • 68.
    71 Solutions to InitialCentroids Problem • Multiple runs – Helps, but probability is not on your side • Sample and use hierarchical clustering to determine initial centroids • Select more than k initial centroids and then select among these initial centroids – Select most widely separated • Postprocessing • Bisecting K-means – Not as susceptible to initialization issues
  • 69.
    72 Limitations of K-means: DifferingDensity Original Points K-means (3 Clusters)
  • 70.
    73 Limitations of K-means:Non- globular Shapes Original Points K-means (2 Clusters)
  • 71.
    74 The K-Medoids Clustering Method •Find representative objects, called medoids, in clusters • PAM (Partitioning Around Medoids, 1987) – starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering – PAM works effectively for small data sets, but does not scale well for large data sets • CLARA (Kaufmann & Rousseeuw, 1990) • CLARANS (Ng & Han, 1994): Randomized sampling • Focusing + spatial data structure (Ester et al., 1995)
  • 72.
    75 Typical k-medoids algorithm (PAM) 0 1 2 3 4 5 6 7 8 9 10 01 2 3 4 5 6 7 8 9 10 Total Cost = 20 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 K=2 Arbitrary choose k object as initial medoids 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Assign each remainin g object to nearest medoids Randomly select a nonmedoid object,Oramdom Compute total cost of swapping 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 Total Cost = 26 Swapping O and Oramdom If quality is improved. Do loop Until no change 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10
  • 73.
    76 PAM (Partitioning Around Medoids)(1987) • PAM (Kaufman and Rousseeuw, 1987), built in Splus • Use real object to represent the cluster – Select k representative objects arbitrarily – For each pair of non-selected object h and selected object i, calculate the total swapping cost TCih – For each pair of i and h, • If TCih < 0, i is replaced by h • Then assign each non-selected object to the most similar representative object – repeat steps 2-3 until there is no change
  • 74.
    PAM Clustering: Totalswapping cost TCih=jCjih 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 j i h t Cjih = 0 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 t i h j Cjih = d(j, h) - d(j, i) 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 h i t j Cjih = d(j, t) - d(j, i) 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 t i h j Cjih = d(j, h) - d(j, t)
  • 75.
    78 What is theproblem with PAM? • Pam is more robust than k-means in the presence of noise and outliers because a medoid is less influenced by outliers or other extreme values than a mean • Pam works efficiently for small data sets but does not scale well for large data sets. – O(k(n-k)2 ) for each iteration where n is # of data,k is # of clusters Sampling based method, CLARA(Clustering LARge Applications)
  • 76.
    79 CLARA (Clustering Large Applications)(1990) • CLARA (Kaufmann and Rousseeuw in 1990) – Built in statistical analysis packages, such as S+ • It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output • Strength: deals with larger data sets than PAM • Weakness: – Efficiency depends on the sample size – A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased
  • 77.
    80 CLARANS (“Randomized” CLARA) (1994) •CLARANS (A Clustering Algorithm based on Randomized Search) (Ng and Han’94) • CLARANS draws sample of neighbors dynamically • The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids • If the local optimum is found, CLARANS starts with new randomly selected node in search for a new local optimum • It is more efficient and scalable than both PAM and CLARA • Focusing techniques and spatial access structures may further improve its performance (Ester et al.’95)
  • 78.
    81 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 79.
    Hierarchical Clustering Agglomerative clusteringtreats each data point as a singleton cluster, and then successively merges clusters until all points have been merged into a single remaining cluster. Divisive clustering works the other way around.
  • 80.
    83 Hierarchical Clustering • Usedistance matrix as clustering criteria. This method does not require the number of clusters k as an input, but needs a termination condition Step 0 Step 1 Step 2 Step 3 Step 4 b d c e a a b d e c d e a b c d e Step 4 Step 3 Step 2 Step 1 Step 0 agglomerative (AGNES) divisive (DIANA)
  • 81.
    84 AGNES (Agglomerative Nesting) • Introducedin Kaufmann and Rousseeuw (1990) • Implemented in statistical analysis packages, e.g., Splus • Use the Single-Link method and the dissimilarity matrix. • Merge nodes that have the least dissimilarity • Go on in a non-descending fashion • Eventually all nodes belong to the same cluster 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10
  • 82.
    85 Agglomerative Clustering Algorithm • Morepopular hierarchical clustering technique • Basic algorithm is straightforward 1. Compute the proximity matrix 2. Let each data point be a cluster 3. Repeat 4. Merge the two closest clusters 5. Update the proximity matrix 6. Until only a single cluster remains • Key operation is the computation of the proximity of two clusters – Different approaches to defining the distance between clusters distinguish the different algorithms
  • 83.
    86 Starting Situation • Startwith clusters of individual points and a proximity matrix p1 p3 p5 p4 p2 p1 p2 p3 p4 p5 . . . . . . Proximity Matrix ... p1 p2 p3 p4 p9 p10 p11 p12
  • 84.
    87 Intermediate Situation • Aftersome merging steps, we have some clusters C1 C4 C2 C5 C3 C2 C1 C1 C3 C5 C4 C2 C3 C4 C5 Proximity Matrix ... p1 p2 p3 p4 p9 p10 p11 p12
  • 85.
    88 Intermediate Situation • Wewant to merge the two closest clusters (C2 and C5) and update the proximity matrix. C1 C4 C2 C5 C3 C2 C1 C1 C3 C5 C4 C2 C3 C4 C5 Proximity Matrix ... p1 p2 p3 p4 p9 p10 p11 p12
  • 86.
    89 After Merging • Thequestion is “How do we update the proximity matrix?” C1 C4 C2 U C5 C3 ? ? ? ? ? ? ? C2 U C5 C1 C1 C3 C4 C2 U C5 C3 C4 Proximity Matrix ... p1 p2 p3 p4 p9 p10 p11 p12
  • 87.
    90 How to DefineInter-Cluster Similarity p1 p3 p5 p4 p2 p1 p2 p3 p4 p5 . . . . . . Similarity? • MIN • MAX • Group Average • Distance Between Centroids • Other methods driven by an objective function – Ward’s Method uses squared error Proximity Matrix
  • 88.
    91 How to DefineInter-Cluster Similarity p1 p3 p5 p4 p2 p1 p2 p3 p4 p5 . . . . . . Proximity Matrix • MIN • MAX • Group Average • Distance Between Centroids • Other methods driven by an objective function – Ward’s Method uses squared error
  • 89.
    92 How to DefineInter-Cluster Similarity p1 p3 p5 p4 p2 p1 p2 p3 p4 p5 . . . . . . Proximity Matrix • MIN • MAX • Group Average • Distance Between Centroids • Other methods driven by an objective function – Ward’s Method uses squared error
  • 90.
    93 How to DefineInter-Cluster Similarity p1 p3 p5 p4 p2 p1 p2 p3 p4 p5 . . . . . . Proximity Matrix • MIN • MAX • Group Average • Distance Between Centroids • Other methods driven by an objective function – Ward’s Method uses squared error
  • 91.
    94 How to DefineInter-Cluster Similarity p1 p3 p5 p4 p2 p1 p2 p3 p4 p5 . . . . . . Proximity Matrix • MIN • MAX • Group Average • Distance Between Centroids • Other methods driven by an objective function – Ward’s Method uses squared error  
  • 92.
    108 Hierarchical Clustering: Timeand Space requirements • O(N2) space since it uses the proximity matrix. – N is the number of points. • O(N3) time in many cases – There are N steps and at each step the size, N2, proximity matrix must be updated and searched – Complexity can be reduced to O(N2 log(N) ) time for some approaches
  • 93.
    109 Hierarchical Clustering: Problems andLimitations • Once a decision is made to combine two clusters, it cannot be undone • No objective function is directly minimized • Different schemes have problems with one or more of the following: – Sensitivity to noise and outliers – Difficulty handling different sized clusters and convex shapes – Breaking large clusters
  • 94.
    110 A Dendrogram ShowsHow the Clusters are Merged Hierarchically • Decompose data objects into a several levels of nested partitioning (tree of clusters), called a dendrogram. • A clustering of the data objects is obtained by cutting the dendrogram at the desired level, then each connected component forms a cluster.
  • 95.
    111 DIANA (Divisive Analysis) •Introduced in Kaufmann and Rousseeuw (1990) • Implemented in statistical analysis packages, e.g., Splus • Inverse order of AGNES • Eventually each node forms a cluster on its own 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10
  • 96.
    114 More on Hierarchical ClusteringMethods • Major weakness of agglomerative clustering methods – do not scale well: time complexity of at least O(n2), where n is the number of total objects – can never undo what was done previously • Integration of hierarchical with distance-based clustering – BIRCH (1996): uses CF-tree and incrementally adjusts the quality of sub-clusters – CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction – CHAMELEON (1999): hierarchical clustering using dynamic modeling
  • 97.
    115 BIRCH (1996) • Birch:Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD’96) • Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering – Phase 1: scan DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve the inherent clustering structure of the data) – Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree • Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans • Weakness: handles only numeric data, and sensitive to the order of the data record.
  • 98.
    116 Clustering Feature: CF= (N, LS, SS) N: Number of data points LS: N i=1=Xi SS: N i=1=Xi 2 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 CF = (5, (16,30),(54,190)) (3,4) (2,6) (4,5) (4,7) (3,8) Clustering Feature Vector
  • 99.
    117 CF-Tree in BIRCH •Clustering feature: – summary of the statistics for a given subcluster: the 0-th, 1st and 2nd moments of the subcluster from the statistical point of view. – registers crucial measurements for computing cluster and utilizes storage efficiently A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering – A nonleaf node in a tree has descendants or “children” – The nonleaf nodes store sums of the CFs of their children • A CF tree has two parameters – Branching factor: specify the maximum number of children. – threshold: max diameter of sub-clusters stored at the leaf nodes
  • 100.
    CF Tree CF1 child1 CF3 child3 CF2 child2 CF6 child6 CF1 child1 CF3 child3 CF2 child2 CF5 child5 CF1 CF2CF6 prev next CF1 CF2 CF4 prev next B = 7 L = 6 Root Non-leaf node Leaf node Leaf node
  • 101.
    119 CURE (Clustering Using REpresentatives) • CURE: proposed by Guha, Rastogi & Shim, 1998 – Stops the creation of a cluster hierarchy if a level consists of k clusters – Uses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrary shaped clusters and avoids single-link effect
  • 102.
    120 Drawbacks of Distance- BasedMethod • Drawbacks of square-error based clustering method – Consider only one point as representative of a cluster – Good only for convex shaped, similar size and density, and if k can be reasonably estimated
  • 103.
    121 Cure: The Algorithm •Draw random sample s. • Partition sample to p partitions with size s/p • Partially cluster partitions into s/pq clusters • Eliminate outliers – By random sampling – If a cluster grows too slow, eliminate it. • Cluster partial clusters. • Label data in disk
  • 104.
    122 Data Partitioning and Clustering –s = 50 – p = 2 – s/p = 25 x x x y y y y x y x s/pq = 5
  • 105.
    123 Cure: Shrinking Representative Points •Shrink the multiple representative points towards the gravity center by a fraction of . • Multiple representatives capture the shape of the cluster x y x y
  • 106.
    124 Clustering Categorical Data: ROCK •ROCK: Robust Clustering using linKs, by S. Guha, R. Rastogi, K. Shim (ICDE’99). – Use links to measure similarity/proximity – Not distance based – Computational complexity: • Basic ideas: – Similarity function and neighbors: Let T1 = {1,2,3}, T2={3,4,5} O n nm m n n m a ( log ) 2 2   Sim T T T T T T ( , ) 1 2 1 2 1 2    Sim T T ( , ) { } { , , , , } . 1 2 3 1 2 3 4 5 1 5 0 2   
  • 107.
    126 CHAMELEON (Hierarchical clustering usingdynamic modeling) • CHAMELEON: by G. Karypis, E.H. Han, and V. Kumar’99 • Measures the similarity based on a dynamic model – Two clusters are merged only if the interconnectivity and closeness (proximity) between two clusters are high relative to the internal interconnectivity of the clusters and closeness of items within the clusters – Cure ignores information about interconnectivity of the objects, Rock ignores information about the closeness of two clusters • A two-phase algorithm 1. Use a graph partitioning algorithm: cluster objects into a large number of relatively small sub-clusters 2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters
  • 108.
    127 Overall Framework of CHAMELEON Construct SparseGraph Partition the Graph Merge Partition Final Clusters Data Set
  • 109.
    129 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 110.
    130 Density-Based Clustering Methods • Clusteringbased on density (local cluster criterion), such as density-connected points • Major features: – Discover clusters of arbitrary shape – Handle noise – One scan – Need density parameters as termination condition • Several interesting studies: – DBSCAN: Ester, et al. (KDD’96) – OPTICS: Ankerst, et al (SIGMOD’99). – DENCLUE: Hinneburg & D. Keim (KDD’98) – CLIQUE: Agrawal, et al. (SIGMOD’98)
  • 111.
    131 Density Concepts • Coreobject (CO)–object with at least ‘M’ objects within a radius ‘E-neighborhood’ • Directly density reachable (DDR)–x is CO, y is in x’s ‘E- neighborhood’ • Density reachable–there exists a chain of DDR objects from x to y • Density based cluster–density connected objects maximum w.r.t. reachability
  • 112.
    132 Density-Based Clustering: Background • Twoparameters: – Eps: Maximum radius of the neighbourhood – MinPts: Minimum number of points in an Eps-neighbourhood of that point • NEps(p): {q belongs to D | dist(p,q) <= Eps} • Directly density-reachable: A point p is directly density- reachable from a point q wrt. Eps, MinPts if – 1) p belongs to NEps(q) – 2) core point condition: |NEps (q)| >= MinPts p q MinPts = 5 Eps = 1 cm
  • 113.
    133 Density-Based Clustering: Background (II) •Density-reachable: – A point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points p1, …, pn, p1 = q, pn = p such that pi+1 is directly density- reachable from pi • Density-connected – A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both, p and q are density-reachable from o wrt. Eps and MinPts. p q p1 p q o
  • 114.
    134 DBSCAN: Density BasedSpatial Clustering of Applications with Noise • Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points • Discovers clusters of arbitrary shape in spatial databases with noise Core Border Outlier Eps = 1cm MinPts = 5
  • 115.
    136 DBSCAN • DBSCAN isa density-based algorithm. – Density = number of points within a specified radius (Eps) – A point is a core point if it has more than a specified number of points (MinPts) within Eps • These are points that are at the interior of a cluster – A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point – A noise point is any point that is not a core point or a border point.
  • 116.
    137 DBSCAN: Core, Border,and Noise Points
  • 117.
    138 DBSCAN Algorithm • Eliminatenoise points • Perform clustering on the remaining points
  • 118.
    139 DBSCAN: Core, Borderand Noise Points Original Points Point types: core, border and noise Eps = 10, MinPts = 4
  • 119.
    140 When DBSCAN WorksWell Original Points Clusters • Resistant to Noise • Can handle clusters of different shapes and sizes
  • 120.
    141 When DBSCAN DoesNOT Work Well Original Points (MinPts=4, Eps=9.75). (MinPts=4, Eps=9.92) • Varying densities • High-dimensional data
  • 121.
    142 DBSCAN: Determining EPS andMinPts • Idea is that for points in a cluster, their kth nearest neighbors are at roughly the same distance • Noise points have the kth nearest neighbor at farther distance • So, plot sorted distance of every point to its kth nearest neighbor
  • 122.
    143 OPTICS: A Cluster-Ordering Method(1999) • OPTICS: Ordering Points To Identify the Clustering Structure – Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99) – Produces a special order of the database wrt its density-based clustering structure – This cluster-ordering contains info equiv to the density-based clusterings corresponding to a broad range of parameter settings – Good for both automatic and interactive cluster analysis, including finding intrinsic clustering structure – Can be represented graphically or using visualization techniques
  • 123.
    153 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 124.
    154 Cluster Validity • Forsupervised classification we have a variety of measures to evaluate how good our model is – Accuracy, precision, recall • For cluster analysis, the analogous question is how to evaluate the “goodness” of the resulting clusters? • But “clusters are in the eye of the beholder”! • Then why do we want to evaluate them? – To avoid finding patterns in noise – To compare clustering algorithms – To compare two sets of clusters – To compare two clusters
  • 125.
    155 Clusters found inRandom Data 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y Random Points 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y K-means 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y DBSCAN 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y Complete Link
  • 126.
    156 Different Aspects ofCluster Validation 1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data. 2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels. 3. Evaluating how well the results of a cluster analysis fit the data without reference to external information. - Use only the data 4. Comparing the results of two different sets of cluster analyses to determine which is better. 5. Determining the ‘correct’ number of clusters. For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.
  • 127.
    157 Measures of ClusterValidity • Numerical measures that are applied to judge various aspects of cluster validity, are classified into the following three types. – External Index: Used to measure the extent to which cluster labels match externally supplied class labels. • Entropy – Internal Index: Used to measure the goodness of a clustering structure without respect to external information. • Sum of Squared Error (SSE) – Relative Index: Used to compare two different clusterings or clusters. • Often an external or internal index is used for this function, e.g., SSE or entropy • Sometimes these are referred to as criteria instead of indices – However, sometimes criterion is the general strategy and index is the numerical measure that implements the criterion.
  • 128.
    158 Measuring Cluster ValidityVia Correlation • Two matrices – Proximity Matrix – “Incidence” Matrix • One row and one column for each data point • An entry is 1 if the associated pair of points belong to the same cluster • An entry is 0 if the associated pair of points belongs to different clusters • Compute the correlation between the two matrices – Since the matrices are symmetric, only the correlation between n(n-1) / 2 entries needs to be calculated. • High correlation indicates that points that belong to the same cluster are close to each other. • Not a good measure for some density or contiguity based clusters.
  • 129.
    159 Measuring Cluster ValidityVia Correlation • Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets. 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y Corr = -0.9235 Corr = -0.5810
  • 130.
    160 Using Similarity Matrixfor Cluster Validation • Order the similarity matrix with respect to cluster labels and inspect visually. 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y Points Points 20 40 60 80 100 10 20 30 40 50 60 70 80 90 100 Similarity 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
  • 131.
    161 Using Similarity Matrixfor Cluster Validation • Clusters in random data are not so crisp Points Points 20 40 60 80 100 10 20 30 40 50 60 70 80 90 100 Similarity 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 DBSCAN 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y
  • 132.
    162 Points Points 20 40 6080 100 10 20 30 40 50 60 70 80 90 100 Similarity 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Using Similarity Matrix for Cluster Validation • Clusters in random data are not so crisp K-means 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y
  • 133.
    164 Using Similarity Matrixfor Cluster Validation 1 2 3 5 6 4 7 DBSCAN 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 500 1000 1500 2000 2500 3000 500 1000 1500 2000 2500 3000
  • 134.
    165 Internal Measures: SSE •Clusters in more complicated figures aren’t well separated • Internal Index: Used to measure the goodness of a clustering structure without respect to external information – SSE • SSE is good for comparing two clusterings or two clusters (average SSE). • Can also be used to estimate the number of clusters 2 5 10 15 20 25 30 0 1 2 3 4 5 6 7 8 9 10 K SSE 5 10 15 -6 -4 -2 0 2 4 6
  • 135.
    166 Internal Measures: SSE •SSE curve for a more complicated data set 1 2 3 5 6 4 7 SSE of clusters found using K-means
  • 136.
    167 Framework for ClusterValidity • Need a framework to interpret any measure. – For example, if our measure of evaluation has the value, 10, is that good, fair, or poor? • Statistics provide a framework for cluster validity – The more “atypical” a clustering result is, the more likely it represents valid structure in the data – Can compare the values of an index that result from random data or clusterings to those of a clustering result. • If the value of the index is unlikely, then the cluster results are valid – These approaches are more complicated and harder to understand. • For comparing the results of two different sets of cluster analyses, a framework is less necessary. – However, there is the question of whether the difference between two index values is significant
  • 137.
    168 Statistical Framework forSSE • Example – Compare SSE of 0.005 against three clusters in random data – Histogram shows SSE of three clusters in 500 sets of random data points of size 100 distributed over the range 0.2 – 0.8 for x and y values 0.016 0.018 0.02 0.022 0.024 0.026 0.028 0.03 0.032 0.034 0 5 10 15 20 25 30 35 40 45 50 SSE Count 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y
  • 138.
    169 Statistical Framework for Correlation •Correlation of incidence and proximity matrices for the K-means clusterings of the following two data sets. 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y 0 0.2 0.4 0.6 0.8 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 x y Corr = -0.9235 Corr = -0.5810
  • 139.
    170 Internal Measures: Cohesionand Separation • Cluster Cohesion: Measures how closely related are objects in a cluster – Example: SSE • Cluster Separation: Measure how distinct or well-separated a cluster is from other clusters • Example: Squared Error – Cohesion is measured by the within cluster sum of squares (SSE) – Separation is measured by the between cluster sum of squares – Where |Ci| is the size of cluster i      i C x i i m x WSS 2 ) (    i i i m m C BSS 2 ) (
  • 140.
    171 Internal Measures: Cohesion andSeparation • Example: SSE – BSS + WSS = constant 1 2 3 4 5    m1 m2 m 10 9 1 9 ) 3 5 . 4 ( 2 ) 5 . 1 3 ( 2 1 ) 5 . 4 5 ( ) 5 . 4 4 ( ) 5 . 1 2 ( ) 5 . 1 1 ( 2 2 2 2 2 2                    Total BSS WSS K=2 clusters: 10 0 10 0 ) 3 3 ( 4 10 ) 3 5 ( ) 3 4 ( ) 3 2 ( ) 3 1 ( 2 2 2 2 2                 Total BSS WSS K=1 cluster:
  • 141.
    172 Internal Measures: Cohesionand Separation • A proximity graph based approach can also be used for cohesion and separation. – Cluster cohesion is the sum of the weight of all links within a cluster. – Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster. cohesion separation
  • 142.
    173 Internal Measures: Silhouette Coefficient •Silhouette Coefficient combine ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings • For an individual point, i – Calculate a = average distance of i to the points in its cluster – Calculate b = min (average distance of i to points in another cluster) – The silhouette coefficient for a point is then given by s = 1 – a/b if a < b, (or s = b/a - 1 if a  b, not the usual case) – Typically between 0 and 1. – The closer to 1 the better. • Can calculate the Average Silhouette width for a cluster or a clustering a b
  • 143.
    174 External Measures ofCluster Validity: Entropy and Purity
  • 144.
    175 Final Comment onCluster Validity “The validation of clustering structures is the most difficult and frustrating part of cluster analysis. Without a strong effort in this direction, cluster analysis will remain a black art accessible only to those true believers who have experience and great courage.” Algorithms for Clustering Data, Jain and Dubes
  • 145.
    CS590D: Data Mining Prof.Chris Clifton March 2, 2006 Clustering
  • 146.
    177 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 147.
    178 Grid-Based Clustering Method • Usingmulti-resolution grid data structure • Several interesting methods – STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997) – WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB’98) • A multi-resolution clustering approach using wavelet method – CLIQUE: Agrawal, et al. (SIGMOD’98)
  • 148.
    179 STING: A Statistical InformationGrid Approach • Wang, Yang and Muntz (VLDB’97) • The spatial area area is divided into rectangular cells • There are several levels of cells corresponding to different levels of resolution
  • 149.
    180 STING: A Statistical InformationGrid Approach (2) – Each cell at a high level is partitioned into a number of smaller cells in the next lower level – Statistical info of each cell is calculated and stored beforehand and is used to answer queries – Parameters of higher level cells can be easily calculated from parameters of lower level cell • count, mean, s, min, max • type of distribution—normal, uniform, etc. – Use a top-down approach to answer spatial data queries – Start from a pre-selected layer—typically with a small number of cells – For each cell in the current level compute the confidence interval
  • 150.
    181 STING: A Statistical InformationGrid Approach (3) – Remove the irrelevant cells from further consideration – When finish examining the current layer, proceed to the next lower level – Repeat this process until the bottom layer is reached – Advantages: • Query-independent, easy to parallelize, incremental update • O(K), where K is the number of grid cells at the lowest level – Disadvantages: • All the cluster boundaries are either horizontal or vertical, and no diagonal boundary is detected
  • 151.
    182 WaveCluster (1998) • Sheikholeslami,Chatterjee, and Zhang (VLDB’98) • A multi-resolution clustering approach which applies wavelet transform to the feature space – A wavelet transform is a signal processing technique that decomposes a signal into different frequency sub-band. • Both grid-based and density-based • Input parameters: – # of grid cells for each dimension – the wavelet, and the # of applications of wavelet transform.
  • 152.
  • 153.
    184 WaveCluster (1998) • Howto apply wavelet transform to find clusters – Summaries the data by imposing a multidimensional grid structure onto data space – These multidimensional spatial data objects are represented in a n-dimensional feature space – Apply wavelet transform on feature space to find the dense regions in the feature space – Apply wavelet transform multiple times which result in clusters at different scales from fine to coarse
  • 154.
    185 Wavelet Transform • Decomposesa signal into different frequency subbands. (can be applied to n- dimensional signals) • Data are transformed to preserve relative distance between objects at different levels of resolution. • Allows natural clusters to become more distinguishable
  • 155.
  • 156.
  • 157.
  • 158.
    189 WaveCluster (1998) • Whyis wavelet transformation useful for clustering – Unsupervised clustering It uses hat-shape filters to emphasize region where points cluster, but simultaneously to suppress weaker information in their boundary – Effective removal of outliers – Multi-resolution – Cost efficiency • Major features: – Complexity O(N) – Detect arbitrary shaped clusters at different scales – Not sensitive to noise, not sensitive to input order – Only applicable to low dimensional data
  • 159.
    190 CLIQUE (Clustering InQUEst) • Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98). • Automatically identifying subspaces of a high dimensional data space that allow better clustering than original space • CLIQUE can be considered as both density-based and grid-based – It partitions each dimension into the same number of equal length interval – It partitions an m-dimensional data space into non-overlapping rectangular units – A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter – A cluster is a maximal set of connected dense units within a subspace
  • 160.
    191 CLIQUE: The MajorSteps • Partition the data space and find the number of points that lie inside each cell of the partition. • Identify the subspaces that contain clusters using the Apriori principle • Identify clusters: – Determine dense units in all subspaces of interests – Determine connected dense units in all subspaces of interests. • Generate minimal description for the clusters – Determine maximal regions that cover a cluster of connected dense units for each cluster – Determination of minimal cover for each cluster
  • 161.
    Salary (10,000) 20 30 4050 60 age 5 4 3 1 2 6 7 0 20 30 40 50 60 age 5 4 3 1 2 6 7 0 Vacation (week) age Vacation 30 50  = 3
  • 162.
    193 Strength and Weaknessof CLIQUE • Strength – It automatically finds subspaces of the highest dimensionality such that high density clusters exist in those subspaces – It is insensitive to the order of records in input and does not presume some canonical data distribution – It scales linearly with the size of input and has good scalability as the number of dimensions in the data increases • Weakness – The accuracy of the clustering result may be degraded at the expense of simplicity of the method
  • 163.
    194 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 164.
    195 Model-Based Clustering Methods • Attemptto optimize the fit between the data and some mathematical model • Statistical and AI approach – Conceptual clustering • A form of clustering in machine learning • Produces a classification scheme for a set of unlabeled objects • Finds characteristic description for each concept (class) – COBWEB (Fisher’87) • A popular a simple method of incremental conceptual learning • Creates a hierarchical clustering in the form of a classification tree • Each node refers to a concept and contains a probabilistic description of that concept
  • 165.
  • 166.
    197 More on Statistical-Based Clustering •Limitations of COBWEB – The assumption that the attributes are independent of each other is often too strong because correlation may exist – Not suitable for clustering large database data – skewed tree and expensive probability distributions • CLASSIT – an extension of COBWEB for incremental clustering of continuous data – suffers similar problems as COBWEB • AutoClass (Cheeseman and Stutz, 1996) – Uses Bayesian statistical analysis to estimate the number of clusters – Popular in industry
  • 167.
    198 Other Model-Based Clustering Methods •Neural network approaches – Represent each cluster as an exemplar, acting as a “prototype” of the cluster – New objects are distributed to the cluster whose exemplar is the most similar according to some dostance measure • Competitive learning – Involves a hierarchical architecture of several units (neurons) – Neurons compete in a “winner-takes-all” fashion for the object currently being presented
  • 168.
  • 169.
    200 Self-organizing feature maps (SOMs) •Clustering is also performed by having several units competing for the current object • The unit whose weight vector is closest to the current object wins • The winner and its neighbors learn by having their weights adjusted • SOMs are believed to resemble processing that can occur in the brain • Useful for visualizing high-dimensional data in 2- or 3-D space
  • 170.
    201 Cluster Analysis • Whatis Cluster Analysis? • Types of Data in Cluster Analysis • Partitioning Methods • Hierarchical Methods • Density-Based Methods • Cluster Evaluation • Grid-Based Methods • Model-Based Clustering Methods • Outlier Analysis • Summary
  • 171.
    202 What Is OutlierDiscovery? • What are outliers? – The set of objects are considerably dissimilar from the remainder of the data – Example: Sports: Michael Jordon, Wayne Gretzky, ... • Problem – Find top n outlier points • Applications: – Credit card fraud detection – Telecom fraud detection – Customer segmentation – Medical analysis
  • 172.
    Outlier Discovery: Statistical Approaches Assume amodel underlying distribution that generates data set (e.g. normal distribution) • Use discordancy tests depending on – data distribution – distribution parameter (e.g., mean, variance) – number of expected outliers • Drawbacks – most tests are for single attribute – In many cases, data distribution may not be known
  • 173.
    CS590D: Data Mining Prof.Chris Clifton March 4, 2006 Clustering
  • 174.
    205 Outlier Discovery: Distance- BasedApproach • Introduced to counter the main limitations imposed by statistical methods – We need multi-dimensional analysis without knowing data distribution. • Distance-based outlier: A DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lies at a distance greater than D from O • Algorithms for mining distance-based outliers – Index-based algorithm – Nested-loop algorithm – Cell-based algorithm
  • 175.
    206 Outlier Discovery: Deviation-Based Approach •Identifies outliers by examining the main characteristics of objects in a group • Objects that “deviate” from this description are considered outliers • sequential exception technique – simulates the way in which humans can distinguish unusual objects from among a series of supposedly like objects • OLAP data cube technique – uses data cubes to identify regions of anomalies in large multidimensional data
207
Cluster Analysis
• What is Cluster Analysis?
• Types of Data in Cluster Analysis
• Partitioning Methods
• Hierarchical Methods
• Density-Based Methods
• Cluster Evaluation
• Grid-Based Methods
• Model-Based Clustering Methods
• Outlier Analysis
• Summary
208
Problems and Challenges
• Considerable progress has been made in scalable clustering methods
– Partitioning: k-means, k-medoids, CLARANS
– Hierarchical: BIRCH, CURE
– Density-based: DBSCAN, CLIQUE, OPTICS
– Grid-based: STING, WaveCluster
– Model-based: AutoClass, DENCLUE, COBWEB
• Current clustering techniques do not address all the requirements adequately
• Constraint-based clustering analysis: constraints exist in the data space (e.g., bridges and highways) or in user queries
209
Constraint-Based Clustering Analysis
• Clustering analysis: fewer parameters but more user-desired constraints, e.g., an ATM allocation problem
210
Clustering With Obstacle Objects
[Figure: two clusterings of the same spatial data, one taking obstacles into account and one not taking obstacles into account]
211
Summary
• Cluster analysis groups objects based on their similarity and has wide applications
• Measures of similarity can be computed for various types of data
• Clustering algorithms can be categorized into partitioning, hierarchical, density-based, grid-based, and model-based methods
• Outlier detection and analysis are very useful for fraud detection, etc., and can be performed by statistical, distance-based, or deviation-based approaches
• There are still many open research issues in cluster analysis, such as constraint-based clustering
212
References (1)
• R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.
• M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
• M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.
• P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.
• M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.
• M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.
• D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.
• D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamical systems. VLDB'98.
• S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.
• A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
213
References (2)
• L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990.
• E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
• G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons, 1988.
• P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.
• R. Ng and J. Han. Efficient and effective clustering methods for spatial data mining. VLDB'94.
• E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
• G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.
• W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial data mining. VLDB'97.
• T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96.