CLUSTER ANALYSIS
CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
GENERAL APPLICATIONS OF CLUSTERING
 Pattern Recognition
 Spatial Data Analysis
 Create thematic maps in GIS by clustering feature spaces
 Detect spatial clusters and explain them in spatial data mining
 Image Processing
 Economic Science (especially market research)
 WWW
 Document classification
 Cluster Weblog data to discover groups of similar access
patterns
EXAMPLES OF CLUSTERING APPLICATIONS
 Marketing: Help marketers discover distinct groups in their
customer bases, and then use this knowledge to develop
targeted marketing programs
 Land use: Identification of areas of similar land use in an
earth observation database
 Insurance: Identifying groups of motor insurance policy
holders with a high average claim cost
 City-planning: Identifying groups of houses according to
their house type, value, and geographical location
 Earthquake studies: Observed earthquake epicenters
should be clustered along continental faults
WHAT IS GOOD CLUSTERING?
 A good clustering method will produce high quality
clusters with
 High intra-class similarity
 Low inter-class similarity
 The quality of a clustering result depends on both
the similarity measure used by the method and its
implementation.
 The quality of a clustering method is also
measured by its ability to discover some or all of
the hidden patterns.
REQUIREMENTS OF CLUSTERING IN DATA MINING
 Scalability
 Ability to deal with different types of attributes
 Discovery of clusters with arbitrary shape
 Minimal requirements for domain knowledge to
determine input parameters
 Able to deal with noise and outliers
 Insensitive to order of input records
 High dimensionality
 Incorporation of user-specified constraints
 Interpretability and usability
CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
DATA STRUCTURES
 Data matrix
 (two modes)
 Dissimilarity matrix
 (one mode)
Data matrix (n objects × p variables, two modes):

x_11  …  x_1f  …  x_1p
 …        …        …
x_i1  …  x_if  …  x_ip
 …        …        …
x_n1  …  x_nf  …  x_np

Dissimilarity matrix (n × n, one mode):

0
d(2,1)  0
d(3,1)  d(3,2)  0
 …       …       …
d(n,1)  d(n,2)  …  0
MEASURE THE QUALITY OF CLUSTERING
 Dissimilarity/Similarity metric: similarity is expressed in terms of a distance function, which is typically a metric: d(i, j)
 There is a separate “quality” function that measures the “goodness” of a cluster.
 The definitions of distance functions are usually very different for interval-scaled, Boolean, categorical, ordinal and ratio variables.
 Weights should be associated with different variables based on applications and data semantics.
 It is hard to define “similar enough” or “good enough”; the answer is typically highly subjective.
TYPE OF DATA IN CLUSTERING ANALYSIS
 Interval-scaled variables:
 Binary variables:
 Nominal, ordinal, and ratio variables:
 Variables of mixed types:
INTERVAL-VALUED VARIABLES
 Standardize data
 Calculate the mean absolute deviation:
s_f = (1/n) (|x_1f − m_f| + |x_2f − m_f| + … + |x_nf − m_f|)
where m_f = (1/n) (x_1f + x_2f + … + x_nf)
 Calculate the standardized measurement (z-score):
z_if = (x_if − m_f) / s_f
 Using the mean absolute deviation is more robust than using the standard deviation
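As a quick illustration (a minimal Python sketch, not part of the original slides; the variable values are hypothetical):

```python
def standardize(values):
    """Standardize one interval-scaled variable using the mean absolute deviation."""
    n = len(values)
    m = sum(values) / n                          # mean m_f
    s = sum(abs(x - m) for x in values) / n      # mean absolute deviation s_f
    return [(x - m) / s for x in values]         # z-scores z_if

# Hypothetical measurements for one variable f
print(standardize([2.0, 4.0, 6.0, 8.0]))   # mean 5.0, mean abs. deviation 2.0 -> [-1.5, -0.5, 0.5, 1.5]
```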
SIMILARITY AND DISSIMILARITY BETWEEN OBJECTS
 Distances are normally used to measure the similarity
or dissimilarity between two data objects
 Some popular ones include: Minkowski distance:
where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p-
dimensional data objects, and q is a positive integer
 If q = 1, d is Manhattan distance
Minkowski distance:
d(i, j) = (|x_i1 − x_j1|^q + |x_i2 − x_j2|^q + … + |x_ip − x_jp|^q)^(1/q)
Manhattan distance (q = 1):
d(i, j) = |x_i1 − x_j1| + |x_i2 − x_j2| + … + |x_ip − x_jp|
SIMILARITY AND DISSIMILARITY BETWEEN OBJECTS
(CONT.)
 If q = 2, d is Euclidean distance:
 Properties
 d(i, j) ≥ 0
 d(i, i) = 0
 d(i, j) = d(j, i)
 d(i, j) ≤ d(i, k) + d(k, j)
 Also one can use weighted distance, parametric Pearson
product moment correlation, or other dissimilarity
measures.
d(i, j) = sqrt(|x_i1 − x_j1|² + |x_i2 − x_j2|² + … + |x_ip − x_jp|²)
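A minimal sketch of these distance functions in Python (the two points i and j are illustrative):

```python
def minkowski(x, y, q):
    """Minkowski distance between two p-dimensional points x and y."""
    return sum(abs(a - b) ** q for a, b in zip(x, y)) ** (1.0 / q)

i = (1, 1)
j = (4, 3)
print(minkowski(i, j, 1))   # Manhattan distance: |1-4| + |1-3| = 5
print(minkowski(i, j, 2))   # Euclidean distance: sqrt(9 + 4) ~ 3.61
```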
BINARY VARIABLES
 A binary variable has only two states (0 or 1)
 0 means the variable is absent
 1 means the variable is present
 Ex: the variable smoker (1 indicates that the patient smokes, 0 that the patient does not)
 Treating binary variables as interval-scaled can lead to misleading results
 There may be symmetric and asymmetric binary variables
 To compute the dissimilarity between two binary variables:
 If all binary variables are thought of as having the same weight, we construct a 2-by-2 contingency table.
BINARY VARIABLES (CONTD…)
 A contingency table for binary data
where p is the total number of variables and p = a + b + c + d
              Object j
               1       0       sum
Object i   1   a       b       a + b
           0   c       d       c + d
         sum  a + c   b + d     p
 A binary variable is symmetric if both of its states are equally valuable and carry the same weight; there is no preference as to which outcome should be coded as 0 or 1.
 Ex: gender (male or female)
 Dissimilarity based on symmetric binary variables is called symmetric binary dissimilarity:
d(i, j) = (b + c) / (a + b + c + d)
BINARY VARIABLES (CONTD…)
 A binary variable is asymmetric if its states are not equally important.
 Ex: the outcome of a medical test (positive or negative)
 By convention, we code the most important outcome, which is usually the rarest one, as 1 (e.g. HIV positive) and the other as 0 (e.g. HIV negative)
 Given two asymmetric binary variables, the agreement of two 1s (a positive match) is considered more significant than that of two 0s (a negative match)
 Dissimilarity based on asymmetric binary variables is called asymmetric binary dissimilarity, where the number of negative matches d is considered unimportant and is thus ignored in the computation:
d(i, j) = (b + c) / (a + b + c)
BINARY VARIABLES (CONTD…)
 Complementarily, we can measure the distance between two binary variables based on the notion of similarity instead of dissimilarity
 Jaccard coefficient:
sim(i, j) = a / (a + b + c) = 1 − d(i, j)
DISSIMILARITY BETWEEN BINARY VARIABLES
 Example
 gender is a symmetric attribute
 the remaining attributes are asymmetric binary
 let the values Y and P be set to 1, and the value N be set to 0
Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4
Jack M Y N P N N N
Mary F Y N P N P N
Jim M Y P N N N N
d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33
d(jack, jim) = (1 + 1) / (1 + 1 + 1) = 0.67
d(jim, mary) = (1 + 2) / (1 + 1 + 2) = 0.75
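These three values can be reproduced with a short sketch; the 1/0 encoding of the six asymmetric attributes follows the slide:

```python
def asym_binary_dissim(x, y):
    """Asymmetric binary dissimilarity d = (b + c) / (a + b + c); negative matches are ignored."""
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1 .. Test-4 with Y/P -> 1 and N -> 0
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asym_binary_dissim(jack, mary), 2))  # 0.33
print(round(asym_binary_dissim(jack, jim), 2))   # 0.67
print(round(asym_binary_dissim(jim, mary), 2))   # 0.75
```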
NOMINAL VARIABLES (OR CATEGORICAL)
 A categorical variable (sometimes called a nominal
variable) is one that has two or more categories
 But there is no intrinsic ordering to the categories
 For example, gender is a categorical variable having two
categories (male and female) and there is no intrinsic
ordering to the categories
 Hair colour is also a categorical variable having a number
of categories (blonde, brown, brunette, red, etc.) and again,
there is no agreed way to order these from highest to
lowest.
 A purely categorical variable is one that simply allows you
to assign categories but you cannot clearly order the
variables. If the variable has a clear ordering, then that
variable would be an ordinal variable
 A generalization of the binary variable is that it can
take more than 2 states, e.g., red, yellow, blue,
green
 Let no. of states of a categorical variable be M
 The states can be denoted as 1,2,….,M
 Method : Simple matching
 m: # of matches, p: total # of variables
d(i, j) = (p − m) / p
NOMINAL VARIABLES (OR CATEGORICAL) CONTD…
Object Identifier   Test-1 (categorical)   Test-2 (ordinal)   Test-3 (ratio-scaled)
1                   Code-A                 Excellent          445
2                   Code-B                 Fair               22
3                   Code-C                 Good               164
4                   Code-A                 Excellent          1,210
NOMINAL VARIABLES (OR CATEGORICAL) CONTD…
Example: Dissimilarity between categorical variables
Suppose the variables are the object identifier and test-1.
The dissimilarity matrix is:
0
d(2,1) 0
d(3,1) d(3,2) 0
d(4,1) d(4,2) d(4,3) 0
Since we have one categorical variable, test-1, we set p = 1 in the equation; d(i, j) is 0 if objects i and j match, and 1 if they differ. The resulting matrix is:
0
1 0
1 1 0
0 1 1 0
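A small sketch of the simple-matching computation that reproduces the matrix above (test-1 values taken from the table; p = 1 because only one categorical variable is considered):

```python
def simple_matching(x, y):
    """d(i, j) = (p - m) / p for two categorical objects x and y."""
    p = len(x)
    m = sum(1 for a, b in zip(x, y) if a == b)   # number of matches
    return (p - m) / p

test1 = ["Code-A", "Code-B", "Code-C", "Code-A"]   # objects 1..4
for i in range(len(test1)):
    print([int(simple_matching([test1[i]], [test1[j]])) for j in range(i + 1)])
# [0] / [1, 0] / [1, 1, 0] / [0, 1, 1, 0]
```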
ORDINAL VARIABLES
 An ordinal variable is similar to a categorical variable.
 The difference between the two is that there is a clear ordering of the categories
 For example, suppose you have a variable, economic
status, with three categories (low, medium and high).
 In addition to being able to classify people into these three
categories, you can order the categories as low, medium
and high
 Now consider a variable like educational experience (with
values such as elementary school graduate, high school
graduate, some college and college graduate).
 Even though we can order these from lowest to highest,
the spacing between the values may not be the same
across the levels of the variables.
 Say we assign scores 1, 2, 3 and 4 to these four levels of
educational experience and we compare the difference in
education between categories one and two with the
difference in educational experience between categories
two and three, or the difference between categories three
and four.
 The difference between categories one and two
(elementary and high school) is probably much bigger than
the difference between categories two and three (high
school and some college).
 In this example, we can order the people in level of
educational experience but the size of the difference
between categories is inconsistent (because the spacing
between categories one and two is bigger than categories
two and three).
HOW ARE ORDINAL VARIABLES HANDLED?
 Handled quite similarly to interval-scaled variables when computing the dissimilarity between objects
 Let f be a variable from a set of ordinal variables describing n objects, with M_f ordered states
 The dissimilarity w.r.t. f involves the following steps:
 replace x_if by its rank r_if ∈ {1, …, M_f}
 map the range of each variable onto [0, 1] by replacing the rank of the i-th object on the f-th variable by
z_if = (r_if − 1) / (M_f − 1)
 compute the dissimilarity using methods for interval-scaled variables
ORDINAL VARIABLES (EXAMPLE)
 Example: Dissimilarity between ordinal variables
 Suppose the variables are the object identifier and test-2.
 There are 3 states for test-2, i.e. M_f = 3
 Step 1: Replace each value of test-2 by its rank, i.e. 3, 1, 2, 3
 Step 2: Normalize the ranking by mapping rank 1 to 0, rank 2 to 0.5, and rank 3 to 1.
 Step 3: Use the Euclidean distance to find the dissimilarity matrix
0
1 0
0.5 0.5 0
0 1.0 0.5 0
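The three steps can be scripted directly; the sketch below assumes the ordering Fair < Good < Excellent and reproduces the matrix above:

```python
order = {"Fair": 1, "Good": 2, "Excellent": 3}          # ranks r_if, M_f = 3
test2 = ["Excellent", "Fair", "Good", "Excellent"]       # objects 1..4

Mf = len(order)
z = [(order[v] - 1) / (Mf - 1) for v in test2]           # ranks mapped onto [0, 1] -> [1.0, 0.0, 0.5, 1.0]

# With a single ordinal variable the Euclidean distance is just the absolute difference
for i in range(len(z)):
    print([round(abs(z[i] - z[j]), 1) for j in range(i + 1)])
# [0.0] / [1.0, 0.0] / [0.5, 0.5, 0.0] / [0.0, 1.0, 0.5, 0.0]
```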
RATIO-SCALED VARIABLES
 Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as Ae^(Bt) or Ae^(−Bt)
 Methods:
 treat them like interval-scaled variables: not a good choice! (the scale is likely to be distorted)
 apply a logarithmic transformation: y_if = log(x_if)
 treat them as continuous ordinal data and treat their ranks as interval-scaled.
 Ratio variables are those in which the ratio of two of the numbers has meaning, such as miles per gallon. If car A gets 15 mpg and car B gets 20 mpg, you can take the ratio of the two, 15/20 = 0.75, meaning car A gets 75% of the mileage of car B.
RATIO-SCALED VARIABLES (EXAMPLE)
 Example: Dissimilarity between ratio-scaled variables
 Suppose the variables are the object identifier and test-3.
 Step 1: Apply a logarithmic transformation (the results are 2.65, 1.34, 2.21 and 3.08)
 Step 2: Use the Euclidean distance to find the dissimilarity matrix
0
1.31 0
0.44 0.87 0
0.43 1.74 0.87 0
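A sketch of the log-transform approach for test-3 (a base-10 logarithm is assumed, which matches the values 2.65, 1.34, 2.21 and 3.08 quoted above):

```python
import math

test3 = [445, 22, 164, 1210]
y = [round(math.log10(v), 2) for v in test3]      # [2.65, 1.34, 2.21, 3.08]

# Euclidean distance in one dimension reduces to the absolute difference
for i in range(len(y)):
    print([round(abs(y[i] - y[j]), 2) for j in range(i + 1)])
# [0.0] / [1.31, 0.0] / [0.44, 0.87, 0.0] / [0.43, 1.74, 0.87, 0.0]
```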
WHY DOES IT MATTER WHETHER A VARIABLE IS
CATEGORICAL, ORDINAL OR INTERVAL?
 Statistical computations and analyses assume that the variables have a specific level of measurement.
 For example, it would not make sense to compute an average
hair colour.
 An average of a categorical variable does not make much
sense because there is no intrinsic ordering of the levels of the
categories.
 Moreover, if you tried to compute the average of educational
experience as defined in the ordinal section above, you would
also obtain a nonsensical result.
 Because the spacing between the four levels of educational
experience is very uneven, the meaning of this average would
be very questionable.
 In short, an average requires a variable to be interval.
 Sometimes you have variables that are "in between"
ordinal and interval, for example, a five-point scale with
values "strongly agree", "agree", "neutral", "disagree" and
"strongly disagree".
 If we cannot be sure that the intervals between each of
these five values are the same, then we would not be
able to say that this is an interval variable, but we would
say that it is an ordinal variable.
 However, in order to be able to use statistics that assume
the variable is interval, we will assume that the intervals
are equally spaced.
VARIABLES OF MIXED TYPES
 A database may contain all the six types of
variables
 symmetric binary, asymmetric binary, nominal, ordinal,
interval and ratio.
 One may use a weighted formula to combine their
effects.
 f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, and d_ij^(f) = 1 otherwise
 f is interval-based: use the normalized distance
 f is ordinal or ratio-scaled:
 compute the ranks r_if and z_if = (r_if − 1) / (M_f − 1)
 and treat z_if as interval-scaled
The effects of the variables are combined as
d(i, j) = Σ_{f=1}^{p} δ_ij^(f) d_ij^(f) / Σ_{f=1}^{p} δ_ij^(f)
where δ_ij^(f) is an indicator weight that is 0 when the measurement of variable f is missing for object i or j, and 1 otherwise.
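A small sketch of the weighted combination for objects 1 and 2 of the running example; note that rescaling the log-transformed test-3 values by their observed range is an assumption made here for illustration, not something specified on the slide:

```python
import math

def mixed_dissim(per_variable_d, deltas):
    """d(i, j) = sum(delta_f * d_f) / sum(delta_f) over the p variables."""
    return sum(w * d for w, d in zip(deltas, per_variable_d)) / sum(deltas)

# Objects 1 and 2: test-1 Code-A vs Code-B, test-2 Excellent vs Fair, test-3 445 vs 22
d_test1 = 1.0                                         # categorical: the values differ
d_test2 = abs(1.0 - 0.0)                              # ordinal ranks mapped to [0, 1]
logs = [math.log10(v) for v in (445, 22, 164, 1210)]
d_test3 = abs(logs[0] - logs[1]) / (max(logs) - min(logs))   # log-transformed, rescaled to [0, 1]
print(round(mixed_dissim([d_test1, d_test2, d_test3], [1, 1, 1]), 2))   # 0.92
```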
CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
MAJOR CLUSTERING APPROACHES
 Partitioning algorithms: Construct various partitions and then
evaluate them by some criterion
 Hierarchy algorithms: Create a hierarchical decomposition of the
set of data (or objects) using some criterion
 Density-based: based on connectivity and density functions
 Grid-based: based on a multiple-level granularity structure
 Model-based: A model is hypothesized for each of the clusters and the idea is to find the best fit of the data to the given model
CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
PARTITIONING ALGORITHMS: BASIC CONCEPT
 Partitioning method: Construct a partition of a
database D of n objects into a set of k clusters
 Given a k, find a partition of k clusters that
optimizes the chosen partitioning criterion
 Global optimal: exhaustively enumerate all partitions
 Heuristic methods: k-means and k-medoids algorithms
 k-means (MacQueen’67): Each cluster is represented
by the center of the cluster
 k-medoids or PAM (Partition around medoids)
(Kaufman & Rousseeuw’87): Each cluster is
represented by one of the objects in the cluster
THE K-MEANS CLUSTERING METHOD
 Given k, the k-means algorithm is implemented in 4
steps:
 Partition objects into k nonempty subsets
 Compute seed points as the centroids of the clusters of the
current partition. The centroid is the center (mean point) of
the cluster.
 Assign each object to the cluster with the nearest seed point.
 Go back to Step 2; stop when there are no new assignments.
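A minimal sketch of these four steps in plain Python (the initialisation simply takes the first k points as seeds, which is one of several possible choices):

```python
import math

def kmeans(points, k, max_iter=100):
    """Plain k-means: assign points to the nearest centroid, recompute centroids, repeat."""
    centroids = [list(p) for p in points[:k]]            # simple initialisation
    for _ in range(max_iter):
        clusters = [[] for _ in range(k)]
        for p in points:                                  # assignment step
            d = [math.dist(p, c) for c in centroids]
            clusters[d.index(min(d))].append(p)
        new_centroids = [
            [sum(coord) / len(c) for coord in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:                    # centroids (hence assignments) stable
            break
        centroids = new_centroids
    return centroids, clusters
```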
THE K-MEANS CLUSTERING METHOD
 Example
[Figure: four scatter plots (axes 0 to 10) illustrating successive iterations of k-means: initial assignment of points, recomputation of the centroids, reassignment, and convergence]
THE K-MEANS CLUSTERING FLOWCHART
[Figure: flowchart of the k-means procedure (not reproduced in this extraction)]
Suppose we have several objects (4 types of medicines), and each object has two attributes or features as shown in the table below. Our goal is to group these objects into K = 2 groups of medicine based on the two features (pH and weight index).
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
Object       attribute 1 (X): weight index   attribute 2 (Y): pH
Medicine A   1                               1
Medicine B   2                               1
Medicine C   4                               3
Medicine D   5                               4
Each medicine represents one point with two attributes (X, Y) that we can plot as a coordinate in an attribute space, as shown in the figure on the next slide.
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
1. Initial value of centroids: suppose we use medicine A and medicine B as the first centroids. Let c1 and c2 denote the coordinates of the centroids; then c1 = (1,1) and c2 = (2,1).
2. Objects-centroids distances: we calculate the distance from each cluster centroid to each object. Using the Euclidean distance, we obtain the distance matrix at iteration 0.
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
 Each column in the distance matrix symbolizes the object. The first
row of the distance matrix corresponds to the distance of each
object to the first centroid and the second row is the distance of
each object to the second centroid.
 For example, the distance from medicine C = (4, 3) to the first centroid c1 = (1,1) is √((4−1)² + (3−1)²) = 3.61, and its distance to the second centroid c2 = (2,1) is √((4−2)² + (3−1)²) = 2.83.
3. Objects clustering : We assign each object based on the
minimum distance. Thus, medicine A is assigned to group 1,
medicine B to group 2, medicine C to group 2 and medicine D to
group 2. The element of Group matrix below is 1 if and only if the
object is assigned to that group.
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
4. Iteration 1, determine centroids: knowing the members of each group, we now compute the new centroid of each group based on these new memberships. Group 1 has only one member, so its centroid remains c1 = (1, 1). Group 2 now has three members, so its centroid is the average coordinate of the three members: c2 = ((2 + 4 + 5)/3, (1 + 3 + 4)/3) = (11/3, 8/3).
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
5. Iteration 1, objects-centroids distances: the next step is to compute the distance of all objects to the new centroids. Similarly to step 2, we obtain the distance matrix at iteration 1.
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
6. Iteration 1, objects clustering: similarly to step 3, we assign each object based on the minimum distance. Based on the new distance matrix, we move medicine B to Group 1 while all the other objects remain where they were. The group matrix is shown below.
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
7. Iteration 2, determine centroids: we repeat step 4 to calculate the new centroid coordinates based on the clustering of the previous iteration. Group 1 and Group 2 both have two members, so the new centroids are c1 = ((1 + 2)/2, (1 + 1)/2) = (1.5, 1) and c2 = ((4 + 5)/2, (3 + 4)/2) = (4.5, 3.5).
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
8. Iteration 2, objects-centroids distances: repeating step 2, we obtain the new distance matrix at iteration 2.
9. Iteration 2, objects clustering: again, we assign each object based on the minimum distance.
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
 We obtain the result G2 = G1. Comparing the grouping of the last iteration with this iteration reveals that the objects no longer move between groups. The k-means computation has therefore reached stability and no more iterations are needed. We get the final grouping as the result:
THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
Object       attribute 1 (X): weight index   attribute 2 (Y): pH   Group (result)
Medicine A   1                               1                     1
Medicine B   2                               1                     1
Medicine C   4                               3                     2
Medicine D   5                               4                     2
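The whole numerical example can be checked with a few lines of Python (a sketch; medicines A and B are used as the initial centroids, as in step 1):

```python
import math

points    = {"A": (1, 1), "B": (2, 1), "C": (4, 3), "D": (5, 4)}
centroids = [(1.0, 1.0), (2.0, 1.0)]                  # c1 = A, c2 = B

for it in range(3):
    # assign every medicine to its nearest centroid (index 0 -> group 1, index 1 -> group 2)
    groups = {name: min(range(2), key=lambda g: math.dist(p, centroids[g]))
              for name, p in points.items()}
    # recompute each centroid as the mean of its members
    centroids = [
        tuple(sum(coord) / len(members) for coord in zip(*members))
        for members in ([points[n] for n, g in groups.items() if g == i] for i in range(2))
    ]
    print(it, groups, [tuple(round(v, 2) for v in c) for c in centroids])
# it 0: A -> group 1, B, C, D -> group 2; centroids (1, 1) and (3.67, 2.67)
# it 1: A, B -> group 1, C, D -> group 2; centroids (1.5, 1) and (4.5, 3.5)
# it 2: the grouping no longer changes, so the algorithm has converged
```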
STRENGTHS OF K-MEANS METHOD
 Strength
 Relatively insensitive to the ordering of the input data
 Easily understood
 k-means is the most widely used clustering method
 Relatively efficient: O(tkn), where n is # objects,
k is # clusters, and t is # iterations. Normally, k,
t << n.
WEAKNESSES OF K-MEANS METHOD
 Weakness
 Applicable only when mean is defined, then what about
categorical data?
 Need to specify k, the number of clusters, in advance
 It is not obvious what is a good k to use
 The process is sensitive with respect to outliers
 The algorithm lacks scalability
 Only numerical attributes are covered
 Resulting clusters can be unbalanced
 Unable to handle noisy data and outliers
 Not suitable to discover clusters with non-convex shapes
 The result strongly depends on the initial guess of
centroids (or assignments)
COMMENTS ON K-MEANS METHOD
 Complexity:
 K-Means: O(kn) per iteration
 Uses:
 All data sizes,
 Best with well separated clusters
 Examples:
PAM, CLARA, CLARANS, HMETIS, BAG
VARIATIONS OF THE K-MEANS METHOD
 K-medoids – instead of the mean, use the median of each cluster
 Mean of 1, 3, 5, 7, 9 is 5
 Mean of 1, 3, 5, 7, 1009 is 205
 Median of 1, 3, 5, 7, 1009 is 5
 Median advantage: not affected by extreme values
 For large databases, use sampling
THE K-MEDOIDS CLUSTERING METHOD
 Find representative objects, called medoids, in
clusters
 PAM (Partitioning Around Medoids, 1987)
 Starts from an initial set of medoids and iteratively
replaces one of the medoids by one of the non-
medoids if it improves the total distance of the resulting
clustering
 PAM works effectively for small data sets, but does not
scale well for large data sets
 CLARA (Kaufmann & Rousseeuw, 1990)
 CLARANS (Ng & Han, 1994): Randomized
sampling
K-MEDOIDS
PAM (PARTITIONING AROUND MEDOIDS)
K-Medoids
 Handles outliers well.
 Ordering of input does not impact results.
 Does not scale well.
 Each cluster represented by one item, called
the medoid.
 Initial set of k medoids randomly chosen.
PAM: BASIC STRATEGY
 First find a representative object (the medoid) for
each cluster
 Each remaining object is clustered with the
medoid to which it is most “similar”
 Iteratively replace one of the medoids by a non-
medoid as long as the “quality” of the clustering
is improved
DEMONSTRATION OF PAM
 Cluster the following data set of ten objects into two clusters, i.e. k = 2.
 Consider a data set of ten objects as follows: (3,4), (2,6), (3,8), (4,7), (7,4), (6,2), (6,4), (7,3), (8,5), (7,6).
 Step 1
 Initialise the k centres
 Let us assume c1 = (3,4) and c2 = (7,4)
 So here c1 and c2 are selected as medoids.
 Calculate the distances so as to associate each data object with its nearest medoid. Cost is calculated using the Minkowski distance metric with r = 1 (Manhattan distance).
DEMONSTRATION OF PAM (CONT…)
So the clusters become:
Cluster 1 = {(3,4), (2,6), (3,8), (4,7)}
Cluster 2 = {(7,4), (6,2), (6,4), (7,3), (8,5), (7,6)}
Since the points (2,6), (3,8) and (4,7) are closest to c1, they form one cluster, whilst the remaining points form the other cluster. The total cost involved is 20, where the cost between any two points is found using the formula
cost(x, c) = Σ_{i=1}^{d} |x_i − c_i|
DEMONSTRATION OF PAM (CONT…)
where x is any data object, c is the medoid, and d is the dimension of the object, which in this case is 2.
The total cost is the summation of the costs of the data objects from the medoids of their clusters, so here:
total cost = (3 + 4 + 4) + (3 + 1 + 1 + 2 + 2) = 11 + 9 = 20
DEMONSTRATION OF PAM (CONT…)
 Step 2
 Select a non-medoid O′ at random
 Let us assume O′ = (7,3)
 So now the medoids are c1 = (3,4) and O′ = (7,3)
 If c1 and O′ are the new medoids, calculate the total cost involved using the formula from Step 1
DEMONSTRATION OF PAM (CONT…)
DEMONSTRATION OF PAM (CONT…)
With medoids c1 = (3,4) and O′ = (7,3), each point is reassigned to its nearer medoid and the total cost works out to 11 + 11 = 22. The swap cost is therefore S = 22 − 20 = 2 > 0, so the swap is rejected and the original medoids (3,4) and (7,4) are kept.
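The cost comparison of this demonstration can be scripted as follows (a sketch, using the ten points listed earlier and Manhattan cost):

```python
def total_cost(points, medoids):
    """Sum of Manhattan distances from each point to its nearest medoid."""
    return sum(min(abs(x - mx) + abs(y - my) for mx, my in medoids) for x, y in points)

points = [(3, 4), (2, 6), (3, 8), (4, 7), (7, 4),
          (6, 2), (6, 4), (7, 3), (8, 5), (7, 6)]

old_cost = total_cost(points, [(3, 4), (7, 4)])   # 20
new_cost = total_cost(points, [(3, 4), (7, 3)])   # 22 with the candidate non-medoid (7,3)
print(old_cost, new_cost, new_cost - old_cost)    # swap cost S = 2 > 0, so the swap is rejected
```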
PAM (PARTITIONING AROUND MEDOIDS)
 PAM (Kaufman and Rousseeuw, 1987), built in Splus
 Use real object to represent the cluster
 Select k representative objects arbitrarily
 For each pair of non-selected object h and selected object
i, calculate the total swapping cost TCih
 For each pair of i and h,
 If TCih < 0, i is replaced by h
 Then assign each non-selected object to the most similar
representative object
 repeat steps 2-3 until there is no change
Adv & Disadvantages of PAM
o PAM is more robust than k-means in the presence of noise and outliers
o Medoids are less influenced by outliers
o PAM works efficiently for small data sets but does not scale well for large data sets
o For each iteration Cost TCih for k(n-k) pairs is
to be determined
o Sampling based method: CLARA
CLARA (CLUSTERING LARGE APPLICATIONS) (1990)
 CLARA (Kaufmann and Rousseeuw in 1990)
 Built in statistical analysis packages, such as S+
 It draws multiple samples of the data set, applies PAM on
each sample, and gives the best clustering as the output
 Strength: deals with larger data sets than PAM
 Weakness:
 Efficiency depends on the sample size
 A good clustering based on samples will not necessarily represent
a good clustering of the whole data set if the sample is biased
CLARANS (“RANDOMIZED” CLARA) (1994)
 CLARANS (A Clustering Algorithm based on
Randomized Search) (Ng and Han’94)
 CLARANS draws sample of neighbors dynamically
 The clustering process can be presented as
searching a graph where every node is a potential
solution, that is, a set of k medoids
CHAPTER 8. CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
HIERARCHICAL CLUSTERING
 Clusters are created in levels, actually creating sets of clusters at each level.
 Agglomerative Nesting (AGNES)
 Initially, each item is its own cluster
 Iteratively, clusters are merged together
 Bottom up
 Divisive Analysis (DIANA)
 Initially, all items are in one cluster
 Large clusters are successively divided
 Top down
EXAMPLE
DIFFICULTIES WITH HIERARCHICAL
CLUSTERING
 Can never undo.
 No object swapping is allowed
 Merge or split decisions, if not well chosen, may lead to poor-quality clusters.
 do not scale well: time complexity of at least O(n2),
where n is the number of total objects.
HIERARCHICAL ALGORITHMS
 Single Link
 Complete Link
 Average Link
DENDROGRAM
 Dendrogram: a tree data
structure which illustrates
hierarchical clustering
techniques.
 Each level shows clusters
for that level.
 Leaf – individual clusters
 Root – one cluster
 A cluster at level i is the
union of its children clusters
at level i+1.
LEVELS OF CLUSTERING
SINGLE LINK
 View all items with links (distances) between
them.
 Finds maximal connected components in this
graph.
 Two clusters are merged if there is at least
one edge which connects them.
 Uses threshold distances at each level.
 Could be agglomerative or divisive.
SINGLE LINKAGE CLUSTERING
 It is an example of agglomerative hierarchical
clustering.
 We consider the distance between one cluster
and another cluster to be equal to the shortest
distance from any member of one cluster to any
member of the other cluster.
ALGORITHM
Given a set of N items to be clustered, and an NxN distance (or
similarity) matrix, the basic process of single linkage clustering is
this:
1. Start by assigning each item to its own cluster, so that if we
have N items, we now have N clusters, each containing just
one item. Let the distances (similarities) between the
clusters equal the distances (similarities) between the
items they contain.
2. Find the closest (most similar) pair of clusters and merge
them into a single cluster, so that now you have one less
cluster.
3. Compute distances (similarities) between the new cluster and
each of the old clusters.
4. Repeat steps 2 and 3 until all items are clustered into a
single cluster of size N.
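A compact, unoptimised sketch of this procedure, applied below to the 5-object distance matrix used in the worked example that follows:

```python
def single_linkage(D, labels):
    """Naive agglomerative single-link clustering; D is a symmetric distance matrix (list of lists)."""
    clusters = [{i} for i in range(len(labels))]
    D = [row[:] for row in D]                                        # work on a copy
    while len(clusters) > 1:
        # step 2: find the closest pair of clusters
        dmin, i, j = min((D[i][j], i, j) for i in range(len(clusters)) for j in range(i))
        print("merge", sorted(labels[k] for k in clusters[j]), "+",
              sorted(labels[k] for k in clusters[i]), "at distance", dmin)
        # step 3: single-link update d(new, c) = min(d(i, c), d(j, c)) for every other cluster c
        for c in range(len(clusters)):
            D[j][c] = D[c][j] = min(D[i][c], D[j][c])
        clusters[j] |= clusters[i]
        del clusters[i]; del D[i]
        for row in D:
            del row[i]

D = [[0, 2, 6, 10, 9],
     [2, 0, 3, 9, 8],
     [6, 3, 0, 7, 5],
     [10, 9, 7, 0, 4],
     [9, 8, 5, 4, 0]]
single_linkage(D, ["1", "2", "3", "4", "5"])
# merges: {1}+{2} at 2, {1,2}+{3} at 3, {4}+{5} at 4, {1,2,3}+{4,5} at 5
```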
HOW TO COMPUTE GROUP SIMILARITY?
Given two groups g1 and g2, three popular methods:
 Single-link algorithm: s(g1, g2) = similarity of the closest pair
 Complete-link algorithm: s(g1, g2) = similarity of the farthest pair
 Average-link algorithm: s(g1, g2) = average similarity over all pairs
THREE METHODS ILLUSTRATED
[Figure: the closest pair (single link), farthest pair (complete link), and all pairs (average link) between two groups g1 and g2 (not reproduced)]
HIERARCHICAL: SINGLE LINK
Cluster similarity = similarity of two most
similar members
EXAMPLE: SINGLE LINK
Initial distance matrix over the five objects:

     1    2    3    4    5
1    0
2    2    0
3    6    3    0
4   10    9    7    0
5    9    8    5    4    0

Merge the closest pair, {1, 2} (distance 2), and update the distances with the single-link rule (minimum over the merged members):
d((1,2), 3) = min{d(1,3), d(2,3)} = min{6, 3} = 3
d((1,2), 4) = min{d(1,4), d(2,4)} = min{10, 9} = 9
d((1,2), 5) = min{d(1,5), d(2,5)} = min{9, 8} = 8

        (1,2)   3    4    5
(1,2)    0
3        3      0
4        9      7    0
5        8      5    4    0
Next, merge (1,2) with 3 (distance 3 is the smallest remaining entry):
d((1,2,3), 4) = min{d((1,2), 4), d(3,4)} = min{9, 7} = 7
d((1,2,3), 5) = min{d((1,2), 5), d(3,5)} = min{8, 5} = 5

          (1,2,3)   4    5
(1,2,3)     0
4           7       0
5           5       4    0
EXAMPLE: SINGLE LINK
Now the smallest entry is 4 = d(4, 5), so merge 4 and 5:
d((1,2,3), (4,5)) = min{d((1,2,3), 4), d((1,2,3), 5)} = min{7, 5} = 5

          (1,2,3)   (4,5)
(1,2,3)     0
(4,5)       5        0

Finally, (1,2,3) and (4,5) are merged at distance 5; the single-link merges therefore occur at distances 2, 3, 4 and 5.
HIERARCHICAL: COMPLETE LINK
Cluster similarity = similarity of two least
similar members
EXAMPLE: COMPLETE LINK
Starting from the same initial distance matrix, merge the closest pair {1, 2} (distance 2) and update with the complete-link rule (maximum over the merged members):
d((1,2), 3) = max{d(1,3), d(2,3)} = max{6, 3} = 6
d((1,2), 4) = max{d(1,4), d(2,4)} = max{10, 9} = 10
d((1,2), 5) = max{d(1,5), d(2,5)} = max{9, 8} = 9

        (1,2)   3    4    5
(1,2)    0
3        6      0
4       10      7    0
5        9      5    4    0
EXAMPLE: COMPLETE LINK
The smallest remaining entry is 4 = d(4, 5), so merge 4 and 5:
d((4,5), 3) = max{d(3,4), d(3,5)} = max{7, 5} = 7
d((4,5), (1,2)) = max{d((1,2), 4), d((1,2), 5)} = max{10, 9} = 10

        (1,2)   3    (4,5)
(1,2)    0
3        6      0
(4,5)   10      7     0
EXAMPLE: COMPLETE LINK
The smallest entry is now 6 = d((1,2), 3), so merge (1,2) with 3:
d((1,2,3), (4,5)) = max{d((1,2), (4,5)), d(3, (4,5))} = max{10, 7} = 10

          (1,2,3)   (4,5)
(1,2,3)     0
(4,5)      10        0

The final merge of (1,2,3) with (4,5) occurs at distance 10; the complete-link merges therefore occur at distances 2, 4, 6 and 10.
HIERARCHICAL: AVERAGE LINK
 Cluster similarity = average similarity of all
pairs
EXAMPLE: AVERAGE LINK
Starting from the same initial distance matrix, merge the closest pair {1, 2} (distance 2) and update with the average-link rule (average over all pairs between the two clusters):
d((1,2), 3) = (d(1,3) + d(2,3)) / 2 = (6 + 3) / 2 = 4.5
d((1,2), 4) = (d(1,4) + d(2,4)) / 2 = (10 + 9) / 2 = 9.5
d((1,2), 5) = (d(1,5) + d(2,5)) / 2 = (9 + 8) / 2 = 8.5

        (1,2)   3    4    5
(1,2)    0
3        4.5    0
4        9.5    7    0
5        8.5    5    4    0
The smallest remaining entry is 4 = d(4, 5), so merge 4 and 5:
d((4,5), 3) = (d(3,4) + d(3,5)) / 2 = (7 + 5) / 2 = 6
d((4,5), (1,2)) = (d(1,4) + d(1,5) + d(2,4) + d(2,5)) / 4 = (10 + 9 + 9 + 8) / 4 = 9

        (1,2)   3    (4,5)
(1,2)    0
3        4.5    0
(4,5)    9      6     0
The smallest entry is now 4.5 = d((1,2), 3), so merge (1,2) with 3:
d((1,2,3), (4,5)) = (d(1,4) + d(1,5) + d(2,4) + d(2,5) + d(3,4) + d(3,5)) / 6 = (10 + 9 + 9 + 8 + 7 + 5) / 6 = 8

          (1,2,3)   (4,5)
(1,2,3)     0
(4,5)       8        0

The final merge occurs at distance 8; the average-link merges therefore occur at distances 2, 4, 4.5 and 8.
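If SciPy is available, the three worked examples above can be verified in a few lines (a sketch; the last column of each linkage matrix holds the merge distances, which should match the values derived step by step above):

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

D = np.array([[0, 2, 6, 10, 9],
              [2, 0, 3, 9, 8],
              [6, 3, 0, 7, 5],
              [10, 9, 7, 0, 4],
              [9, 8, 5, 4, 0]], dtype=float)
condensed = squareform(D)                       # condensed form expected by linkage()

for method in ("single", "complete", "average"):
    Z = linkage(condensed, method=method)
    print(method, Z[:, 2])                      # merge distances
# single: [2, 3, 4, 5]; complete: [2, 4, 6, 10]; average: [2, 4, 4.5, 8]
```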
MORE ON HIERARCHICAL CLUSTERING METHODS
 Major weakness of agglomerative clustering
methods
 do not scale well: time complexity of at least O(n2), where
n is the number of total objects
 can never undo what was done previously
 Integration of hierarchical with distance-based
clustering
 BIRCH (1996): uses CF-tree and incrementally adjusts
the quality of sub-clusters
 CURE (1998): selects well-scattered points from the
cluster and then shrinks them towards the center of the
cluster by a specified fraction
 CHAMELEON (1999): hierarchical clustering using
dynamic modeling
BIRCH (1996)
 Birch: Balanced Iterative Reducing and Clustering using
Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD’96)
 Incrementally construct a CF (Clustering Feature) tree, a
hierarchical data structure for multiphase clustering
 Phase 1: scan DB to build an initial in-memory CF tree (a multi-
level compression of the data that tries to preserve the inherent
clustering structure of the data)
 Phase 2: use an arbitrary clustering algorithm to cluster the leaf
nodes of the CF-tree
 Scales linearly: finds a good clustering with a single scan
and improves the quality with a few additional scans
 Weakness: handles only numeric data, and sensitive to the
order of the data record.
Clustering Feature Vector
Clustering Feature: CF = (N, LS, SS)
N: number of data points
LS: linear sum of the N points, Σ_{i=1}^{N} X_i
SS: square sum of the N points, Σ_{i=1}^{N} X_i²
Example: for the five points (3,4), (2,6), (4,5), (4,7), (3,8):
CF = (5, (16,30), (54,190))
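The CF entry for these five points can be recomputed directly (a small sketch):

```python
points = [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]

N  = len(points)
LS = tuple(sum(p[d] for p in points) for d in range(2))        # linear sum per dimension
SS = tuple(sum(p[d] ** 2 for p in points) for d in range(2))   # square sum per dimension
print("CF =", (N, LS, SS))    # CF = (5, (16, 30), (54, 190))
```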
CF TREE
[Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6: the root holds CF entries pointing to non-leaf nodes, whose entries in turn point to leaf nodes; the CF entries in leaf nodes are chained with prev/next pointers]
CURE (CLUSTERING USING REPRESENTATIVES)
 CURE: proposed by Guha, Rastogi & Shim, 1998
 Stops the creation of a cluster hierarchy if a level consists
of k clusters
 Uses multiple representative points to evaluate the
distance between clusters, adjusts well to arbitrary
shaped clusters and avoids single-link effect
CURE: THE ALGORITHM
 Draw random sample s.
 Partition sample to p partitions with size s/p
 Partially cluster partitions into s/pq clusters
 Eliminate outliers
 By random sampling
 If a cluster grows too slow, eliminate it.
 Cluster partial clusters.
 Label data in disk
DATA PARTITIONING AND CLUSTERING
 s = 50
 p = 2
 s/p = 25
s/pq = 5
CURE: SHRINKING REPRESENTATIVE
POINTS
 Shrink the multiple representative points towards the gravity center by a fraction α.
 Multiple representatives capture the shape of the cluster
CLUSTERING CATEGORICAL DATA: ROCK
 ROCK: Robust Clustering using linKs,
by S. Guha, R. Rastogi, K. Shim (ICDE’99).
 Use links to measure similarity/proximity
 Not distance based
 Computational complexity: O(n² + n·m_m·m_a + n² log n)
 Basic ideas:
 Similarity function and neighbors:
Let T1 = {1,2,3}, T2 = {3,4,5}
Sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|
Sim(T1, T2) = |{3}| / |{1,2,3,4,5}| = 1/5 = 0.2
ROCK: ALGORITHM
 Links: The number of common neighbours for the two
points.
 Algorithm
 Draw random sample
 Cluster with links
 Label data in disk
Example: consider the ten points {1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}, {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}. The points {1,2,3} and {1,2,4} have 3 links (common neighbours).
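Both ideas can be checked with a short sketch. The similarity threshold θ that defines neighbours is not given on the slide; θ = 0.5 is assumed here, which reproduces the link count of 3 for {1,2,3} and {1,2,4}:

```python
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

points = [frozenset(c) for c in combinations(range(1, 6), 3)]   # all ten 3-item subsets of {1..5}
theta = 0.5                                                      # assumed neighbour threshold

def link(a, b):
    """Number of common neighbours of a and b (neighbours: similarity >= theta)."""
    na = {p for p in points if p not in (a, b) and jaccard(a, p) >= theta}
    nb = {p for p in points if p not in (a, b) and jaccard(b, p) >= theta}
    return len(na & nb)

print(jaccard(frozenset({1, 2, 3}), frozenset({3, 4, 5})))       # 0.2
print(link(frozenset({1, 2, 3}), frozenset({1, 2, 4})))          # 3
```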
CHAMELEON
 CHAMELEON: Hierarchical clustering using
dynamic modeling, by G. Karypis, E.H. Han and V.
Kumar’99
 Measures the similarity based on a dynamic model
 Two clusters are merged only if the interconnectivity and
closeness (proximity) between two clusters are high
relative to the internal interconnectivity of the clusters
and closeness of items within the clusters
 A two phase algorithm
 1. Use a graph partitioning algorithm: cluster objects
into a large number of relatively small sub-clusters
 2. Use an agglomerative hierarchical clustering
algorithm: find the genuine clusters by repeatedly
combining these sub-clusters
OVERALL FRAMEWORK OF CHAMELEON
Data Set → Construct Sparse Graph → Partition the Graph → Merge Partitions → Final Clusters
CHAPTER 8. CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
DENSITY-BASED CLUSTERING METHODS
 Clustering based on density (local cluster criterion),
such as density-connected points
 Major features:
 Discover clusters of arbitrary shape
 Handle noise
 One scan
 Need density parameters as termination condition
 Several interesting studies:
 DBSCAN: Ester, et al. (KDD’96)
 OPTICS: Ankerst, et al (SIGMOD’99).
 DENCLUE: Hinneburg & D. Keim (KDD’98)
 CLIQUE: Agrawal, et al. (SIGMOD’98)
DENSITY-BASED CLUSTERING: BACKGROUND
 Two parameters:
 Eps: Maximum radius of the neighbourhood
 MinPts: Minimum number of points in an Eps-neighbourhood of
that point
 NEps(p): {q belongs to D | dist(p,q) <= Eps}
 Directly density-reachable: A point p is directly density-
reachable from a point q wrt. Eps, MinPts if
 1) p belongs to NEps(q)
 2) core point condition:
|NEps (q)| >= MinPts
DENSITY-BASED CLUSTERING: BACKGROUND
(II)
 Density-reachable:
 A point p is density-reachable from a
point q wrt. Eps, MinPts if there is a
chain of points p1, …, pn, p1 = q, pn = p
such that pi+1 is directly density-
reachable from pi
 Density-connected
 A point p is density-connected to a point
q wrt. Eps, MinPts if there is a point o
such that both, p and q are density-
reachable from o wrt. Eps and MinPts.
DBSCAN: DENSITY BASED SPATIAL
CLUSTERING OF APPLICATIONS WITH NOISE
 Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
 Discovers clusters of arbitrary shape in spatial databases
with noise
[Figure: core, border, and outlier points for a given Eps and MinPts]
DBSCAN: THE ALGORITHM
 Arbitrarily select a point p
 Retrieve all points density-reachable from p wrt Eps and
MinPts.
 If p is a core point, a cluster is formed.
 If p is a border point, no points are density-reachable from p
and DBSCAN visits the next point of the database.
 Continue the process until all of the points have been
processed.
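A compact sketch of this procedure in plain Python (Euclidean neighbourhoods; an illustration of the idea, not the original pseudo-code):

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def region(i):
        return [j for j in range(len(points)) if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neighbours = region(i)
        if len(neighbours) < min_pts:          # not a core point (may later become a border point)
            labels[i] = -1
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbours)
        while seeds:                           # expand the cluster through density-reachable points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster            # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = region(j)
            if len(nj) >= min_pts:             # j is itself a core point: keep expanding
                seeds.extend(nj)
    return labels

pts = [(1, 1), (1.1, 1.0), (0.9, 1.2), (5, 5), (5.1, 5.0), (9, 9)]
print(dbscan(pts, eps=0.5, min_pts=2))   # [0, 0, 0, 1, 1, -1]: two clusters and one noise point
```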
OPTICS: A CLUSTER-ORDERING METHOD
(1999)
 OPTICS: Ordering Points To Identify the Clustering
Structure
 Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
 Produces a special order of the database wrt its density-based
clustering structure
 This cluster-ordering contains info equiv to the density-based
clusterings corresponding to a broad range of parameter
settings
 Good for both automatic and interactive cluster analysis,
including finding intrinsic clustering structure
 Can be represented graphically or using visualization
techniques
OPTICS: SOME EXTENSION FROM
DBSCAN
 Index-based:
 k = number of dimensions
 N = 20
 p = 75%
 M = N(1-p) = 5
 Complexity: O(kN2)
 Core Distance
 Reachability Distance
[Figure: core-distance and reachability-distance, with MinPts = 5 and ε = 3 cm; reachability-distance(p, o) = max(core-distance(o), d(o, p)), here r(p1, o) = 2.8 cm and r(p2, o) = 4 cm]
[Figure: reachability plot showing the reachability-distance against the cluster order of the objects (thresholds ε and ε′; some values undefined)]
DENCLUE: USING DENSITY
FUNCTIONS
 DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)
 Major features
 Solid mathematical foundation
 Good for data sets with large amounts of noise
 Allows a compact mathematical description of arbitrarily shaped
clusters in high-dimensional data sets
 Significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45)
 But needs a large number of parameters
DENCLUE: TECHNICAL ESSENCE
 Uses grid cells but only keeps information about grid
cells that do actually contain data points and manages
these cells in a tree-based access structure.
 Influence function: describes the impact of a data point
within its neighborhood.
 Overall density of the data space can be calculated as
the sum of the influence function of all data points.
 Clusters can be determined mathematically by
identifying density attractors.
 Density attractors are local maxima of the overall density function.
GRADIENT: THE STEEPNESS OF A SLOPE
 Example
Gaussian influence function of a data point y at a point x:
f_Gaussian(x, y) = e^(−d(x, y)² / (2σ²))
Overall density function of the data set D = {x_1, …, x_N}:
f_Gaussian^D(x) = Σ_{i=1}^{N} e^(−d(x, x_i)² / (2σ²))
Gradient of the density function:
∇f_Gaussian^D(x, x_i) = Σ_{i=1}^{N} (x_i − x) · e^(−d(x, x_i)² / (2σ²))
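A small sketch evaluating the Gaussian influence and density functions at a query point (the data set and σ are illustrative):

```python
import math

def gaussian_influence(x, y, sigma):
    """f_Gaussian(x, y) = exp(-d(x, y)^2 / (2 sigma^2))"""
    return math.exp(-math.dist(x, y) ** 2 / (2 * sigma ** 2))

def density(x, data, sigma):
    """Overall density at x: sum of the influences of all data points."""
    return sum(gaussian_influence(x, xi, sigma) for xi in data)

data = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0)]              # illustrative data set
print(round(density((1.1, 0.9), data, sigma=1.0), 2))    # ~1.98: high density near the first two points
```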
DENSITY ATTRACTOR
CENTER-DEFINED AND ARBITRARY
CHAPTER 8. CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
GRID-BASED CLUSTERING METHOD
 Using multi-resolution grid data structure
 Several interesting methods
 STING (a STatistical INformation Grid approach) by Wang,
Yang and Muntz (1997)
 WaveCluster by Sheikholeslami, Chatterjee, and Zhang
(VLDB’98)
 A multi-resolution clustering approach using wavelet method
 CLIQUE: Agrawal, et al. (SIGMOD’98)
STING: A STATISTICAL INFORMATION GRID
APPROACH
 Wang, Yang and Muntz (VLDB’97)
 The spatial area is divided into rectangular cells
 There are several levels of cells corresponding to different
levels of resolution
STING: A STATISTICAL INFORMATION
GRID APPROACH (2)
 Each cell at a high level is partitioned into a number of smaller cells in
the next lower level
 Statistical info of each cell is calculated and stored beforehand and is
used to answer queries
 Parameters of higher level cells can be easily calculated from
parameters of lower level cell
 count, mean, s, min, max
 type of distribution—normal, uniform, etc.
 Use a top-down approach to answer spatial data queries
 Start from a pre-selected layer—typically with a small number of cells
 For each cell in the current level compute the confidence interval
STING: A STATISTICAL INFORMATION
GRID APPROACH (3)
 Remove the irrelevant cells from further consideration
 When finish examining the current layer, proceed to the next
lower level
 Repeat this process until the bottom layer is reached
 Advantages:
 Query-independent, easy to parallelize, incremental update
 O(K), where K is the number of grid cells at the lowest level
 Disadvantages:
 All the cluster boundaries are either horizontal or vertical, and no
diagonal boundary is detected
WAVECLUSTER (1998)
 Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
 A multi-resolution clustering approach which applies
wavelet transform to the feature space
 A wavelet transform is a signal processing technique that decomposes a signal into different frequency sub-bands.
 Both grid-based and density-based
 Input parameters:
 # of grid cells for each dimension
 the wavelet, and the # of applications of wavelet transform.
WAVECLUSTER (1998)
 How to apply a wavelet transform to find clusters
 Summarize the data by imposing a multidimensional grid structure onto the data space
 These multidimensional spatial data objects are represented in an n-dimensional feature space
 Apply the wavelet transform on the feature space to find the dense regions in the feature space
 Apply the wavelet transform multiple times, which results in clusters at different scales from fine to coarse
WHAT IS WAVELET (2)?
QUANTIZATION
TRANSFORMATION
WAVECLUSTER (1998)
 Why is wavelet transformation useful for clustering
 Unsupervised clustering
It uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information at their boundaries
 Effective removal of outliers
 Multi-resolution
 Cost efficiency
 Major features:
 Complexity O(N)
 Detect arbitrary shaped clusters at different scales
 Not sensitive to noise, not sensitive to input order
 Only applicable to low dimensional data
CLIQUE (CLUSTERING IN QUEST)
 Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98).
 Automatically identifying subspaces of a high dimensional
data space that allow better clustering than original space
 CLIQUE can be considered as both density-based and grid-
based
 It partitions each dimension into the same number of equal-length intervals
 It partitions an m-dimensional data space into non-overlapping
rectangular units
 A unit is dense if the fraction of total data points contained in the
unit exceeds the input model parameter
 A cluster is a maximal set of connected dense units within a
subspace
CLIQUE: THE MAJOR STEPS
 Partition the data space and find the number of points that
lie inside each cell of the partition.
 Identify the subspaces that contain clusters using the
Apriori principle
 Identify clusters:
 Determine dense units in all subspaces of interests
 Determine connected dense units in all subspaces of interests.
 Generate minimal description for the clusters
 Determine maximal regions that cover a cluster of connected
dense units for each cluster
 Determination of minimal cover for each cluster
[Figure: CLIQUE example: the data is gridded in the Salary (×10,000) vs. age and Vacation (weeks) vs. age subspaces with density threshold τ = 3; dense units found in these 2-D subspaces are combined to locate the cluster region (roughly ages 30 to 50)]
STRENGTH AND WEAKNESS OF CLIQUE
 Strength
 It automatically finds subspaces of the highest dimensionality
such that high density clusters exist in those subspaces
 It is insensitive to the order of records in input and does not
presume some canonical data distribution
 It scales linearly with the size of input and has good scalability
as the number of dimensions in the data increases
 Weakness
 The accuracy of the clustering result may be degraded at the
expense of simplicity of the method
CHAPTER 8. CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
MODEL-BASED CLUSTERING METHODS
 Attempt to optimize the fit between the data and some
mathematical model
 Statistical and AI approach
 Conceptual clustering
 A form of clustering in machine learning
 Produces a classification scheme for a set of unlabeled objects
 Finds characteristic description for each concept (class)
 COBWEB (Fisher’87)
 A popular and simple method of incremental conceptual learning
 Creates a hierarchical clustering in the form of a classification tree
 Each node refers to a concept and contains a probabilistic
description of that concept
COBWEB CLUSTERING METHOD
A classification tree
MORE ON STATISTICAL-BASED CLUSTERING
 Limitations of COBWEB
 The assumption that the attributes are independent of each
other is often too strong because correlation may exist
 Not suitable for clustering large database data – skewed tree
and expensive probability distributions
 CLASSIT
 an extension of COBWEB for incremental clustering of
continuous data
 suffers similar problems as COBWEB
 AutoClass (Cheeseman and Stutz, 1996)
 Uses Bayesian statistical analysis to estimate the number of
clusters
 Popular in industry
OTHER MODEL-BASED CLUSTERING
METHODS
 Neural network approaches
 Represent each cluster as an exemplar, acting as a
“prototype” of the cluster
 New objects are distributed to the cluster whose exemplar is the most similar, according to some distance measure
 Competitive learning
 Involves a hierarchical architecture of several units (neurons)
 Neurons compete in a “winner-takes-all” fashion for the
object currently being presented
SELF-ORGANIZING FEATURE MAPS (SOMS)
 Clustering is also performed by having several units
competing for the current object
 The unit whose weight vector is closest to the current
object wins
 The winner and its neighbors learn by having their
weights adjusted
 SOMs are believed to resemble processing that can
occur in the brain
 Useful for visualizing high-dimensional data in 2- or 3-D
space
CHAPTER 8. CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
WHAT IS OUTLIER DISCOVERY?
 What are outliers?
 A set of objects that are considerably dissimilar from the remainder of the data
 Example: Sports: Michael Jordan, Wayne Gretzky, ...
 Problem
 Find top n outlier points
 Applications:
 Credit card fraud detection
 Telecom fraud detection
 Customer segmentation
 Medical analysis
OUTLIER DISCOVERY:
STATISTICAL APPROACHES
 Assume a model of the underlying distribution that generates the data set (e.g. a normal distribution)
 Use discordancy tests depending on
 data distribution
 distribution parameter (e.g., mean, variance)
 number of expected outliers
 Drawbacks
 most tests are for single attribute
 In many cases, data distribution may not be known
OUTLIER DISCOVERY: DISTANCE-BASED
APPROACH
 Introduced to counter the main limitations imposed by
statistical methods
 We need multi-dimensional analysis without knowing data
distribution.
 Distance-based outlier: A DB(p, D)-outlier is an object O
in a dataset T such that at least a fraction p of the objects
in T lies at a distance greater than D from O
 Algorithms for mining distance-based outliers
 Index-based algorithm
 Nested-loop algorithm
 Cell-based algorithm
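A naive nested-loop sketch of the DB(p, D)-outlier definition (quadratic in the number of objects; the points and parameter values are illustrative):

```python
import math

def db_outliers(points, p, D):
    """Return the points for which at least a fraction p of the other objects lie farther than D."""
    outliers = []
    for i, o in enumerate(points):
        others = [q for j, q in enumerate(points) if j != i]
        far = sum(1 for q in others if math.dist(o, q) > D)
        if far / len(others) >= p:
            outliers.append(o)
    return outliers

points = [(1, 1), (1.2, 0.9), (0.8, 1.1), (1.1, 1.0), (8, 8)]
print(db_outliers(points, p=0.9, D=2.0))   # [(8, 8)]
```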
OUTLIER DISCOVERY: DEVIATION-BASED
APPROACH
 Identifies outliers by examining the main characteristics
of objects in a group
 Objects that “deviate” from this description are
considered outliers
 sequential exception technique
 simulates the way in which humans can distinguish unusual
objects from among a series of supposedly like objects
 OLAP data cube technique
 uses data cubes to identify regions of anomalies in large
multidimensional data
CHAPTER 8. CLUSTER ANALYSIS
 What is Cluster Analysis?
 Types of Data in Cluster Analysis
 A Categorization of Major Clustering Methods
 Partitioning Methods
 Hierarchical Methods
 Density-Based Methods
 Grid-Based Methods
 Model-Based Clustering Methods
 Outlier Analysis
 Summary
PROBLEMS AND CHALLENGES
 Considerable progress has been made in scalable
clustering methods
 Partitioning: k-means, k-medoids, CLARANS
 Hierarchical: BIRCH, CURE
 Density-based: DBSCAN, CLIQUE, OPTICS
 Grid-based: STING, WaveCluster
 Model-based: Autoclass, Denclue, Cobweb
 Current clustering techniques do not address all the
requirements adequately
 Constraint-based clustering analysis: Constraints exist in
data space (bridges and highways) or in user queries
CONSTRAINT-BASED CLUSTERING ANALYSIS
 Clustering analysis: fewer parameters but more user-desired constraints, e.g., an ATM allocation problem
SUMMARY
 Cluster analysis groups objects based on their similarity
and has wide applications
 Measure of similarity can be computed for various types of
data
 Clustering algorithms can be categorized into partitioning
methods, hierarchical methods, density-based methods,
grid-based methods, and model-based methods
 Outlier detection and analysis are very useful for fraud
detection, etc. and can be performed by statistical,
distance-based or deviation-based approaches
 There are still lots of research issues on cluster analysis,
such as constraint-based clustering
REFERENCES (1)
 R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of
high dimensional data for data mining applications. SIGMOD'98
 M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
 M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. Optics: Ordering points to identify
the clustering structure, SIGMOD’99.
 P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996
 M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering
clusters in large spatial databases. KDD'96.
 M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases:
Focusing techniques for efficient class identification. SSD'95.
 D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning,
2:139-172, 1987.
 D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based
on dynamic systems. In Proc. VLDB’98.
 S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for large
databases. SIGMOD'98.
 A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
REFERENCES (2)
 L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster
Analysis. John Wiley & Sons, 1990.
 E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets.
VLDB’98.
 G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988.
 P. Michaud. Clustering techniques. Future Generation Computer systems, 13, 1997.
 R. Ng and J. Han. Efficient and effective clustering method for spatial data mining.
VLDB'94.
 E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large
data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.
 G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution
clustering approach for very large spatial databases. VLDB’98.
 W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial
data mining. VLDB'97.
 T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method
for very large databases. SIGMOD'96.
Thank you !!!
Cluster analysis (2)

  • 2. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 2
  • 3. GENERAL APPLICATIONS OF CLUSTERING  Pattern Recognition  Spatial Data Analysis  Create thematic maps in GIS by clustering feature spaces  Detect spatial clusters and explain them in spatial data mining  Image Processing  Economic Science (especially market research)  WWW  Document classification  Cluster Weblog data to discover groups of similar access patterns April 5, 2022 4
  • 4. EXAMPLES OF CLUSTERING APPLICATIONS  Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs  Land use: Identification of areas of similar land use in an earth observation database  Insurance: Identifying groups of motor insurance policy holders with a high average claim cost  City-planning: Identifying groups of houses according to their house type, value, and geographical location  Earth-quake studies: Observed earth quake epicenters should be clustered along continent faults April 5, 2022 5
  • 5. WHAT IS GOOD CLUSTERING?  A good clustering method will produce high quality clusters with  High intra-class similarity  Low inter-class similarity  The quality of a clustering result depends on both the similarity measure used by the method and its implementation.  The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns. April 5, 2022 6
  • 6. REQUIREMENTS OF CLUSTERING IN DATA MINING  Scalability  Ability to deal with different types of attributes  Discovery of clusters with arbitrary shape  Minimal requirements for domain knowledge to determine input parameters  Able to deal with noise and outliers  Insensitive to order of input records  High dimensionality  Incorporation of user-specified constraints  Interpretability and usability April 5, 2022 7
  • 7. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 8
  • 8. DATA STRUCTURES  Data matrix  (two modes)  Dissimilarity matrix  (one mode) April 5, 2022 9                   np x ... nf x ... n1 x ... ... ... ... ... ip x ... if x ... i1 x ... ... ... ... ... 1p x ... 1f x ... 11 x                 0 ... ) 2 , ( ) 1 , ( : : : ) 2 , 3 ( ) ... n d n d 0 d d(3,1 0 d(2,1) 0
  • 9. MEASURE THE QUALITY OF CLUSTERING  Dissimilarity/Similarity metric : Similarity is expressed in terms of a distance function, which is typically metric: d(i, j)  There is a separate “quality” function that measures the “goodness” of a cluster.  The definitions of distance functions are usually very different for interval-scaled, Boolean, Categorical, Ordinal and Ratio variables.  Weights should be associated with different variables based on applications and data semantics.  It is hard to define “similar enough” or “good enough”  the answer is typically highly subjective. April 5, 2022 10
  • 10. TYPE OF DATA IN CLUSTERING ANALYSIS  Interval-scaled variables:  Binary variables:  Nominal, ordinal, and ratio variables:  Variables of mixed types: April 5, 2022 11
  • 11. INTERVAL-VALUED VARIABLES  Standardize data  Calculate the mean absolute deviation: where  Calculate the standardized measurement (z-score)  Using mean absolute deviation is more robust than using standard deviation April 5, 2022 12 . ) ... 2 1 1 nf f f f x x (x n m     |) | ... | | | (| 1 2 1 f nf f f f f f m x m x m x n s        f f if if s m x z  
  • 12. SIMILARITY AND DISSIMILARITY BETWEEN OBJECTS  Distances are normally used to measure the similarity or dissimilarity between two data objects  Some popular ones include: Minkowski distance: where i = (xi1, xi2, …, xip) and j = (xj1, xj2, …, xjp) are two p- dimensional data objects, and q is a positive integer  If q = 1, d is Manhattan distance April 5, 2022 13 q q p p q q j x i x j x i x j x i x j i d ) | | ... | | | (| ) , ( 2 2 1 1        | | ... | | | | ) , ( 2 2 1 1 p p j x i x j x i x j x i x j i d       
  • 13. SIMILARITY AND DISSIMILARITY BETWEEN OBJECTS (CONT.)  If q = 2, d is Euclidean distance:  Properties  d(i,j)  0  d(i,i) = 0  d(i,j) = d(j,i)  d(i,j)  d(i,k) + d(k,j)  Also one can use weighted distance, parametric Pearson product moment correlation, or other dissimilarity measures. April 5, 2022 14 ) | | ... | | | (| ) , ( 2 2 2 2 2 1 1 p p j x i x j x i x j x i x j i d       
  • 14.  A Binary variable has only two states (0 or 1)  0 means variable is absent  1 means variable is present  Ex: variable smoker (1 indicates patient smokes and 0 indicates patient does not)  Treating binary variables as interval-scaled can lead to misleading results  There may be symmetric and asymmetric binary variables April 5, 2022 15 BINARY VARIABLES  To compute the dissimilarity between two binary variables  If all binary variables are thought of as having same weight , construct a 2-by-2 contingency table.
  • 15. BINARY VARIABLES (CONTD…)  A contingency table for binary data Where, p is total no of variable and p= a + b + c + d April 5, 2022 16 p d b c a sum d c d c b a b a sum     0 1 0 1 Object i Object j  A binary variable is symmetric if both of states are equally valuable and carry the same weight, no preference on the outcome should be coded as 0 or 1.  Ex: gender (male or female)  Dissimilarity based on symmetric binary variable is called symmetric binary dissimilarity d c b a c b j i d      ) , (
  • 16. April 5, 2022 17 c b a c b j i d     ) , ( BINARY VARIABLES (CONTD…)  A binary variable is asymmetric if the states are not equally important.  Ex: test (positive or negative)  By convention, we shall code the most important outcome, which is usually the rarest one by 1 (e.g. HIV positive) and the other by 0 (e.g. HIV negative)  Given two asymmetric variables , the agreement of two 1s (a positive match) is considered more significant than two 0s (a negative match)  Dissimilarity based on asymmetric binary variable is called asymmetric binary dissimilarity where the no. of –ve matches d is considered unimportant and thus ignored in the computation.
  • 17. April 5, 2022 18 BINARY VARIABLES (CONTD…)  Complementarily , we can measure the distance between two binary variables based on notion of similarity instead of dissimilarity  Jaccard coefficient , sim (i, j) = ) , ( 1 j i d c b a a    
  • 18. DISSIMILARITY BETWEEN BINARY VARIABLES  Example  gender is a symmetric attribute  the remaining attributes are asymmetric binary  let the values Y and P be set to 1, and the value N be set to 0 April 5, 2022 19 Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4 Jack M Y N P N N N Mary F Y N P N P N Jim M Y P N N N N 75 . 0 2 1 1 2 1 ) , ( 67 . 0 1 1 1 1 1 ) , ( 33 . 0 1 0 2 1 0 ) , (                m ary jim d jim jack d m ary jack d
  • 19. NOMINAL VARIABLES (OR CATEGORICAL)  A categorical variable (sometimes called a nominal variable) is one that has two or more categories  But there is no intrinsic ordering to the categories  For example, gender is a categorical variable having two categories (male and female) and there is no intrinsic ordering to the categories  Hair colour is also a categorical variable having a number of categories (blonde, brown, brunette, red, etc.) and again, there is no agreed way to order these from highest to lowest.  A purely categorical variable is one that simply allows you to assign categories but you cannot clearly order the variables. If the variable has a clear ordering, then that variable would be an ordinal variable April 5, 2022 20
  • 20.  A generalization of the binary variable is that it can take more than 2 states, e.g., red, yellow, blue, green  Let no. of states of a categorical variable be M  The states can be denoted as 1,2,….,M  Method : Simple matching  m: # of matches, p: total # of variables April 5, 2022 21 p m p j i d   ) , ( NOMINAL VARIABLES (OR CATEGORICAL) CONTD…
  • 21. Object Identifier Test-1 (categorical) Test-2 (ordinal) Test-3 (ratio-scaled) 1 Code-A Excellent 445 2 Code-B Fair 22 3 Code-C Good 164 4 Code-A Excellent 1,210 April 5, 2022 22 Data Mining: Concepts and Techniques NOMINAL VARIABLES (OR CATEGORICAL) CONTD… Example: Dissimilarity between categorical variables Suppose, we have object identifier and test-1 are the variables. The dissimilarity matrix is : 0 d(2,1) 0 d(3,1) d(3,2) 0 d(4,1) d(4,2) d(4,3) 0 Since, we have one categorical variable test-1, we set p=1 in equation and d(i, j) is 0 if objects i and j match, and 1 if differ. 0 1 0 1 1 0 0 1 1 0
  • 22.  An ordinal variable is similar to a categorical variable.  The difference between the two is that there is a clear ordering of the variables  For example, suppose you have a variable, economic status, with three categories (low, medium and high).  In addition to being able to classify people into these three categories, you can order the categories as low, medium and high  Now consider a variable like educational experience (with values such as elementary school graduate, high school graduate, some college and college graduate).  Even though we can order these from lowest to highest, the spacing between the values may not be the same across the levels of the variables. April 5, 2022 23 Data Mining: Concepts and Techniques ORDINAL VARIABLES
  • 23.  Say we assign scores 1, 2, 3 and 4 to these four levels of educational experience and we compare the difference in education between categories one and two with the difference in educational experience between categories two and three, or the difference between categories three and four.  The difference between categories one and two (elementary and high school) is probably much bigger than the difference between categories two and three (high school and some college).  In this example, we can order the people in level of educational experience but the size of the difference between categories is inconsistent (because the spacing between categories one and two is bigger than categories two and three). April 5, 2022 24 Data Mining: Concepts and Techniques ORDINAL VARIABLES (CONTD…)
  • 24. HOW ARE ORDINAL VARIABLES HANDLED ?  Quite similar as interval-scaled variable while computing the dissimilarity between objects  Let f is a variable from a set of ordinal variables describing n objects  The dissimilarity w.r.t f involves the following steps:  replacing xif by their rank  map the range of each variable onto [0, 1] by replacing i-th object in the f-th variable by  compute the dissimilarity using methods for interval-scaled variables April 5, 2022 25 1 1    f if if M r z } ,..., 1 { f if M r 
  • 25. April 5, 2022 26 ORDINAL VARIABLES (EXAMPLE)  Example: Dissimilarity between ordinal variables  Suppose, we have object identifier and test-2 are the variables.  There are 3 states for test-2 i.e Mf = 3  Step1 : Replace each value of test-2 by its rank, i.e 3,1,2,3  Step2: Normalize the ranking by mapping rank 1 to 0, rank 2 to 0.5 and rank 3 to 1.  Step3: Use Euclidian distance to find the dissimilarity matrix 0 1 0 0.5 0.5 0 0 1.0 0.5 0
  • 26. RATIO-SCALED VARIABLES  Ratio-scaled variable: a positive measurement on a nonlinear scale, approximately at exponential scale, such as AeBt or Ae-Bt  Methods:  treat them like interval-scaled variables — not a good choice! (since it is likely that the scale may be distorted)  apply logarithmic transformation yif = log(xif)  treat them as continuous ordinal data treat their rank as interval-scaled. April 5, 2022 27
  • 27.  Ratio variables are those in which the ratio of two of the numbers have meaning, such as miles per gallon, for example. If car A gets 15 mpg and car B gets 20 mpg, you can take the ratio of the two: 15/20 and compute 0.75, meaning car A gets 75% of the mileage of car B. April 5, 2022 28 RATIO-SCALED VARIABLES (EXAMPLE)
  • 28. April 5, 2022 29 RATIO-SCALED VARIABLES (EXAMPLE)  Example: Dissimilarity between ratio-scaled variables  Suppose, we have object identifier and test-3 are the variables.  Step1 : Let us try logarithmic transformation( the results are 2.65 , 1.34, 2.21 and 3.08)  Step3: Use Euclidian distance to find the dissimilarity matrix 0 1.31 0 0.44 0.87 0 0.43 1.74 0.87 0
  • 29. WHY DOES IT MATTER WHETHER A VARIABLE IS CATEGORICAL, ORDINAL OR INTERVAL?  Statistical computations and analyses assume that the variables have a specific levels of measurement.  For example, it would not make sense to compute an average hair colour.  An average of a categorical variable does not make much sense because there is no intrinsic ordering of the levels of the categories.  Moreover, if you tried to compute the average of educational experience as defined in the ordinal section above, you would also obtain a nonsensical result.  Because the spacing between the four levels of educational experience is very uneven, the meaning of this average would be very questionable. April 5, 2022 30
  • 30.  In short, an average requires a variable to be interval.  Sometimes you have variables that are "in between" ordinal and interval, for example, a five-point scale with values "strongly agree", "agree", "neutral", "disagree" and "strongly disagree".  If we cannot be sure that the intervals between each of these five values are the same, then we would not be able to say that this is an interval variable, but we would say that it is an ordinal variable.  However, in order to be able to use statistics that assume the variable is interval, we will assume that the intervals are equally spaced. April 5, 2022 31 Data Mining: Concepts and Techniques WHY DOES IT MATTER WHETHER A VARIABLE IS CATEGORICAL, ORDINAL OR INTERVAL?
  • 31. VARIABLES OF MIXED TYPES  A database may contain all the six types of variables  symmetric binary, asymmetric binary, nominal, ordinal, interval and ratio.  One may use a weighted formula to combine their effects.  f is binary or nominal: dij (f) = 0 if xif = xjf , or dij (f) = 1  f is interval-based: use the normalized distance  f is ordinal or ratio-scaled  compute ranks rif and  and treat zif as interval-scaled April 5, 2022 32 ) ( 1 ) ( ) ( 1 ) , ( f ij p f f ij f ij p f d j i d        1 1    f if M r zif
  • 32. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 33 Data Mining: Concepts and Techniques
  • 33. MAJOR CLUSTERING APPROACHES  Partitioning algorithms: Construct various partitions and then evaluate them by some criterion  Hierarchy algorithms: Create a hierarchical decomposition of the set of data (or objects) using some criterion  Density-based: based on connectivity and density functions  Grid-based: based on a multiple-level granularity structure  Model-based: A model is hypothesized for each of the clusters and the idea is to find the best fit of that model to each other April 5, 2022 34
  • 34. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 35
  • 35. PARTITIONING ALGORITHMS: BASIC CONCEPT  Partitioning method: Construct a partition of a database D of n objects into a set of k clusters  Given a k, find a partition of k clusters that optimizes the chosen partitioning criterion  Global optimal: exhaustively enumerate all partitions  Heuristic methods: k-means and k-medoids algorithms  k-means (MacQueen’67): Each cluster is represented by the center of the cluster  k-medoids or PAM (Partition around medoids) (Kaufman & Rousseeuw’87): Each cluster is represented by one of the objects in the cluster April 5, 2022 36
  • 36. THE K-MEANS CLUSTERING METHOD  Given k, the k-means algorithm is implemented in 4 steps:  Partition objects into k nonempty subsets  Compute seed points as the centroids of the clusters of the current partition. The centroid is the center (mean point) of the cluster.  Assign each object to the cluster with the nearest seed point.  Go back to Step 2, stop when no more new assignment. April 5, 2022 37
  • 37. THE K-MEANS CLUSTERING METHOD  Example April 5, 2022 38 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10
  • 39. Suppose we have several objects (4 types of medicines) and each object have two attributes or features as shown in table below. Our goal is to group these objects into K=2 group of medicine based on the two features (pH and weight index). April 5, 2022 40 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE Object attribute 1 (X): weight index attribute 2 (Y): pH Medicine A 1 1 Medicine B 2 1 Medicine C 4 3 Medicine D 5 4 Each medicine represents one point with two attributes (X, Y) that we can represent it as coordinate in an attribute space as shown in the figure next slide.
  • 40. April 5, 2022 41 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE 1. Initial value of centroids : Suppose we use medicine A and medicine B as the first centroids. Let c1 and c2 denote the coordinate of the centroids, then c1=(1,1) and c2= (2,1) 2.Objects-Centroids distance : we calculate the distance between cluster centroid to each object. Let us use Euclidean distance, then we have distance matrix at iteration 0 is
  • 41. April 5, 2022 42 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE  Each column in the distance matrix symbolizes the object. The first row of the distance matrix corresponds to the distance of each object to the first centroid and the second row is the distance of each object to the second centroid.  For example, distance from medicine C = (4, 3) to the first centroid c1=(1,1) is  and its distance to the second centroid c2= (2,1) is
  • 42. April 5, 2022 43 3. Objects clustering : We assign each object based on the minimum distance. Thus, medicine A is assigned to group 1, medicine B to group 2, medicine C to group 2 and medicine D to group 2. The element of Group matrix below is 1 if and only if the object is assigned to that group. THE K-MEANS CLUSTERING NUMERICAL EXAMPLE 4. Iteration-1, determine centroids : Knowing the members of each group, now we compute the new centroid of each group based on these new memberships. Group 1 only has one member thus the centroid remains in i.e c1= (1,1) and
  • 43. Group 2 now has three members, thus the centroid is the average coordinate among the three members: c1=(1,1) and April 5, 2022 44 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
  • 44. 5. Iteration-1, Objects-Centroids distances : The next step is to compute the distance of all objects to the new centroids. Similar to step 2, we have distance matrix at iteration 1 is April 5, 2022 45 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
  • 45. 6. Iteration-1, Objects clustering: Similar to step 3, we assign each object based on the minimum distance. Based on the new distance matrix, we move the medicine B to Group 1 while all the other objects remain. The Group matrix is shown below April 5, 2022 46 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
  • 46. April 5, 2022 47 7. Iteration 2, determine centroids: Now we repeat step 4 to calculate the new centroids coordinate based on the clustering of previous iteration. Group1 and group 2 both has two members, thus the new centroids are and THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
  • 47. April 5, 2022 48 8.Iteration-2,Objects-Centroids distances : Repeat step 2 again, we have new distance matrix at iteration 2 as 9.Iteration-2, Objects clustering : Again, we assign each object based on the minimum distance. THE K-MEANS CLUSTERING NUMERICAL EXAMPLE
  • 48.  We obtain result that G2 = G1. Comparing the grouping of last iteration and this iteration reveals that the objects does not move group anymore. Thus, the computation of the k-mean clustering has reached its stability and no more iteration is needed. We get the final grouping as the results April 5, 2022 49 THE K-MEANS CLUSTERING NUMERICAL EXAMPLE Object attribute 1 (X): weight index attribute 2 (Y): pH Group (result) Medicine A 1 1 1 Medicine B 2 1 1 Medicine C 4 3 2 Medicine D 5 4 2
  • 49. STRENGTHS OF K-MEANS METHOD  Strength  It is sensitive with respect to data ordering  Easily understood  k-means is most used method  Relatively efficient: O(tkn), where n is # objects, k is # clusters, and t is # iterations. Normally, k, t << n. April 5, 2022 50
  • 50.  Weakness  Applicable only when mean is defined, then what about categorical data?  Need to specify k, the number of clusters, in advance  It is not obvious what is a good k to use  The process is sensitive with respect to outliers  The algorithm lacks scalability  Only numerical attributes are covered  Resulting clusters can be unbalanced  Unable to handle noisy data and outliers  Not suitable to discover clusters with non-convex shapes  The result strongly depends on the initial guess of centroids (or assignments) April 5, 2022 51 WEAKNESSES OF K-MEANS METHOD
  • 51.  Complexity:  K-Means: O(kn) per iteration  Uses:  All data sizes,  Best with well separated clusters  Examples: PAM, CLARA, CLARANS, HMETIS, BAG April 5, 2022 52 COMMENTS ON K-MEANS METHOD
  • 52. VARIATIONS OF THE K-MEANS METHOD April 5, 2022 53  K-medoids – instead of mean, use medians of each cluster  Mean of 1, 3, 5, 7, 9 is  Mean of 1, 3, 5, 7, 1009 is  Median of 1, 3, 5, 7, 1009 is  Median advantage: not affected by extreme values  For large databases, use sampling 5 205 5
  • 53. THE K-MEDOIDS CLUSTERING METHOD  Find representative objects, called medoids, in clusters  PAM (Partitioning Around Medoids, 1987)  Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non- medoids if it improves the total distance of the resulting clustering  PAM works effectively for small data sets, but does not scale well for large data sets  CLARA (Kaufmann & Rousseeuw, 1990)  CLARANS (Ng & Han, 1994): Randomized sampling April 5, 2022 54
  • 55. April 5, 2022 56 PAM (PARTITIONING AROUND MEDOIDS) K-Medoids  Handles outliers well.  Ordering of input does not impact results.  Does not scale well.  Each cluster represented by one item, called the medoid.  Initial set of k medoids randomly chosen.
  • 56. April 5, 2022 57 PAM: BASIC STRATEGY  First find a representative object (the medoid) for each cluster  Each remaining object is clustered with the medoid to which it is most “similar”  Iteratively replace one of the medoids by a non- medoid as long as the “quality” of the clustering is improved
  • 57. DEMONSTRATION OF PAM  Cluster the following data set of ten objects into two clusters i.e k = 2.  Consider a data set of ten objects as follows: April 5, 2022 58
  • 58.  Step 1  Initialise k centre  Let us assume c1 = (3,4) and c2 = (7,4)  So here c1 and c2 are selected as medoid.  Calculating distance so as to associate each data object to its nearest medoid. Cost is calculated using Minkowski distance metric with r = 1. April 5, 2022 59 DEMONSTRATION OF PAM (CONT…)
  • 59. April 5, 2022 60 Then so the clusters become: Cluster1 = {(3,4)(2,6)(3,8)(4,7)} Cluster2 = {(7,4)(6,2)(6,4)(7,3)(8,5)(7,6)} Since the points (2,6) (3,8) and (4,7) are close to c1 hence they form one cluster whilst remaining points form another cluster. So the total cost involved is 20. Where cost between any two points is found using formula DEMONSTRATION OF PAM (CONT…)
  • 60. April 5, 2022 61 where x is any data object, c is the medoid, and d is the dimension of the object which in this case is 2. Total cost is the summation of the cost of data object from its medoid in its cluster so here: DEMONSTRATION OF PAM (CONT…)
  • 61.  Step 2  Selection of nonmedoid O′ randomly  Let us assume O′ = (7,3)  So now the medoids are c1(3,4) and O′(7,3)  If c1 and O′ are new medoids, calculate the total cost involved  By using the formula in the step 1 April 5, 2022 62 DEMONSTRATION OF PAM (CONT…)
  • 63. PAM (PARTITIONING AROUND MEDOIDS)  PAM (Kaufman and Rousseeuw, 1987), built in Splus  Use real object to represent the cluster  Select k representative objects arbitrarily  For each pair of non-selected object h and selected object i, calculate the total swapping cost TCih  For each pair of i and h,  If TCih < 0, i is replaced by h  Then assign each non-selected object to the most similar representative object  repeat steps 2-3 until there is no change April 5, 2022 64
  • 64. April 5, 2022 65 o PAM is more robust than k-means in the presence of noise and outliers o Medoids are less influenced by outliers o PAM is efficiently for small data sets but does not scale well for large data sets o For each iteration Cost TCih for k(n-k) pairs is to be determined o Sampling based method: CLARA Adv & Disadvantages of PAM
  • 65. CLARA (CLUSTERING LARGE APPLICATIONS) (1990)  CLARA (Kaufmann and Rousseeuw in 1990)  Built in statistical analysis packages, such as S+  It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output  Strength: deals with larger data sets than PAM  Weakness:  Efficiency depends on the sample size  A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased April 5, 2022 66
  • 66. CLARANS (“RANDOMIZED” CLARA) (1994)  CLARANS (A Clustering Algorithm based on Randomized Search) (Ng and Han’94)  CLARANS draws sample of neighbors dynamically  The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids April 5, 2022 67
  • 67. CHAPTER 8. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 68 Data Mining: Concepts and Techniques
  • 68. April 5, 2022 69 HIERARCHICAL CLUSTERING  Clusters are created in levels actually creating sets of clusters at each level.  Agglomerative Nesting( AGNES)  Initially each item is its own cluster  Iteratively clusters are merged together  Bottom Up  Divisive Analysis(DIANA)  Initially all items in one cluster  Large clusters are successively divided  Top Down
  • 70. April 5, 2022 71 DIFFICULTIES WITH HIERARCHICAL CLUSTERING  Can never undo.  No object swapping is allowed  Merge or split decisions ,if not well chosen may lead to poor quality clusters.  do not scale well: time complexity of at least O(n2), where n is the number of total objects.
  • 71. April 5, 2022 72 HIERARCHICAL ALGORITHMS  Single Link  Complete Link  Average Link
  • 72. April 5, 2022 73 DENDROGRAM  Dendrogram: a tree data structure which illustrates hierarchical clustering techniques.  Each level shows clusters for that level.  Leaf – individual clusters  Root – one cluster  A cluster at level i is the union of its children clusters at level i+1.
  • 74. April 5, 2022 75 SINGLE LINK  View all items with links (distances) between them.  Finds maximal connected components in this graph.  Two clusters are merged if there is at least one edge which connects them.  Uses threshold distances at each level.  Could be agglomerative or divisive.
  • 75. April 5, 2022 76 SINGLE LINKAGE CLUSTERING  It is an example of agglomerative hierarchical clustering.  We consider the distance between one cluster and another cluster to be equal to the shortest distance from any member of one cluster to any member of the other cluster.
  • 76. 77 ALGORITHM Given a set of N items to be clustered, and an NxN distance (or similarity) matrix, the basic process of single linkage clustering is this: 1. Start by assigning each item to its own cluster, so that if we have N items, we now have N clusters, each containing just one item. Let the distances (similarities) between the clusters equal the distances (similarities) between the items they contain. 2. Find the closest (most similar) pair of clusters and merge them into a single cluster, so that now you have one less cluster. 3. Compute distances (similarities) between the new cluster and each of the old clusters. 4. Repeat steps 2 and 3 until all items are clustered into a single cluster of size N.
  • 77. April 5, 2022 78 HOW TO COMPUTE GROUP SIMILARITY? Given two groups g1 and g2, Single-link algorithm: s(g1,g2)= similarity of the closest pair Complete-link algorithm: s(g1,g2)= similarity of the farthest pair Average-link algorithm: s(g1,g2)= average of similarity of all pairs Three Popular Methods:
  • 78. April 5, 2022 79 Single-link algorithm ? g1 g2 complete-link algorithm …… average-link algorithm THREE METHODS ILLUSTRATED
  • 79. April 5, 2022 80 HIERARCHICAL: SINGLE LINK Cluster similarity = similarity of two most similar members
  • 83. April 5, 2022 84 HIERARCHICAL: COMPLETE LINK Cluster similarity = similarity of two least similar members
  • 87. April 5, 2022 88 HIERARCHICAL: AVERAGE LINK  Cluster similarity = average similarity of all pairs
  • 91. MORE ON HIERARCHICAL CLUSTERING METHODS  Major weakness of agglomerative clustering methods  do not scale well: time complexity of at least O(n2), where n is the number of total objects  can never undo what was done previously  Integration of hierarchical with distance-based clustering  BIRCH (1996): uses CF-tree and incrementally adjusts the quality of sub-clusters  CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction  CHAMELEON (1999): hierarchical clustering using dynamic modeling April 5, 2022 92
  • 92. BIRCH (1996)  Birch: Balanced Iterative Reducing and Clustering using Hierarchies, by Zhang, Ramakrishnan, Livny (SIGMOD’96)  Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering  Phase 1: scan DB to build an initial in-memory CF tree (a multi- level compression of the data that tries to preserve the inherent clustering structure of the data)  Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree  Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans  Weakness: handles only numeric data, and sensitive to the order of the data record. April 5, 2022 93
  • 93. April 5, 2022 94 Clustering Feature Vector Clustering Feature: CF = (N, LS, SS) N: Number of data points LS: N i=1=Xi SS: N i=1=Xi 2 0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 CF = (5, (16,30),(54,190)) (3,4) (2,6) (4,5) (4,7) (3,8)
  • 94. CF TREE April 5, 2022 95 CF1 child1 CF3 child3 CF2 child2 CF6 child6 CF1 child1 CF3 child3 CF2 child2 CF5 child5 CF1 CF2 CF6 prev next CF1 CF2 CF4 prev next B = 7 L = 6 Root Non-leaf node Leaf node Leaf node
  • 95. CURE (CLUSTERING USING REPRESENTATIVES)  CURE: proposed by Guha, Rastogi & Shim, 1998  Stops the creation of a cluster hierarchy if a level consists of k clusters  Uses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrary shaped clusters and avoids single-link effect April 5, 2022 96
  • 96. CURE: THE ALGORITHM  Draw random sample s.  Partition sample to p partitions with size s/p  Partially cluster partitions into s/pq clusters  Eliminate outliers  By random sampling  If a cluster grows too slow, eliminate it.  Cluster partial clusters.  Label data in disk April 5, 2022 97
  • 97. DATA PARTITIONING AND CLUSTERING  s = 50  p = 2  s/p = 25 April 5, 2022 98 Data Mining: Concepts and Techniques x x x y y y y x y x s/pq = 5
  • 98. CURE: SHRINKING REPRESENTATIVE POINTS  Shrink the multiple representative points towards the gravity center by a fraction of .  Multiple representatives capture the shape of the cluster April 5, 2022 99 x y x y
  • 99. CLUSTERING CATEGORICAL DATA: ROCK  ROCK: Robust Clustering using linKs, by S. Guha, R. Rastogi, K. Shim (ICDE’99).  Use links to measure similarity/proximity  Not distance based  Computational complexity:  Basic ideas:  Similarity function and neighbors: Let T1 = {1,2,3}, T2={3,4,5} April 5, 2022 100 O n nm m n n m a ( log ) 2 2   Sim T T T T T T ( , ) 1 2 1 2 1 2    Sim T T ( , ) { } { , , , , } . 1 2 3 1 2 3 4 5 1 5 0 2   
  • 100. ROCK: ALGORITHM  Links: The number of common neighbours for the two points.  Algorithm  Draw random sample  Cluster with links  Label data in disk April 5, 2022 101 {1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5} {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5} {1,2,3} {1,2,4} 3
  • 101. CHAMELEON  CHAMELEON: Hierarchical clustering using dynamic modeling, by G. Karypis, E.H. Han and V. Kumar’99  Measures the similarity based on a dynamic model  Two clusters are merged only if the interconnectivity and closeness (proximity) between two clusters are high relative to the internal interconnectivity of the clusters and closeness of items within the clusters  A two phase algorithm  1. Use a graph partitioning algorithm: cluster objects into a large number of relatively small sub-clusters  2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters April 5, 2022 102
  • 102. OVERALL FRAMEWORK OF CHAMELEON April 5, 2022 103 Construct Sparse Graph Partition the Graph Merge Partition Final Clusters Data Set
  • 103. CHAPTER 8. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 104 Data Mining: Concepts and Techniques
  • 104. DENSITY-BASED CLUSTERING METHODS  Clustering based on density (local cluster criterion), such as density-connected points  Major features:  Discover clusters of arbitrary shape  Handle noise  One scan  Need density parameters as termination condition  Several interesting studies:  DBSCAN: Ester, et al. (KDD’96)  OPTICS: Ankerst, et al (SIGMOD’99).  DENCLUE: Hinneburg & D. Keim (KDD’98)  CLIQUE: Agrawal, et al. (SIGMOD’98) April 5, 2022 105 Data Mining: Concepts and Techniques
  • 105. DENSITY-BASED CLUSTERING: BACKGROUND  Two parameters:  Eps: Maximum radius of the neighbourhood  MinPts: Minimum number of points in an Eps-neighbourhood of that point  NEps(p): {q belongs to D | dist(p,q) <= Eps}  Directly density-reachable: A point p is directly density- reachable from a point q wrt. Eps, MinPts if  1) p belongs to NEps(q)  2) core point condition: |NEps (q)| >= MinPts April 5, 2022 106 Data Mining: Concepts and Techniques p q MinPts = Eps = 1 c
  • 106. DENSITY-BASED CLUSTERING: BACKGROUND (II)  Density-reachable:  A point p is density-reachable from a point q wrt. Eps, MinPts if there is a chain of points p1, …, pn, p1 = q, pn = p such that pi+1 is directly density- reachable from pi  Density-connected  A point p is density-connected to a point q wrt. Eps, MinPts if there is a point o such that both, p and q are density- reachable from o wrt. Eps and MinPts. April 5, 2022 107 Data Mining: Concepts and Techniques p q p1 p q o
  • 107. DBSCAN: DENSITY BASED SPATIAL CLUSTERING OF APPLICATIONS WITH NOISE  Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points  Discovers clusters of arbitrary shape in spatial databases with noise April 5, 2022 108 Data Mining: Concepts and Techniques Core Border Outlier Eps Min
  • 108. DBSCAN: THE ALGORITHM  Arbitrary select a point p  Retrieve all points density-reachable from p wrt Eps and MinPts.  If p is a core point, a cluster is formed.  If p is a border point, no points are density-reachable from p and DBSCAN visits the next point of the database.  Continue the process until all of the points have been processed. April 5, 2022 109 Data Mining: Concepts and Techniques
  • 109. OPTICS: A CLUSTER-ORDERING METHOD (1999)  OPTICS: Ordering Points To Identify the Clustering Structure  Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)  Produces a special order of the database wrt its density-based clustering structure  This cluster-ordering contains info equiv to the density-based clusterings corresponding to a broad range of parameter settings  Good for both automatic and interactive cluster analysis, including finding intrinsic clustering structure  Can be represented graphically or using visualization techniques April 5, 2022 110 Data Mining: Concepts and Techniques
  • 110. OPTICS: SOME EXTENSIONS FROM DBSCAN  Index-based:  k = number of dimensions  N = 20  p = 75%  M = N(1 − p) = 5  Complexity: O(kN²)  Core-distance of an object o: the smallest distance that makes o a core object (undefined if o is not a core object within Eps)  Reachability-distance: r(p, o) = max(core-distance(o), d(o, p))  (figure example with MinPts = 5, ε = 3 cm: r(p1, o) = 2.8 cm, r(p2, o) = 4 cm) April 5, 2022 111 Data Mining: Concepts and Techniques
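A sketch of these two quantities, assuming numpy points, Euclidean distance, and brute-force distance computation (purely illustrative; OPTICS itself uses index-supported range queries).

```python
import numpy as np

def core_distance(X, o, eps, min_pts):
    """Distance to the MinPts-th nearest object of o (counting o itself),
    or None (undefined) if o has fewer than MinPts neighbours within eps."""
    dists = np.sort(np.linalg.norm(X - X[o], axis=1))
    if np.sum(dists <= eps) < min_pts:
        return None
    return dists[min_pts - 1]          # dists[0] is o itself, at distance 0

def reachability_distance(X, p, o, eps, min_pts):
    """r(p, o) = max(core-distance(o), d(o, p)); undefined if o is not a core object."""
    cd = core_distance(X, o, eps, min_pts)
    if cd is None:
        return None
    return max(cd, np.linalg.norm(X[p] - X[o]))
```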
  • 111. (figure: OPTICS reachability plot — reachability-distance plotted against the cluster order of the objects for thresholds ε and ε′; the distance is undefined for points not reachable within ε) April 5, 2022 112 Data Mining: Concepts and Techniques
  • 112. DENCLUE: USING DENSITY FUNCTIONS  DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)  Major features:  Solid mathematical foundation  Good for data sets with large amounts of noise  Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets  Significantly faster than existing algorithms (faster than DBSCAN by a factor of up to 45)  But needs a large number of parameters April 5, 2022 113 Data Mining: Concepts and Techniques
  • 113. DENCLUE: TECHNICAL ESSENCE  Uses grid cells, but only keeps information about grid cells that actually contain data points, and manages these cells in a tree-based access structure  Influence function: describes the impact of a data point within its neighborhood  The overall density of the data space can be calculated as the sum of the influence functions of all data points  Clusters can be determined mathematically by identifying density attractors  Density attractors are local maxima of the overall density function April 5, 2022 114 Data Mining: Concepts and Techniques
  • 114. GRADIENT: THE STEEPNESS OF A SLOPE  Example (Gaussian influence, density, and gradient functions):  Influence function: $f_{Gaussian}(x, y) = e^{-\frac{d(x, y)^2}{2\sigma^2}}$  Overall density: $f^{D}_{Gaussian}(x) = \sum_{i=1}^{N} e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$  Gradient: $\nabla f^{D}_{Gaussian}(x, x_i) = \sum_{i=1}^{N} (x_i - x) \cdot e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$ April 5, 2022 115 Data Mining: Concepts and Techniques
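A small numpy sketch of the formulas above: the Gaussian density, its gradient, and a fixed-step hill climb toward a density attractor. The step size, iteration count, and stopping tolerance are illustrative simplifications of DENCLUE's attractor search.

```python
import numpy as np

def gaussian_density(x, data, sigma):
    """f^D_Gaussian(x): sum of the Gaussian influence functions of all data points at x."""
    sq_dists = np.sum((data - x) ** 2, axis=1)
    return np.sum(np.exp(-sq_dists / (2 * sigma ** 2)))

def gaussian_gradient(x, data, sigma):
    """Gradient of the overall density at x: sum over (x_i - x) weighted by the influence."""
    diffs = data - x
    weights = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * sigma ** 2))
    return np.sum(diffs * weights[:, None], axis=0)

def hill_climb(x, data, sigma, step=0.1, iters=100):
    """Follow the gradient from x toward a density attractor (a local maximum
    of the overall density); simple fixed-step variant for illustration."""
    for _ in range(iters):
        g = gaussian_gradient(x, data, sigma)
        if np.linalg.norm(g) < 1e-6:
            break
        x = x + step * g / np.linalg.norm(g)
    return x
```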
  • 117. CHAPTER 8. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 118 Data Mining: Concepts and Techniques
  • 118. GRID-BASED CLUSTERING METHOD  Using multi-resolution grid data structure  Several interesting methods  STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)  WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB’98)  A multi-resolution clustering approach using wavelet method  CLIQUE: Agrawal, et al. (SIGMOD’98) April 5, 2022 119 Data Mining: Concepts and Techniques
  • 119. STING: A STATISTICAL INFORMATION GRID APPROACH  Wang, Yang and Muntz (VLDB’97)  The spatial area is divided into rectangular cells  There are several levels of cells corresponding to different levels of resolution April 5, 2022 120 Data Mining: Concepts and Techniques
  • 120. STING: A STATISTICAL INFORMATION GRID APPROACH (2)  Each cell at a high level is partitioned into a number of smaller cells at the next lower level  Statistical information for each cell is calculated and stored beforehand and is used to answer queries  Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells  count, mean, standard deviation, min, max  type of distribution (normal, uniform, etc.)  Use a top-down approach to answer spatial data queries  Start from a pre-selected layer, typically with a small number of cells  For each cell in the current level, compute the confidence interval
  • 121. STING: A STATISTICAL INFORMATION GRID APPROACH (3)  Remove the irrelevant cells from further consideration  When finished examining the current layer, proceed to the next lower level  Repeat this process until the bottom layer is reached  Advantages:  Query-independent, easy to parallelize, incremental update  O(K), where K is the number of grid cells at the lowest level  Disadvantages:  All the cluster boundaries are either horizontal or vertical; no diagonal boundary is detected
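A sketch of how a parent cell's statistics can be derived from its children without revisiting the raw points, which is what makes STING's pre-computation and incremental updates cheap. The dict-based cell representation and the variance-combination formula are illustrative assumptions.

```python
import numpy as np

def aggregate_cell_stats(children):
    """Compute (count, mean, std, min, max) of a parent cell from the statistics
    of its child cells. Each child is a dict with keys: n, mean, std, min, max."""
    n = sum(c["n"] for c in children)
    mean = sum(c["n"] * c["mean"] for c in children) / n
    # combine variances via E[X^2] = var + mean^2 within each child
    ex2 = sum(c["n"] * (c["std"] ** 2 + c["mean"] ** 2) for c in children) / n
    return {"n": n,
            "mean": mean,
            "std": float(np.sqrt(ex2 - mean ** 2)),
            "min": min(c["min"] for c in children),
            "max": max(c["max"] for c in children)}
```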
  • 122. WAVECLUSTER (1998)  Sheikholeslami, Chatterjee, and Zhang (VLDB’98)  A multi-resolution clustering approach which applies a wavelet transform to the feature space  A wavelet transform is a signal-processing technique that decomposes a signal into different frequency sub-bands  Both grid-based and density-based  Input parameters:  # of grid cells for each dimension  the wavelet, and the # of applications of the wavelet transform April 5, 2022 123 Data Mining: Concepts and Techniques
  • 123. WAVECLUSTER (1998)  How to apply the wavelet transform to find clusters  Summarize the data by imposing a multidimensional grid structure onto the data space  These multidimensional spatial data objects are represented in an n-dimensional feature space  Apply the wavelet transform on the feature space to find the dense regions in the feature space  Apply the wavelet transform multiple times, which results in clusters at different scales, from fine to coarse April 5, 2022 125 Data Mining: Concepts and Techniques
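A minimal sketch of the first two steps for 2-D data: quantize the points into a count grid, then take one level of a Haar-style low-pass (averaging) step so that dense regions survive while fine detail and scattered points are smoothed away. WaveCluster itself uses specific wavelet filters and several transform levels; the 2x2 averaging here is only an illustration of the approximation sub-band.

```python
import numpy as np

def quantize_to_grid(X, bins):
    """Summarize 2-D data by counting points in each cell of a bins x bins grid."""
    counts, _, _ = np.histogram2d(X[:, 0], X[:, 1], bins=bins)
    return counts

def haar_approximation(grid):
    """One level of a 2-D Haar-style transform, keeping only the low-frequency
    (approximation) sub-band: each 2x2 block of cells is averaged."""
    g = grid[: grid.shape[0] // 2 * 2, : grid.shape[1] // 2 * 2]   # crop to even dims
    return 0.25 * (g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2])
```

Clusters would then be found as connected components of cells whose transformed value exceeds a density threshold, at each scale.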
  • 124. WHAT IS A WAVELET? (2)  (figure-only slide) April 5, 2022 126 Data Mining: Concepts and Techniques
  • 127. WAVECLUSTER (1998)  Why is the wavelet transform useful for clustering?  Unsupervised clustering: it uses hat-shaped filters to emphasize regions where points cluster, while simultaneously suppressing weaker information at their boundaries  Effective removal of outliers  Multi-resolution  Cost efficiency  Major features:  Complexity O(N)  Detects arbitrarily shaped clusters at different scales  Not sensitive to noise, not sensitive to input order  Only applicable to low-dimensional data April 5, 2022 129 Data Mining: Concepts and Techniques
  • 128. CLIQUE (CLUSTERING IN QUEST)  Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98)  Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space  CLIQUE can be considered as both density-based and grid-based  It partitions each dimension into the same number of equal-length intervals  It partitions an m-dimensional data space into non-overlapping rectangular units  A unit is dense if the fraction of the total data points contained in the unit exceeds the input model parameter  A cluster is a maximal set of connected dense units within a subspace April 5, 2022 130 Data Mining: Concepts and Techniques
  • 129. CLIQUE: THE MAJOR STEPS  Partition the data space and find the number of points that lie inside each cell of the partition  Identify the subspaces that contain clusters using the Apriori principle  Identify clusters:  Determine dense units in all subspaces of interest  Determine connected dense units in all subspaces of interest  Generate a minimal description for the clusters  Determine maximal regions that cover a cluster of connected dense units for each cluster  Determine a minimal cover for each cluster April 5, 2022 131 Data Mining: Concepts and Techniques
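A sketch of the first two steps: find dense 1-D units per dimension, then use the Apriori principle so that only 2-D units whose two 1-D projections are dense are ever counted. Here tau is treated as a fraction-of-points threshold and the interval construction is illustrative, not CLIQUE's exact bookkeeping.

```python
import numpy as np

def unit_index(X, dim, n_intervals):
    """Map each point to an equal-length interval index along one dimension."""
    edges = np.linspace(X[:, dim].min(), X[:, dim].max(), n_intervals + 1)
    return np.clip(np.digitize(X[:, dim], edges) - 1, 0, n_intervals - 1)

def dense_units_1d(X, n_intervals, tau):
    """Dense 1-D units: (dimension, interval) pairs whose point fraction exceeds tau."""
    n, d = X.shape
    dense = set()
    for dim in range(d):
        counts = np.bincount(unit_index(X, dim, n_intervals), minlength=n_intervals)
        dense |= {(dim, u) for u in range(n_intervals) if counts[u] / n > tau}
    return dense

def dense_units_2d(X, dense_1d, n_intervals, tau):
    """Apriori step: a 2-D unit can only be dense if both 1-D projections are dense,
    so only those candidate units are counted."""
    n = len(X)
    dense = set()
    dims = sorted({dim for dim, _ in dense_1d})
    for a in dims:
        for b in dims:
            if b <= a:
                continue
            ia, ib = unit_index(X, a, n_intervals), unit_index(X, b, n_intervals)
            for ua in {u for dim, u in dense_1d if dim == a}:
                for ub in {u for dim, u in dense_1d if dim == b}:
                    if np.sum((ia == ua) & (ib == ub)) / n > tau:
                        dense.add(((a, ua), (b, ub)))
    return dense
```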
  • 130. (figure: CLIQUE example — dense units found in the (age, salary) and (age, vacation) subspaces, with 7 intervals per dimension and density threshold τ = 3; the dense units fall in the age range of roughly 30–50) April 5, 2022 132 Data Mining: Concepts and Techniques
  • 131. STRENGTH AND WEAKNESS OF CLIQUE  Strength  It automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces  It is insensitive to the order of records in the input and does not presume any canonical data distribution  It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases  Weakness  The accuracy of the clustering result may be degraded for the sake of the simplicity of the method April 5, 2022 133 Data Mining: Concepts and Techniques
  • 132. CHAPTER 8. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 134 Data Mining: Concepts and Techniques
  • 133. MODEL-BASED CLUSTERING METHODS  Attempt to optimize the fit between the data and some mathematical model  Statistical and AI approaches  Conceptual clustering  A form of clustering in machine learning  Produces a classification scheme for a set of unlabeled objects  Finds a characteristic description for each concept (class)  COBWEB (Fisher’87)  A popular and simple method of incremental conceptual learning  Creates a hierarchical clustering in the form of a classification tree  Each node refers to a concept and contains a probabilistic description of that concept April 5, 2022 135 Data Mining: Concepts and Techniques
  • 135. MORE ON STATISTICAL-BASED CLUSTERING  Limitations of COBWEB  The assumption that the attributes are independent of each other is often too strong, because correlations may exist  Not suitable for clustering large database data: skewed tree and expensive probability distributions  CLASSIT  An extension of COBWEB for incremental clustering of continuous data  Suffers from similar problems as COBWEB  AutoClass (Cheeseman and Stutz, 1996)  Uses Bayesian statistical analysis to estimate the number of clusters  Popular in industry April 5, 2022 137 Data Mining: Concepts and Techniques
  • 136. OTHER MODEL-BASED CLUSTERING METHODS  Neural network approaches  Represent each cluster as an exemplar, acting as a “prototype” of the cluster  New objects are distributed to the cluster whose exemplar is the most similar according to some distance measure  Competitive learning  Involves a hierarchical architecture of several units (neurons)  Neurons compete in a “winner-takes-all” fashion for the object currently being presented April 5, 2022 138 Data Mining: Concepts and Techniques
  • 138. SELF-ORGANIZING FEATURE MAPS (SOMS)  Clustering is also performed by having several units competing for the current object  The unit whose weight vector is closest to the current object wins  The winner and its neighbors learn by having their weights adjusted  SOMs are believed to resemble processing that can occur in the brain  Useful for visualizing high-dimensional data in 2- or 3-D space April 5, 2022 140 Data Mining: Concepts and Techniques
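A minimal SOM training loop illustrating the winner-takes-all step and the neighbourhood update described above. The grid shape, learning-rate decay, and Gaussian neighbourhood schedule are illustrative choices, not the only possible ones.

```python
import numpy as np

def train_som(X, grid_shape=(10, 10), iters=1000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM: a grid of units, each with a weight vector; the winner
    and its grid neighbours move toward the currently presented object."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    W = rng.normal(size=(rows, cols, X.shape[1]))                    # unit weight vectors
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)           # grid positions
    for t in range(iters):
        x = X[rng.integers(len(X))]                                  # present one object
        dists = np.linalg.norm(W - x, axis=2)
        winner = np.unravel_index(np.argmin(dists), dists.shape)     # winner-takes-all
        lr = lr0 * (1 - t / iters)                                   # decaying learning rate
        sigma = sigma0 * (1 - t / iters) + 1e-3                      # shrinking neighbourhood
        grid_d2 = np.sum((coords - np.array(winner)) ** 2, axis=2)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))                      # neighbourhood function
        W += lr * h[:, :, None] * (x - W)                            # adjust winner and neighbours
    return W
```

After training, each data object can be mapped to its best-matching unit, giving a 2-D layout useful for visualizing high-dimensional data.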
  • 139. CHAPTER 8. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 141 Data Mining: Concepts and Techniques
  • 140. WHAT IS OUTLIER DISCOVERY?  What are outliers?  A set of objects that are considerably dissimilar from the remainder of the data  Example: sports: Michael Jordan, Wayne Gretzky, ...  Problem  Find the top n outlier points  Applications:  Credit card fraud detection  Telecom fraud detection  Customer segmentation  Medical analysis April 5, 2022 142 Data Mining: Concepts and Techniques
  • 141. OUTLIER DISCOVERY: STATISTICAL APPROACHES  Assume a model of the underlying distribution that generates the data set (e.g. a normal distribution)  Use discordancy tests, depending on  the data distribution  the distribution parameters (e.g., mean, variance)  the number of expected outliers  Drawbacks  Most tests are for a single attribute  In many cases, the data distribution may not be known April 5, 2022 143 Data Mining: Concepts and Techniques
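One common single-attribute discordancy check, a z-score rule under an assumed normal distribution; it illustrates the "single attribute" limitation noted above. The threshold of 3 standard deviations is a conventional but arbitrary choice.

```python
import numpy as np

def zscore_outliers(x, threshold=3.0):
    """Indices of values more than `threshold` standard deviations from the mean,
    assuming the data follow (approximately) a normal distribution."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.where(np.abs(z) > threshold)[0]
```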
  • 142. OUTLIER DISCOVERY: DISTANCE-BASED APPROACH  Introduced to counter the main limitations imposed by statistical methods  We need multi-dimensional analysis without knowing the data distribution  Distance-based outlier: a DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lie at a distance greater than D from O  Algorithms for mining distance-based outliers  Index-based algorithm  Nested-loop algorithm  Cell-based algorithm
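A direct O(N²) realization of the DB(p, D) definition in the spirit of the nested-loop algorithm (without the block-based I/O optimizations of the actual method); it works for multi-dimensional numpy data and assumes Euclidean distance.

```python
import numpy as np

def db_outliers(X, p, D):
    """Indices of DB(p, D)-outliers: objects for which at least a fraction p
    of the other objects lie at a distance greater than D."""
    n = len(X)
    outliers = []
    for i in range(n):
        far = 0
        for j in range(n):
            if i != j and np.linalg.norm(X[i] - X[j]) > D:
                far += 1
        if far / (n - 1) >= p:
            outliers.append(i)
    return outliers
```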
  • 143. OUTLIER DISCOVERY: DEVIATION-BASED APPROACH  Identifies outliers by examining the main characteristics of the objects in a group  Objects that “deviate” from this description are considered outliers  Sequential exception technique  Simulates the way in which humans distinguish unusual objects from among a series of supposedly similar objects  OLAP data cube technique  Uses data cubes to identify regions of anomalies in large multidimensional data April 5, 2022 145 Data Mining: Concepts and Techniques
  • 144. CHAPTER 8. CLUSTER ANALYSIS  What is Cluster Analysis?  Types of Data in Cluster Analysis  A Categorization of Major Clustering Methods  Partitioning Methods  Hierarchical Methods  Density-Based Methods  Grid-Based Methods  Model-Based Clustering Methods  Outlier Analysis  Summary April 5, 2022 146 Data Mining: Concepts and Techniques
  • 145. PROBLEMS AND CHALLENGES  Considerable progress has been made in scalable clustering methods  Partitioning: k-means, k-medoids, CLARANS  Hierarchical: BIRCH, CURE  Density-based: DBSCAN, OPTICS, DENCLUE, CLIQUE  Grid-based: STING, WaveCluster  Model-based: AutoClass, COBWEB  Current clustering techniques do not address all the requirements adequately  Constraint-based clustering analysis: constraints exist in the data space (bridges and highways) or in user queries April 5, 2022 147 Data Mining: Concepts and Techniques
  • 146. CONSTRAINT-BASED CLUSTERING ANALYSIS  Clustering analysis with fewer parameters but more user-desired constraints, e.g., an ATM allocation problem April 5, 2022 148 Data Mining: Concepts and Techniques
  • 147. SUMMARY  Cluster analysis groups objects based on their similarity and has wide applications  Measures of similarity can be computed for various types of data  Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods  Outlier detection and analysis are very useful for fraud detection, etc., and can be performed by statistical, distance-based, or deviation-based approaches  There are still many open research issues in cluster analysis, such as constraint-based clustering April 5, 2022 149 Data Mining: Concepts and Techniques
  • 148. REFERENCES (1)  R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98.  M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.  M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99.  P. Arabie, L. J. Hubert, and G. De Soete. Clustering and Classification. World Scientific, 1996.  M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96.  M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95.  D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987.  D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. VLDB'98.  S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98.  A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988. April 5, 2022 150 Data Mining: Concepts and Techniques
  • 149. REFERENCES (2)  L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990.  E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.  G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988.  P. Michaud. Clustering techniques. Future Generation Computer Systems, 13, 1997.  R. Ng and J. Han. Efficient and effective clustering methods for spatial data mining. VLDB'94.  E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition, 101-105.  G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98.  W. Wang, J. Yang, and R. Muntz. STING: A statistical information grid approach to spatial data mining. VLDB'97.  T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for very large databases. SIGMOD'96. April 5, 2022 151 Data Mining: Concepts and Techniques