Data Mining
Cluster Analysis: Advanced Concepts
and Algorithms
Lecture Notes for Chapter 8
Introduction to Data Mining
by
Tan, Steinbach, Kumar
Introduction to Data Mining, 2nd Edition
Tan, Steinbach, Karpatne, Kumar
Outline
๏ฌ Prototype-based
โ€“ Fuzzy c-means
โ€“ Mixture Model Clustering
โ€“ Self-Organizing Maps
๏ฌ Density-based
โ€“ Grid-based clustering
โ€“ Subspace clustering: CLIQUE
โ€“ Kernel-based: DENCLUE
๏ฌ Graph-based
โ€“ Chameleon
โ€“ Jarvis-Patrick
โ€“ Shared Nearest Neighbor (SNN)
๏ฌ Characteristics of Clustering Algorithms
Hard (Crisp) vs Soft (Fuzzy) Clustering
๏ฌ Hard (Crisp) vs. Soft (Fuzzy) clustering
โ€“ For soft clustering, allow a point to belong to more than one cluster
โ€“ For K-means, generalize objective function
๐‘ค๐‘–๐‘—: weight with which object xi belongs to cluster ๐’„๐’‹
โ€“ To minimize SSE, repeat the following steps:
๏ต Fix ๐’„๐’‹and determine ๐‘ค๐‘–๐‘— (cluster assignment)
๏ต Fix ๐‘ค๐‘–๐‘—and recompute ๐’„๐’‹
โ€“ Hard clustering: w_ij ∈ {0, 1}

SSE = Σ_{j=1..k} Σ_{i=1..m} w_ij dist(x_i, c_j)²,  subject to  Σ_{j=1..k} w_ij = 1 for each object x_i
Soft (Fuzzy) Clustering: Estimating Weights
SSE(x) = wx1 (2.5 − 1)² + wx2 (5 − 2.5)² = 2.25 wx1 + 6.25 wx2

(Figure: a one-dimensional example with centroids c1 = 1 and c2 = 5 and a single point x at 2.5.)

SSE(x) has a minimum value of 2.25 when wx1 = 1, wx2 = 0
Fuzzy C-means
๏ฌ Objective function:

SSE = Σ_{j=1..k} Σ_{i=1..m} w_ij^p dist(x_i, c_j)²,  subject to  Σ_{j=1..k} w_ij = 1 for each object x_i

๏ต w_ij: weight with which object x_i belongs to cluster c_j
๏ต p: fuzzifier (p > 1); p is a power for the weight, not a superscript, and controls how "fuzzy" the clustering is
โ€“ To minimize the objective function, repeat the following:
๏ต Fix c_j and determine w_ij
๏ต Fix w_ij and recompute c_j
โ€“ Fuzzy c-means clustering: w_ij ∈ [0, 1]

Bezdek, James C. Pattern Recognition with Fuzzy Objective Function Algorithms. Kluwer Academic Publishers, 1981.
Fuzzy C-means

SSE(x) = wx1² (2.5 − 1)² + wx2² (5 − 2.5)² = 2.25 wx1² + 6.25 wx2²

(Figure: the same one-dimensional example, with centroids c1 = 1 and c2 = 5 and the point x at 2.5.)

SSE(x) has a minimum value of 1.654 when wx1 = 0.74, wx2 = 0.26
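To check these numbers, here is a tiny plain-Python sketch (it assumes the closed-form weight update for p = 2 given on the following slides; the variable names are mine):

```python
# Squared distances from x = 2.5 to the centroids c1 = 1 and c2 = 5.
d1_sq = (2.5 - 1) ** 2    # 2.25
d2_sq = (5 - 2.5) ** 2    # 6.25

# For p = 2 the minimizing weights are proportional to 1/d^2 (the fuzzy c-means update).
w1 = (1 / d1_sq) / (1 / d1_sq + 1 / d2_sq)
w2 = (1 / d2_sq) / (1 / d1_sq + 1 / d2_sq)

sse = w1 ** 2 * d1_sq + w2 ** 2 * d2_sq
print(round(w1, 2), round(w2, 2), round(sse, 3))   # 0.74 0.26 1.654
```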
Fuzzy C-means
๏ฌ Objective function:

SSE = Σ_{j=1..k} Σ_{i=1..m} w_ij^p dist(x_i, c_j)²,  subject to  Σ_{j=1..k} w_ij = 1 for each object x_i   (p: fuzzifier, p > 1)

๏ฌ Initialization: choose the weights w_ij randomly, subject to the constraint that Σ_{j=1..k} w_ij = 1 for each object
๏ฌ Repeat:
โ€“ Update centroids:  c_j = Σ_{i=1..m} w_ij^p x_i / Σ_{i=1..m} w_ij^p
โ€“ Update weights:  w_ij = (1 / dist(x_i, c_j)²)^{1/(p−1)} / Σ_{q=1..k} (1 / dist(x_i, c_q)²)^{1/(p−1)}
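A minimal NumPy sketch of this loop (an illustrative implementation of the updates above, not code from the text; the function name and default parameter values are my own):

```python
import numpy as np

def fuzzy_c_means(X, k, p=2, n_iter=100, seed=0):
    """Fuzzy c-means: X is (m, d); returns (centroids, weights)."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    # Random initial weights, normalized so each row sums to 1.
    w = rng.random((m, k))
    w /= w.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # Update centroids: c_j = sum_i w_ij^p x_i / sum_i w_ij^p
        wp = w ** p
        centroids = (wp.T @ X) / wp.sum(axis=0)[:, None]

        # Squared distances from every point to every centroid.
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)          # avoid division by zero

        # Update weights: w_ij is proportional to (1/d_ij^2)^(1/(p-1)), rows normalized to 1.
        inv = (1.0 / d2) ** (1.0 / (p - 1))
        w = inv / inv.sum(axis=1, keepdims=True)

    return centroids, w

# Example usage on two small 2-D blobs.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
centroids, w = fuzzy_c_means(X, k=2)
print(centroids)
print(w[:3])   # soft memberships of the first three points
```

As p approaches 1 from above, the weight update pushes each row of w toward a single cluster, so the result approaches a hard k-means assignment.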
Fuzzy K-means Applied to Sample Data
(Figure: clustering of the sample data; color indicates each point's maximum membership weight, on a scale from 0.5 to 0.95.)
An Example Application: Image Segmentation
๏ฌ Modified versions of fuzzy c-means have been
used for image segmentation
โ€“ Especially fMRI images (functional magnetic
resonance images)
๏ฌ References
โ€“ Gong, Maoguo, Yan Liang, Jiao Shi, Wenping Ma, and Jingjing Ma. "Fuzzy c-means clustering with local information and kernel metric for image segmentation." IEEE Transactions on Image Processing 22, no. 2 (2013): 573-584.
โ€“ Ahmed, Mohamed N., Sameh M. Yamany, Nevin Mohamed, Aly A. Farag, and Thomas Moriarty. "A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data." IEEE Transactions on Medical Imaging 21, no. 3 (2002): 193-199.
(Figure, from left to right: original images, fuzzy c-means, EM, BCFCM.)
Clustering Using Mixture Models
๏ฌ Idea is to model the set of data points as arising from a
mixture of distributions
โ€“ Typically, normal (Gaussian) distribution is used
โ€“ But other distributions have been very profitably used
๏ฌ Clusters are found by estimating the parameters of the
statistical distributions using the Expectation-
Maximization (EM) algorithm
๏ต k-means is a special case of this approach
๏ต Provides a compact representation of clusters
๏ต The probabilities with which a point belongs to each cluster provide functionality similar to fuzzy clustering.
Probabilistic Clustering: Example
๏ฌ Informal example: consider
modeling the points that
generate the following
histogram.
๏ฌ Looks like a combination of two
normal (Gaussian) distributions
๏ฌ Suppose we can estimate the
mean and standard deviation of each normal distribution.
โ€“ This completely describes the two clusters
โ€“ We can compute the probabilities with which each point belongs to each
cluster
โ€“ Can assign each point to the
cluster (distribution) for which it
is most probable.
Probabilistic Clustering: EM Algorithm
Initialize the parameters
Repeat
For each point, compute its probability under each distribution
Using these probabilities, update the parameters of each distribution
Until there is no change
๏ฌ Very similar to K-means
๏ฌ Consists of assignment and update steps
๏ฌ Can use random initialization
โ€“ Problem of local minima
๏ฌ For normal distributions, typically use K-means to initialize
๏ฌ If using normal distributions, can find elliptical as well as
spherical shapes.
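For a concrete illustration, a short sketch using scikit-learn's GaussianMixture, which runs this EM loop internally (assuming scikit-learn is available; the data below are synthetic):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two 1-D Gaussian clusters, like the histogram example above.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-4, 1, 300), rng.normal(3, 2, 300)]).reshape(-1, 1)

# EM clustering with k = 2 Gaussian components (k-means is used internally to initialize).
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

print(gmm.means_.ravel())        # estimated cluster means
print(gmm.covariances_.ravel())  # estimated variances
print(gmm.predict_proba(X[:3]))  # soft membership probabilities for a few points
print(gmm.predict(X[:3]))        # hard assignment to the most probable cluster
```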
Probabilistic Clustering: Updating Centroids
Update formula for centroids, assuming an estimate of the statistical parameters:

c_j = Σ_{i=1..m} p(C_j | x_i) x_i / Σ_{i=1..m} p(C_j | x_i)

(x_i is a data point, C_j is a cluster, c_j is a centroid)

๏ฌ Very similar to the fuzzy k-means formula
โ€“ Weights are probabilities
โ€“ Weights are not raised to a power
โ€“ Probabilities calculated using Bayes rule:

p(C_j | x_i) = p(x_i | C_j) p(C_j) / Σ_{l=1..k} p(x_i | C_l) p(C_l)

๏ฌ Need to assign weights to each cluster
โ€“ Weights may not be equal
โ€“ Similar to prior probabilities
โ€“ Can be estimated:

p(C_j) = (1/m) Σ_{i=1..m} p(C_j | x_i)
More Detailed EM Algorithm
Probabilistic Clustering Applied to Sample Data
(Figure: clustering of the sample data; color indicates each point's maximum cluster probability, on a scale from 0.5 to 0.95.)
Probabilistic Clustering: Dense and Sparse Clusters
(Figure: a two-dimensional data set containing a dense cluster and a sparse cluster, with a point marked "?" whose cluster assignment is ambiguous.)
Problems with EM
๏ฌ Convergence can be slow
๏ฌ Only guarantees finding local maxima
๏ฌ Makes some significant statistical assumptions
๏ฌ Number of parameters for Gaussian distribution
grows as O(d2), d the number of dimensions
โ€“ Parameters associated with covariance matrix
โ€“ K-means only estimates cluster means, which grow
as O(d)
Alternatives to EM
๏ฌ Method of moments / Spectral methods
โ€“ ICML 2014 workshop bibliography
https://sites.google.com/site/momentsicml2014/bibliography
๏ฌ Markov chain Monte Carlo (MCMC)
๏ฌ Other approaches
SOM: Self-Organizing Maps
๏ฌ Self-organizing maps (SOM)
โ€“ Centroid based clustering scheme
โ€“ Like K-means, a fixed number of clusters are specified
โ€“ However, the spatial relationship
of clusters is also specified,
typically as a grid
โ€“ Points are considered one by one
โ€“ Each point is assigned to the
closest centroid, and this centroid is updated
โ€“ Other centroids are updated based
on their spatial proximity to the closest centroid
Kohonen, Teuvo. Self-Organizing Maps. Springer Series in Information Sciences, vol. 30. Springer, 1995.
SOM: Self-Organizing Maps
๏ฌ Updates are weighted by distance
โ€“ Centroids farther away are affected less
๏ฌ The impact of the updates decreases over time
โ€“ At some point the centroids will not change much
๏ฌ SOM can be viewed as a type of dimensionality reduction
โ€“ If a 2D (3D) grid is used, the results can be easily visualized, and it can facilitate the
interpretation of clusters
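A toy NumPy sketch of the update scheme just described (a simplified SOM with a Gaussian neighborhood and linearly decaying rates; the grid size and rate schedule are arbitrary illustrative choices, not values from the text):

```python
import numpy as np

def train_som(X, grid_shape=(4, 4), n_iter=1000, seed=0):
    """Toy SOM: X is (m, d); returns centroids arranged on a 2-D grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Grid coordinates of each centroid and random initial centroid positions.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    centroids = X[rng.choice(len(X), rows * cols)].astype(float)

    for t in range(n_iter):
        frac = t / n_iter
        lr = 0.5 * (1 - frac)                              # learning rate decays over time
        radius = max(rows, cols) / 2 * (1 - frac) + 1e-3   # neighborhood shrinks over time

        x = X[rng.integers(len(X))]                        # points are considered one by one
        bmu = np.argmin(((centroids - x) ** 2).sum(axis=1))  # closest centroid (best matching unit)

        # Update every centroid, weighted by its grid distance to the winning centroid.
        grid_d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        influence = np.exp(-grid_d2 / (2 * radius ** 2))
        centroids += lr * influence[:, None] * (x - centroids)

    return centroids.reshape(rows, cols, -1)
```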
SOM Clusters of LA Times Document Data
Another SOM Example: 2D Points
Issues with SOM
๏ฌ High computational complexity
๏ฌ No guarantee of convergence
๏ฌ Choice of grid and other parameters is somewhat
arbitrary
๏ฌ Lack of a specific objective function
Grid-based Clustering
๏ฌ A type of density-based clustering
Subspace Clustering
๏ฌ Until now, we found clusters by considering all of the
attributes
๏ฌ Some clusters may involve only a subset of attributes, i.e.,
subspaces of the data
Clusters in subspaces
Clusters in subspaces
Clusters in subspaces
Clique โ€“ A Subspace Clustering Algorithm
๏ฌ A grid-based clustering algorithm that
methodically finds subspace clusters
โ€“ Partitions the data space into rectangular units of
equal volume in all possible subspaces
โ€“ Measures the density of each unit by the fraction of
points it contains
โ€“ A unit is dense if the fraction of overall points it
contains is above a user-specified threshold, τ
โ€“ A cluster is a set of contiguous (touching) dense units
Clique Algorithm
๏ฌ It is impractical to check volume units in all
possible subspaces, since there is an
exponential number of such units
๏ฌ Monotone property of density-based clusters:
โ€“ If a set of points cannot form a density-based cluster in k dimensions, then the same set of points cannot form a density-based cluster in any superset of those dimensions
๏ฌ Very similar to Apriori algorithm
๏ฌ Can find overlapping clusters
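A toy sketch of this bottom-up search for dense units (grid counting in 1-D subspaces, then Apriori-style joining into 2-D candidates; the number of bins and the density threshold are illustrative choices, not values from the text):

```python
import numpy as np
from itertools import combinations

def grid_index(X, n_bins=10):
    """Map each coordinate to a bin index in [0, n_bins-1], per dimension."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    idx = np.floor((X - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    return np.minimum(idx, n_bins - 1)

def clique_dense_units(X, n_bins=10, tau=0.05):
    """Dense units in all 1-D subspaces, then Apriori-style join to 2-D units."""
    m, d = X.shape
    idx = grid_index(X, n_bins)

    # Level 1: dense units in each one-dimensional subspace.
    dense1 = {
        dim: {b for b in range(n_bins) if np.mean(idx[:, dim] == b) > tau}
        for dim in range(d)
    }

    # Level 2: candidate 2-D units come only from dense 1-D units
    # (the monotone property rules out everything else); keep the dense ones.
    dense2 = {}
    for d1, d2 in combinations(range(d), 2):
        dense2[(d1, d2)] = {
            (b1, b2)
            for b1 in dense1[d1] for b2 in dense1[d2]
            if np.mean((idx[:, d1] == b1) & (idx[:, d2] == b2)) > tau
        }
    return dense1, dense2
```

Contiguous dense units found this way in the same subspace would then be merged into clusters, and higher-dimensional candidates are generated analogously from dense (k−1)-dimensional units.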
Clique Algorithm
Limitations of Clique
๏ฌ Time complexity is exponential in number of
dimensions
โ€“ Especially if โ€œtoo manyโ€ dense units are generated at
lower stages
๏ฌ May fail if clusters are of widely differing
densities, since the threshold is fixed
โ€“ Determining appropriate threshold and unit interval
length can be challenging
Denclue (DENsity CLUstering)
๏ฌ Based on the notion of kernel-density estimation
โ€“ Contribution of each point to the density is given by
an influence or kernel function
โ€“ Overall density is the sum of the contributions of all
points
(Figure: formula and plot of the Gaussian kernel.)
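For reference, a small NumPy sketch of a kernel density estimate built from the Gaussian kernel K(x, y) = exp(−‖x − y‖² / (2σ²)) (the bandwidth σ and the evaluation grid are arbitrary choices for illustration):

```python
import numpy as np

def gaussian_kernel_density(points, X, sigma=1.0):
    """Density at each query point: sum of Gaussian kernel contributions from all data in X."""
    # Squared distances between every query point and every data point.
    d2 = ((points[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)

# Density surface over a grid for a small 2-D data set.
X = np.random.randn(200, 2)
xs, ys = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = gaussian_kernel_density(grid, X, sigma=0.5).reshape(xs.shape)
```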
Example of Density from Gaussian Kernel
DENCLUE Algorithm
๏ฌ Find the density function
๏ฌ Identify local maxima (density attractors)
๏ฌ Assign each point to the density attractor
โ€“ Follow direction of maximum increase in density
๏ฌ Define clusters as groups consisting of the points associated with a particular density attractor
๏ฌ Discard clusters whose density attractor has a density less than a user-specified minimum, ξ
๏ฌ Combine clusters that are connected by a path of points whose density is above ξ
Graph-Based Clustering: General Concepts
๏ฌ Graph-Based clustering needs a proximity graph
โ€“ Each edge between two nodes has a weight which is the
proximity between the two points
๏ฌ Many hierarchical clustering algorithms can be viewed in
graph terms
โ€“ MIN (single link) merges the pair of subgraphs connected by the lowest-weight edge
โ€“ Group Average merges subgraph pairs that have high average
connectivity
๏ฌ In the simplest case, clusters are connected
components in the graph.
Graph-Based Clustering: Chameleon
๏ฌ Based on several key ideas
โ€“ Sparsification of the proximity graph
โ€“ Partitioning the data into clusters that are
relatively pure subclusters of the โ€œtrueโ€ clusters
โ€“ Merging based on preserving characteristics of
clusters
Graph-Based Clustering: Sparsification
๏ฌ The amount of data that needs to be
processed is drastically reduced
โ€“ Sparsification can eliminate more than 99% of the
entries in a proximity matrix
โ€“ The amount of time required to cluster the data is
drastically reduced
โ€“ The size of the problems that can be handled is
increased
Graph-Based Clustering: Sparsification โ€ฆ
๏ฌ Clustering may work better
โ€“ Sparsification techniques keep the connections to the most
similar (nearest) neighbors of a point while breaking the
connections to less similar points.
โ€“ This reduces the impact of noise and outliers and sharpens
the distinction between clusters.
๏ฌ Sparsification facilitates the use of graph
partitioning algorithms (or algorithms based
on graph partitioning algorithms)
โ€“ Chameleon and Hypergraph-based Clustering
Limitations of Current Merging Schemes
๏ฌ Existing merging schemes in hierarchical
clustering algorithms are static in nature
โ€“ MIN:
๏ต Merge two clusters based on their closeness (or minimum
distance)
โ€“ GROUP-AVERAGE:
๏ต Merge two clusters based on their average connectivity
Limitations of Current Merging Schemes
Closeness schemes will merge (a) and (b), while average connectivity schemes will merge (c) and (d).
(Figure: four example pairs of clusters, labeled (a) through (d).)
Chameleon: Clustering Using Dynamic Modeling
๏ฌ Adapt to the characteristics of the data set to find the
natural clusters
๏ฌ Use a dynamic model to measure the similarity between
clusters
โ€“ Main properties are the relative closeness and relative inter-
connectivity of the cluster
โ€“ Two clusters are combined if the resulting cluster shares certain
properties with the constituent clusters
โ€“ The merging scheme preserves self-similarity
Relative Interconnectivity
Relative Closeness
Chameleon: Steps
๏ฌ Preprocessing Step:
Represent the data by a Graph
โ€“ Given a set of points, construct the k-nearest-neighbor (k-NN)
graph to capture the relationship between a point and its k
nearest neighbors
โ€“ Concept of neighborhood is captured dynamically (even if region
is sparse)
๏ฌ Phase 1: Use a multilevel graph partitioning
algorithm on the graph to find a large number of
clusters of well-connected vertices
โ€“ Each cluster should contain mostly points from one โ€œtrueโ€ cluster,
i.e., be a sub-cluster of a โ€œrealโ€ cluster
Chameleon: Steps โ€ฆ
๏ฌ Phase 2: Use Hierarchical Agglomerative
Clustering to merge sub-clusters
โ€“ Two clusters are combined if the resulting cluster shares certain
properties with the constituent clusters
โ€“ Two key properties used to model cluster similarity:
๏ต Relative Interconnectivity: Absolute interconnectivity of two
clusters normalized by the internal connectivity of the
clusters
๏ต Relative Closeness: Absolute closeness of two clusters
normalized by the internal closeness of the clusters
Experimental Results: CHAMELEON
Experimental Results: CURE (10 clusters)
Experimental Results: CURE (15 clusters)
Experimental Results: CHAMELEON
Experimental Results: CURE (9 clusters)
Experimental Results: CURE (15 clusters)
Experimental Results: CHAMELEON
Spectral Clustering
๏ฌ Spectral clustering is a graph-based clustering
approach
โ€“ Does a graph partitioning of the proximity graph of a data
set
โ€“ Breaks the graph into components, such that
๏ต The nodes in a component are strongly connected to other
nodes in the component
๏ต The nodes in a component are weakly connected to nodes in
other components
๏ต See simple example below (W is the proximity matrix)
Spectral Clustering
๏ฌ For the simple graph below, the proximity matrix
can be written as
โ€“ Because the graph consists of two connected
components finding clusters is easy.
โ€“ More generally, we need an automated approach
๏ต Must be able to handle graphs where the components are not
completely separate
๏ต Spectral graph partitioning provides such an approach
๏ต Based on eigenvalue decomposition of a slight modification
of the proximity matrix.
Clustering via Spectral Graph Partitioning
๏ฌ Uses an eigenvalue-based approach to do the graph partitioning
โ€“ Based on the Laplacian matrix (L) of a graph, which is derived from the proximity matrix (W)
๏ต W is also known as the weighted adjacency matrix
๏ฌ Define a diagonal matrix D:

D_ij = Σ_k W_ik  if i = j,  and 0 otherwise

๏ฌ The ith diagonal entry of D is the sum of the weights of the edges incident to the ith node of W
โ€“ See example below
Clustering via Spectral Graph Partitioning โ€ฆ
๏ฌ Define a matrix called the Laplacian of the graph, L
L = D − W

๏ฌ L has the following properties:
โ€“ It is symmetric
โ€“ vᵀLv ≥ 0, i.e., the matrix is positive semi-definite, so all eigenvalues are non-negative
โ€“ The smallest eigenvalue is 0
๏ฌ Can write L in terms of its eigenvalue decomposition as
โ€“ L = VΛVᵀ, where Λ is a diagonal matrix of the eigenvalues and V is the matrix of eigenvectors
A Slightly More Complicated Example
Spectral Graph Clustering Algorithm
๏ฌ Given the Laplacian of a graph, it is easy to
define a spectral graph clustering algorithm
๏ฌ We simply apply k-means to the matrix consisting of the first k eigenvectors of L (those corresponding to the k smallest eigenvalues)
๏ฌ Note that we cluster the rows of that matrix
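A compact NumPy/scikit-learn sketch of this procedure (unnormalized Laplacian; the Gaussian-kernel similarity and the parameter values are illustrative assumptions, not prescriptions from the text):

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    """Cluster the rows of X into k groups via the graph Laplacian."""
    # Proximity (weighted adjacency) matrix W from a Gaussian kernel.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)

    # Laplacian L = D - W, where D holds the row sums of W on its diagonal.
    D = np.diag(W.sum(axis=1))
    L = D - W

    # Eigenvectors for the k smallest eigenvalues (eigh returns them in ascending order).
    eigvals, eigvecs = np.linalg.eigh(L)
    V = eigvecs[:, :k]

    # Cluster the rows of the eigenvector matrix with k-means.
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(V)

# Example: two concentric rings, which k-means on the raw coordinates cannot separate.
t = np.linspace(0, 2 * np.pi, 200)
X = np.vstack([np.c_[np.cos(t), np.sin(t)], 3 * np.c_[np.cos(t), np.sin(t)]])
labels = spectral_clustering(X, k=2, sigma=0.3)
```

This is the kind of behavior shown on the 2-D ring example on the next slide: the eigenvector rows separate the rings even though k-means alone does not.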
Application of K-means and Spectral Clustering to a 2-D Ring
Strengths and Limitations
๏ฌ Can detect clusters of different shape and sizes
๏ฌ Sensitive to how graph is created and sparsified
๏ฌ Sensitive to outliers
๏ฌ Time complexity depends on the sparsity of the
data matrix
โ€“ Improved by sparsification
Graph-Based Clustering: SNN Approach
Shared Nearest Neighbor (SNN) graph: the weight of an edge is the number of shared nearest neighbors between vertices, given that the vertices are connected.
(Figure: two connected points i and j that share four nearest neighbors, so the SNN edge between them has weight 4.)
If two points are similar to many of the same points, then they are
likely similar to one another, even if a direct measurement of
similarity does not indicate this.
Creating the SNN Graph
(Figures: a sparse graph whose link weights are the similarities between neighboring points, and the shared nearest neighbor graph derived from it, whose link weights are the numbers of shared nearest neighbors.)
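A brute-force NumPy sketch of this construction (fine for illustration, too slow for large data; the mutual-neighbor condition is used here as the notion of "connected"):

```python
import numpy as np

def snn_graph(X, k=5):
    """Return the k-NN sets and the SNN weight matrix for the rows of X."""
    m = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)

    # k nearest neighbors of each point (the sparsified similarity graph).
    knn = [set(np.argsort(d2[i])[:k]) for i in range(m)]

    # SNN weight: number of shared neighbors, counted only for points that
    # appear in each other's neighbor lists (i.e., are connected in the sparse graph).
    snn = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(i + 1, m):
            if j in knn[i] and i in knn[j]:
                snn[i, j] = snn[j, i] = len(knn[i] & knn[j])
    return knn, snn
```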
Jarvis-Patrick Clustering
๏ฌ First, the k-nearest neighbors of all points are found
โ€“ In graph terms this can be regarded as breaking all but the k
strongest links from a point to other points in the proximity graph
๏ฌ A pair of points is put in the same cluster if
โ€“ the two points share more than T neighbors, and
โ€“ the two points are in each other's k-nearest-neighbor lists
๏ฌ For instance, we might choose a nearest neighbor list of
size 20 and put points in the same cluster if they share
more than 10 near neighbors
๏ฌ Jarvis-Patrick clustering is too brittle
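Given an SNN weight matrix such as the one from the previous sketch, Jarvis-Patrick clustering reduces to thresholding the SNN graph and taking connected components (a minimal SciPy sketch; the threshold value is illustrative):

```python
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def jarvis_patrick(snn, T):
    """Cluster labels from an SNN weight matrix: link pairs sharing more than T neighbors."""
    adjacency = csr_matrix(snn > T)          # keep only the strong SNN links
    n_clusters, labels = connected_components(adjacency, directed=False)
    return labels

# Example: 20-nearest-neighbor lists, merge points sharing more than 10 neighbors.
# labels = jarvis_patrick(snn, T=10)
```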
When Jarvis-Patrick Works Reasonably Well
(Figure: original points and the Jarvis-Patrick clustering obtained with 6 shared neighbors out of 20.)
When Jarvis-Patrick Does NOT Work Well
(Figure: the clustering with the smallest threshold, T, that does not merge clusters, compared with the clustering at a threshold of T − 1.)
SNN Density-Based Clustering
๏ฌ Combines:
โ€“ Graph based clustering (similarity definition based on
number of shared nearest neighbors)
โ€“ Density based clustering (DBScan-like approach)
๏ฌ SNN density measures whether a point is
surrounded by similar points (with respect to its
nearest neighbors)
SNN Clustering Algorithm
1. Compute the similarity matrix
This corresponds to a similarity graph with data points for nodes and
edges whose weights are the similarities between data points
2. Sparsify the similarity matrix by keeping only the k most similar
neighbors
This corresponds to only keeping the k strongest links of the similarity
graph
3. Construct the shared nearest neighbor graph from the sparsified
similarity matrix.
At this point, we could apply a similarity threshold and find the
connected components to obtain the clusters (Jarvis-Patrick
algorithm)
4. Find the SNN density of each point.
Using a user-specified parameter, Eps, find the number of points that have an SNN similarity of Eps or greater to each point. This is the SNN density of the point.
SNN Clustering Algorithm โ€ฆ
5. Find the core points
Using a user specified parameter, MinPts, find the core
points, i.e., all points that have an SNN density greater
than MinPts
6. Form clusters from the core points
If two core points are within a โ€œradiusโ€, Eps, of each
other they are placed in the same cluster
7. Discard all noise points
All non-core points that are not within a โ€œradiusโ€ of Eps of
a core point are discarded
8. Assign all non-noise, non-core points to clusters
This can be done by assigning such points to the
nearest core point
(Note that steps 4-8 are DBSCAN)
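Since steps 4-8 are essentially DBSCAN applied to SNN similarities, one way to sketch the whole procedure is to turn the SNN weights into distances and reuse scikit-learn's DBSCAN (a hedged sketch; the parameter values are placeholders and `snn` is the matrix from the earlier SNN-graph sketch):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def snn_dbscan(snn, k, eps_snn, min_pts):
    """SNN density-based clustering via DBSCAN on precomputed SNN distances."""
    # Convert SNN similarity (0..k shared neighbors) into a distance in 0..k.
    distance = k - snn.astype(float)
    np.fill_diagonal(distance, 0)

    # DBSCAN with eps = k - Eps keeps exactly the pairs whose SNN similarity >= Eps.
    model = DBSCAN(eps=k - eps_snn, min_samples=min_pts, metric="precomputed")
    return model.fit_predict(distance)   # label -1 marks discarded noise points

# Example (k = 20 nearest neighbors): labels = snn_dbscan(snn, k=20, eps_snn=7, min_pts=5)
```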
SNN Density
(Figure panels: a) all points, b) points with high SNN density, c) points with medium SNN density, d) points with low SNN density.)
SNN Clustering Can Handle Differing Densities
Original Points SNN Clustering
SNN Clustering Can Handle Other Difficult Situations
SNN Clusters in Sea-Surface Temperature (SST) Time-series Data
Steinbach, et al (KDD 2003)
(Figure: 107 SNN clusters for detrended monthly Z SST (1958-1998), plotted by longitude and latitude.)
SST Clusters that Correspond to El Nino Climate Indices
Steinbach, et al (KDD 2003)
SNN clusters of SST that are highly correlated
with El Nino indices, ~ 0.93 correlation.
El Nino Regions Defined by Earth Scientists:

Niño Region | Longitude Range | Latitude Range
1+2 (94)    | 90°W-80°W       | 10°S-0°
3 (67)      | 150°W-90°W      | 5°S-5°N
3.4 (78)    | 170°W-120°W     | 5°S-5°N
4 (75)      | 160°E-150°W     | 5°S-5°N

(Figure: El Nino-related SST clusters 75, 78, 67, and 94, plotted by longitude and latitude.)
SNN Clusters in Sea-Level-Pressure (SLP) Time-series Data
Steinbach, et al (KDD 2003)
(Figure: 26 SLP clusters found via shared nearest neighbor clustering (100 NN, 1982-1994), numbered 1-26 and plotted by longitude and latitude.)
(Figure: SNN density of the SLP time series data; SNN density of points on the globe.)
Pairs of SLP Clusters that Correspond to El-Nino SOI
(Figure: centroids of SLP clusters 15 and 20, near Darwin, Australia and Tahiti, 1982-1993.)
(Figure: centroid of cluster 20 minus centroid of cluster 15 plotted against the SOI; correlation = 0.78.)
Limitations of SNN Clustering
๏ฌ Complexity of SNN Clustering is high
โ€“ O(n × the time to find a point's neighbors within Eps)
โ€“ In worst case, this is O(n2)
โ€“ For lower dimensions, there are more efficient ways to find
the nearest neighbors
๏ต R* Tree
๏ต k-d Trees
๏ฌ Parameterization is not easy
Characteristics of Data, Clusters, and
Clustering Algorithms
๏ฌ A cluster analysis is affected by characteristics of
โ€“ Data
โ€“ Clusters
โ€“ Clustering algorithms
๏ฌ Looking at these characteristics gives us a number of dimensions that we can use to describe clustering algorithms and the results that they produce
Characteristics of Data
๏ฌ High dimensionality
โ€“ Dimensionality reduction
๏ฌ Types of attributes
โ€“ Binary, discrete, continuous, asymmetric
โ€“ Mixed attribute types (e.g., some are continuous, others nominal)
๏ฌ Differences in attribute scales
โ€“ Normalization techniques
๏ฌ Size of data set
๏ฌ Noise and Outliers
๏ฌ Properties of the data space
โ€“ Can a meaningful centroid or a meaningful notion of density be defined?
Characteristics of Clusters
๏ฌ Data distribution
โ€“ Parametric models
๏ฌ Shape
โ€“ Globular or arbitrary shape
๏ฌ Differing sizes
๏ฌ Differing densities
๏ฌ Level of separation among clusters
๏ฌ Relationship among clusters
๏ฌ Subspace clusters
Characteristics of Clustering Algorithms
๏ฌ Order dependence
๏ฌ Non-determinism
๏ฌ Scalability
๏ฌ Number of parameters
Which Clustering Algorithm?
๏ฌ Type of Clustering
โ€“ Taxonomy vs flat
๏ฌ Type of Cluster
โ€“ Prototype vs connected regions vs density-based
๏ฌ Characteristics of Clusters
โ€“ Subspace clusters, spatial inter-relationships
๏ฌ Characteristics of Data Sets and Attributes
๏ฌ Noise and Outliers
๏ฌ Number of Data Objects
๏ฌ Number of Attributes
๏ฌ Algorithmic Considerations
Comparison of MIN and EM-Clustering
๏ฌ We assume EM clustering using the Gaussian (normal) distribution.
๏ฌ MIN is hierarchical, EM clustering is partitional.
๏ฌ Both MIN and EM clustering are complete.
๏ฌ MIN has a graph-based (contiguity-based) notion of a cluster, while EM clustering has a prototype (or model-based) notion of a cluster.
๏ฌ MIN will not be able to distinguish poorly separated clusters, but EM can manage this in many situations.
๏ฌ MIN can find clusters of different shapes and sizes; EM clustering prefers globular clusters and can have trouble with clusters of different sizes.
๏ฌ MIN has trouble with clusters of different densities, while EM can often handle this.
๏ฌ Neither MIN nor EM clustering finds subspace clusters.
Comparison of MIN and EM-Clustering
๏ฌ MIN can handle outliers, but noise can join clusters; EM clustering can tolerate noise, but can be strongly affected by outliers.
๏ฌ EM can only be applied to data for which a centroid is meaningful; MIN only requires a meaningful definition of proximity.
๏ฌ EM will have trouble as dimensionality increases and the number of its parameters (the number of entries in the covariance matrix) increases as the square of the number of dimensions; MIN can work well with a suitable definition of proximity.
๏ฌ EM is designed for Euclidean data, although versions of EM clustering have been developed for other types of data. MIN is shielded from the data type by the fact that it uses a similarity matrix.
๏ฌ MIN makes no distribution assumptions; the version of EM we are considering assumes Gaussian distributions.
Comparison of MIN and EM-Clustering
๏ฌ EM has an O(n) time complexity; MIN is O(n² log(n)).
๏ฌ Because of random initialization, the clusters found by EM can vary from one run to another; MIN produces the same clusters unless there are ties in the similarity matrix.
๏ฌ Neither MIN nor EM automatically determines the number of clusters.
๏ฌ MIN does not have any user-specified parameters; EM has the number of clusters and possibly the weights of the clusters.
๏ฌ EM clustering can be viewed as an optimization problem; MIN uses a graph model of the data.
๏ฌ Neither EM nor MIN is order dependent.
Comparison of DBSCAN and K-means
๏ฌ Both are partitional.
๏ฌ K-means is complete; DBSCAN is not.
๏ฌ K-means has a prototype-based notion of a cluster; DBSCAN uses a density-based notion.
๏ฌ K-means can find clusters that are not well-separated; DBSCAN will merge clusters that touch.
๏ฌ DBSCAN handles clusters of different shapes and sizes; K-means prefers globular clusters.
Comparison of DBSCAN and K-means
๏ฌ DBSCAN can handle noise and outliers; K-means performs poorly in the presence of outliers.
๏ฌ K-means can only be applied to data for which a centroid is meaningful; DBSCAN requires a meaningful definition of density.
๏ฌ DBSCAN works poorly on high-dimensional data; K-means works well for some types of high-dimensional data.
๏ฌ Both techniques were designed for Euclidean data, but have been extended to other types of data.
๏ฌ DBSCAN makes no distribution assumptions; K-means is really assuming spherical Gaussian distributions.
Comparison of DBSCAN and K-means
๏ฌ K-means has an O(n) time complexity; DBSCAN is O(n²).
๏ฌ Because of random initialization, the clusters found by K-means can vary from one run to another; DBSCAN always produces the same clusters.
๏ฌ DBSCAN automatically determines the number of clusters; K-means does not.
๏ฌ K-means has only one parameter, DBSCAN has two.
๏ฌ K-means clustering can be viewed as an optimization problem and as a special case of EM clustering; DBSCAN is not based on a formal model.
More Related Content

Similar to Advanced_cluster_analysis.pptx

Mat189: Cluster Analysis with NBA Sports Data
Mat189: Cluster Analysis with NBA Sports DataMat189: Cluster Analysis with NBA Sports Data
Mat189: Cluster Analysis with NBA Sports DataKathleneNgo
ย 
Chapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.pptChapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.pptSubrata Kumer Paul
ย 
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...CSCJournals
ย 
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...Waqas Tariq
ย 
K means Clustering - algorithm to cluster n objects
K means Clustering - algorithm to cluster n objectsK means Clustering - algorithm to cluster n objects
K means Clustering - algorithm to cluster n objectsVoidVampire
ย 
Chapter 11 cluster advanced : web and text mining
Chapter 11 cluster advanced : web and text miningChapter 11 cluster advanced : web and text mining
Chapter 11 cluster advanced : web and text miningHouw Liong The
ย 
Chapter 11 cluster advanced, Han & Kamber
Chapter 11 cluster advanced, Han & KamberChapter 11 cluster advanced, Han & Kamber
Chapter 11 cluster advanced, Han & KamberHouw Liong The
ย 
Lecture_3_k-mean-clustering.ppt
Lecture_3_k-mean-clustering.pptLecture_3_k-mean-clustering.ppt
Lecture_3_k-mean-clustering.pptSyedNahin1
ย 
On Selection of Periodic Kernels Parameters in Time Series Prediction
On Selection of Periodic Kernels Parameters in Time Series Prediction On Selection of Periodic Kernels Parameters in Time Series Prediction
On Selection of Periodic Kernels Parameters in Time Series Prediction cscpconf
ย 
ON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTION
ON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTIONON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTION
ON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTIONcscpconf
ย 
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...IJDKP
ย 
Business analytics course in delhi
Business analytics course in delhiBusiness analytics course in delhi
Business analytics course in delhibhuvan8999
ย 
data science course in delhi
data science course in delhidata science course in delhi
data science course in delhidevipatnala1
ย 
business analytics course in delhi
business analytics course in delhibusiness analytics course in delhi
business analytics course in delhidevipatnala1
ย 
Data science certification
Data science certificationData science certification
Data science certificationprathyusha1234
ย 
Best data science training, best data science training institute in hyderabad.
 Best data science training, best data science training institute in hyderabad. Best data science training, best data science training institute in hyderabad.
Best data science training, best data science training institute in hyderabad.Data Analytics Courses in Pune
ย 
Best data science training, best data science training institute in hyderabad.
 Best data science training, best data science training institute in hyderabad. Best data science training, best data science training institute in hyderabad.
Best data science training, best data science training institute in hyderabad.hrhrenurenu
ย 

Similar to Advanced_cluster_analysis.pptx (20)

Mat189: Cluster Analysis with NBA Sports Data
Mat189: Cluster Analysis with NBA Sports DataMat189: Cluster Analysis with NBA Sports Data
Mat189: Cluster Analysis with NBA Sports Data
ย 
Chapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.pptChapter 11. Cluster Analysis Advanced Methods.ppt
Chapter 11. Cluster Analysis Advanced Methods.ppt
ย 
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
ย 
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
ย 
K means Clustering - algorithm to cluster n objects
K means Clustering - algorithm to cluster n objectsK means Clustering - algorithm to cluster n objects
K means Clustering - algorithm to cluster n objects
ย 
Chapter 11 cluster advanced : web and text mining
Chapter 11 cluster advanced : web and text miningChapter 11 cluster advanced : web and text mining
Chapter 11 cluster advanced : web and text mining
ย 
Master's Thesis Presentation
Master's Thesis PresentationMaster's Thesis Presentation
Master's Thesis Presentation
ย 
Chapter 11 cluster advanced, Han & Kamber
Chapter 11 cluster advanced, Han & KamberChapter 11 cluster advanced, Han & Kamber
Chapter 11 cluster advanced, Han & Kamber
ย 
Lecture_3_k-mean-clustering.ppt
Lecture_3_k-mean-clustering.pptLecture_3_k-mean-clustering.ppt
Lecture_3_k-mean-clustering.ppt
ย 
Clustering.pptx
Clustering.pptxClustering.pptx
Clustering.pptx
ย 
On Selection of Periodic Kernels Parameters in Time Series Prediction
On Selection of Periodic Kernels Parameters in Time Series Prediction On Selection of Periodic Kernels Parameters in Time Series Prediction
On Selection of Periodic Kernels Parameters in Time Series Prediction
ย 
ON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTION
ON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTIONON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTION
ON SELECTION OF PERIODIC KERNELS PARAMETERS IN TIME SERIES PREDICTION
ย 
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
A COMPREHENSIVE ANALYSIS OF QUANTUM CLUSTERING : FINDING ALL THE POTENTIAL MI...
ย 
11 clusadvanced
11 clusadvanced11 clusadvanced
11 clusadvanced
ย 
Business analytics course in delhi
Business analytics course in delhiBusiness analytics course in delhi
Business analytics course in delhi
ย 
data science course in delhi
data science course in delhidata science course in delhi
data science course in delhi
ย 
business analytics course in delhi
business analytics course in delhibusiness analytics course in delhi
business analytics course in delhi
ย 
Data science certification
Data science certificationData science certification
Data science certification
ย 
Best data science training, best data science training institute in hyderabad.
 Best data science training, best data science training institute in hyderabad. Best data science training, best data science training institute in hyderabad.
Best data science training, best data science training institute in hyderabad.
ย 
Best data science training, best data science training institute in hyderabad.
 Best data science training, best data science training institute in hyderabad. Best data science training, best data science training institute in hyderabad.
Best data science training, best data science training institute in hyderabad.
ย 

More from PerumalPitchandi

Lecture Notes on Recommender System Introduction
Lecture Notes on Recommender System IntroductionLecture Notes on Recommender System Introduction
Lecture Notes on Recommender System IntroductionPerumalPitchandi
ย 
22ADE002 โ€“ Business Analytics- Module 1.pptx
22ADE002 โ€“ Business Analytics- Module 1.pptx22ADE002 โ€“ Business Analytics- Module 1.pptx
22ADE002 โ€“ Business Analytics- Module 1.pptxPerumalPitchandi
ย 
ppt_ids-data science.pdf
ppt_ids-data science.pdfppt_ids-data science.pdf
ppt_ids-data science.pdfPerumalPitchandi
ย 
ANOVA Presentation.ppt
ANOVA Presentation.pptANOVA Presentation.ppt
ANOVA Presentation.pptPerumalPitchandi
ย 
Data Science Intro.pptx
Data Science Intro.pptxData Science Intro.pptx
Data Science Intro.pptxPerumalPitchandi
ย 
Descriptive_Statistics_PPT.ppt
Descriptive_Statistics_PPT.pptDescriptive_Statistics_PPT.ppt
Descriptive_Statistics_PPT.pptPerumalPitchandi
ย 
SW_Cost_Estimation.ppt
SW_Cost_Estimation.pptSW_Cost_Estimation.ppt
SW_Cost_Estimation.pptPerumalPitchandi
ย 
CostEstimation-1.ppt
CostEstimation-1.pptCostEstimation-1.ppt
CostEstimation-1.pptPerumalPitchandi
ย 
20IT204-COA-Lecture 18.ppt
20IT204-COA-Lecture 18.ppt20IT204-COA-Lecture 18.ppt
20IT204-COA-Lecture 18.pptPerumalPitchandi
ย 
20IT204-COA- Lecture 17.pptx
20IT204-COA- Lecture 17.pptx20IT204-COA- Lecture 17.pptx
20IT204-COA- Lecture 17.pptxPerumalPitchandi
ย 
Capability Maturity Model (CMM).pptx
Capability Maturity Model (CMM).pptxCapability Maturity Model (CMM).pptx
Capability Maturity Model (CMM).pptxPerumalPitchandi
ย 
Comparison_between_Waterfall_and_Agile_m (1).pptx
Comparison_between_Waterfall_and_Agile_m (1).pptxComparison_between_Waterfall_and_Agile_m (1).pptx
Comparison_between_Waterfall_and_Agile_m (1).pptxPerumalPitchandi
ย 
Introduction to Data Science.pptx
Introduction to Data Science.pptxIntroduction to Data Science.pptx
Introduction to Data Science.pptxPerumalPitchandi
ย 
FDS Module I 24.1.2022.ppt
FDS Module I 24.1.2022.pptFDS Module I 24.1.2022.ppt
FDS Module I 24.1.2022.pptPerumalPitchandi
ย 
FDS Module I 20.1.2022.ppt
FDS Module I 20.1.2022.pptFDS Module I 20.1.2022.ppt
FDS Module I 20.1.2022.pptPerumalPitchandi
ย 
AgileSoftwareDevelopment.ppt
AgileSoftwareDevelopment.pptAgileSoftwareDevelopment.ppt
AgileSoftwareDevelopment.pptPerumalPitchandi
ย 
Agile and its impact to Project Management 022218.pptx
Agile and its impact to Project Management 022218.pptxAgile and its impact to Project Management 022218.pptx
Agile and its impact to Project Management 022218.pptxPerumalPitchandi
ย 
Data_exploration.ppt
Data_exploration.pptData_exploration.ppt
Data_exploration.pptPerumalPitchandi
ย 
state-spaces29Sep06.ppt
state-spaces29Sep06.pptstate-spaces29Sep06.ppt
state-spaces29Sep06.pptPerumalPitchandi
ย 

More from PerumalPitchandi (20)

Lecture Notes on Recommender System Introduction
Lecture Notes on Recommender System IntroductionLecture Notes on Recommender System Introduction
Lecture Notes on Recommender System Introduction
ย 
22ADE002 โ€“ Business Analytics- Module 1.pptx
22ADE002 โ€“ Business Analytics- Module 1.pptx22ADE002 โ€“ Business Analytics- Module 1.pptx
22ADE002 โ€“ Business Analytics- Module 1.pptx
ย 
biv_mult.ppt
biv_mult.pptbiv_mult.ppt
biv_mult.ppt
ย 
ppt_ids-data science.pdf
ppt_ids-data science.pdfppt_ids-data science.pdf
ppt_ids-data science.pdf
ย 
ANOVA Presentation.ppt
ANOVA Presentation.pptANOVA Presentation.ppt
ANOVA Presentation.ppt
ย 
Data Science Intro.pptx
Data Science Intro.pptxData Science Intro.pptx
Data Science Intro.pptx
ย 
Descriptive_Statistics_PPT.ppt
Descriptive_Statistics_PPT.pptDescriptive_Statistics_PPT.ppt
Descriptive_Statistics_PPT.ppt
ย 
SW_Cost_Estimation.ppt
SW_Cost_Estimation.pptSW_Cost_Estimation.ppt
SW_Cost_Estimation.ppt
ย 
CostEstimation-1.ppt
CostEstimation-1.pptCostEstimation-1.ppt
CostEstimation-1.ppt
ย 
20IT204-COA-Lecture 18.ppt
20IT204-COA-Lecture 18.ppt20IT204-COA-Lecture 18.ppt
20IT204-COA-Lecture 18.ppt
ย 
20IT204-COA- Lecture 17.pptx
20IT204-COA- Lecture 17.pptx20IT204-COA- Lecture 17.pptx
20IT204-COA- Lecture 17.pptx
ย 
Capability Maturity Model (CMM).pptx
Capability Maturity Model (CMM).pptxCapability Maturity Model (CMM).pptx
Capability Maturity Model (CMM).pptx
ย 
Comparison_between_Waterfall_and_Agile_m (1).pptx
Comparison_between_Waterfall_and_Agile_m (1).pptxComparison_between_Waterfall_and_Agile_m (1).pptx
Comparison_between_Waterfall_and_Agile_m (1).pptx
ย 
Introduction to Data Science.pptx
Introduction to Data Science.pptxIntroduction to Data Science.pptx
Introduction to Data Science.pptx
ย 
FDS Module I 24.1.2022.ppt
FDS Module I 24.1.2022.pptFDS Module I 24.1.2022.ppt
FDS Module I 24.1.2022.ppt
ย 
FDS Module I 20.1.2022.ppt
FDS Module I 20.1.2022.pptFDS Module I 20.1.2022.ppt
FDS Module I 20.1.2022.ppt
ย 
AgileSoftwareDevelopment.ppt
AgileSoftwareDevelopment.pptAgileSoftwareDevelopment.ppt
AgileSoftwareDevelopment.ppt
ย 
Agile and its impact to Project Management 022218.pptx
Agile and its impact to Project Management 022218.pptxAgile and its impact to Project Management 022218.pptx
Agile and its impact to Project Management 022218.pptx
ย 
Data_exploration.ppt
Data_exploration.pptData_exploration.ppt
Data_exploration.ppt
ย 
state-spaces29Sep06.ppt
state-spaces29Sep06.pptstate-spaces29Sep06.ppt
state-spaces29Sep06.ppt
ย 

Recently uploaded

Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
ย 
NFPA 5000 2024 standard .
NFPA 5000 2024 standard                                  .NFPA 5000 2024 standard                                  .
NFPA 5000 2024 standard .DerechoLaboralIndivi
ย 
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLPVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELL
PVC VS. FIBERGLASS (FRP) GRAVITY SEWER - UNI BELLManishPatel169454
ย 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756dollysharma2066
ย 
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Ankleshwar 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
ย 
Top Rated Pune Call Girls Budhwar Peth โŸŸ 6297143586 โŸŸ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth โŸŸ 6297143586 โŸŸ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth โŸŸ 6297143586 โŸŸ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth โŸŸ 6297143586 โŸŸ Call Me For Genuine Se...Call Girls in Nagpur High Profile
ย 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
ย 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
ย 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
ย 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
ย 
Thermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptThermal Engineering -unit - III & IV.ppt
Thermal Engineering -unit - III & IV.pptDineshKumar4165
ย 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
ย 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
ย 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
ย 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
ย 
Intze Overhead Water Tank Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank  Design by Working Stress - IS Method.pdfIntze Overhead Water Tank  Design by Working Stress - IS Method.pdf
Intze Overhead Water Tank Design by Working Stress - IS Method.pdfSuman Jyoti
ย 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
ย 
Call Now โ‰ฝ 9953056974 โ‰ผ๐Ÿ” Call Girls In New Ashok Nagar โ‰ผ๐Ÿ” Delhi door step de...
Call Now โ‰ฝ 9953056974 โ‰ผ๐Ÿ” Call Girls In New Ashok Nagar  โ‰ผ๐Ÿ” Delhi door step de...Call Now โ‰ฝ 9953056974 โ‰ผ๐Ÿ” Call Girls In New Ashok Nagar  โ‰ผ๐Ÿ” Delhi door step de...
Call Now โ‰ฝ 9953056974 โ‰ผ๐Ÿ” Call Girls In New Ashok Nagar โ‰ผ๐Ÿ” Delhi door step de...9953056974 Low Rate Call Girls In Saket, Delhi NCR
ย 
data_management_and _data_science_cheat_sheet.pdf

Advanced_cluster_analysis.pptx

  • 9. 3/31/2021 9 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar An Example Application: Image Segmentation ๏ฌ Modified versions of fuzzy c-means have been used for image segmentation โ€“ Especially fMRI images (functional magnetic resonance images) (figure, from left to right: original images, fuzzy c-means, EM, BCFCM) ๏ฌ References โ€“ Gong, Maoguo, Yan Liang, Jiao Shi, Wenping Ma, and Jingjing Ma. "Fuzzy c-means clustering with local information and kernel metric for image segmentation." IEEE Transactions on Image Processing 22, no. 2 (2013): 573-584. โ€“ Ahmed, Mohamed N., Sameh M. Yamany, Nevin Mohamed, Aly A. Farag, and Thomas Moriarty. "A modified fuzzy c-means algorithm for bias field estimation and segmentation of MRI data." IEEE Transactions on Medical Imaging 21, no. 3 (2002): 193-199.
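To make the fuzzy c-means procedure concrete, here is a minimal NumPy sketch of the weight and centroid updates summarized in the earlier slides. It is an illustrative sketch, not the textbook's code: the function name and defaults are mine, Euclidean distance is assumed, and the centroid update raises the weights to the power p as in Bezdek's original formulation.

import numpy as np

def fuzzy_cmeans(X, k, p=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means sketch: X is an (m, d) array, k is the number of clusters."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    # Initialize the weights randomly so that each row sums to 1.
    W = rng.random((m, k))
    W /= W.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # Update centroids: c_j = sum_i w_ij^p x_i / sum_i w_ij^p
        Wp = W ** p
        C = (Wp.T @ X) / Wp.sum(axis=0)[:, None]
        # Squared distances between every point and every centroid.
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)  # guard against division by zero
        # Update weights: w_ij proportional to (1/d_ij^2)^(1/(p-1)), rows normalized to 1.
        inv = (1.0 / d2) ** (1.0 / (p - 1.0))
        W = inv / inv.sum(axis=1, keepdims=True)
    return C, W

# Example usage on two well-separated 2-D blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
centroids, weights = fuzzy_cmeans(X, k=2)

With p close to 1 the weights approach hard (0/1) assignments; larger values of p make the clustering fuzzier.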
  • 10. 3/31/2021 10 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clustering Using Mixture Models ๏ฌ Idea is to model the set of data points as arising from a mixture of distributions โ€“ Typically, normal (Gaussian) distributions are used โ€“ But other distributions have also been used profitably ๏ฌ Clusters are found by estimating the parameters of the statistical distributions using the Expectation-Maximization (EM) algorithm ๏ต k-means is a special case of this approach ๏ต Provides a compact representation of clusters ๏ต The probabilities with which a point belongs to each cluster provide a functionality similar to fuzzy clustering.
  • 11. 3/31/2021 11 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Probabilistic Clustering: Example ๏ฌ Informal example: consider modeling the points that generate the following histogram. ๏ฌ Looks like a combination of two normal (Gaussian) distributions ๏ฌ Suppose we can estimate the mean and standard deviation of each normal distribution. โ€“ This completely describes the two clusters โ€“ We can compute the probabilities with which each point belongs to each cluster โ€“ Can assign each point to the cluster (distribution) for which it is most probable.
  • 12. 3/31/2021 12 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Probabilistic Clustering: EM Algorithm Initialize the parameters Repeat For each point, compute its probability under each distribution Using these probabilities, update the parameters of each distribution Until there is no change ๏ฌ Very similar to K-means ๏ฌ Consists of assignment and update steps ๏ฌ Can use random initialization โ€“ Problem of local minima ๏ฌ For normal distributions, typically use K-means to initialize ๏ฌ If using normal distributions, can find elliptical as well as spherical shapes.
  • 13. 3/31/2021 13 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Probabilistic Clustering: Updating Centroids ๏ฌ Update formula for centroids, assuming an estimate of the statistical parameters (here x_i is a data point, C_j is a cluster, and c_j is a centroid): c_j = Σ_{i=1..m} p(C_j | x_i) x_i / Σ_{i=1..m} p(C_j | x_i) ๏ฌ Very similar to the fuzzy k-means formula โ€“ Weights are probabilities โ€“ Weights are not raised to a power โ€“ Probabilities are calculated using Bayes rule: p(C_j | x_i) = p(x_i | C_j) p(C_j) / Σ_{l=1..k} p(x_i | C_l) p(C_l) ๏ฌ Need to assign weights to each cluster โ€“ Weights may not be equal โ€“ Similar to prior probabilities โ€“ Can be estimated: p(C_j) = (1/m) Σ_{i=1..m} p(C_j | x_i)
  • 14. 3/31/2021 14 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar More Detailed EM Algorithm
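As a rough illustration of the assignment (E) and update (M) steps just outlined, the following sketch runs EM for a mixture of Gaussians with diagonal covariance matrices. The diagonal-covariance simplification, the initialization from random data points, and all names are assumptions of this sketch, not details from the slides.

import numpy as np

def gaussian_mixture_em(X, k, n_iter=50, seed=0):
    """Minimal EM sketch for a Gaussian mixture with diagonal covariances."""
    rng = np.random.default_rng(seed)
    m, d = X.shape
    means = X[rng.choice(m, k, replace=False)]        # initialize means from data points
    variances = np.ones((k, d)) * X.var(axis=0)       # per-dimension variances
    priors = np.full(k, 1.0 / k)                      # cluster weights p(C_j)
    for _ in range(n_iter):
        # E step: p(C_j | x_i) via Bayes rule (computed in log space, then normalized).
        log_p = np.empty((m, k))
        for j in range(k):
            diff2 = (X - means[j]) ** 2 / variances[j]
            log_p[:, j] = (np.log(priors[j])
                           - 0.5 * np.sum(np.log(2 * np.pi * variances[j]))
                           - 0.5 * diff2.sum(axis=1))
        log_p -= log_p.max(axis=1, keepdims=True)     # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)       # responsibilities (soft assignments)
        # M step: update priors, means, and variances from the responsibilities.
        Nj = resp.sum(axis=0)                         # effective cluster sizes
        priors = Nj / m                               # p(C_j) = (1/m) sum_i p(C_j | x_i)
        means = (resp.T @ X) / Nj[:, None]            # c_j = sum_i p(C_j | x_i) x_i / sum_i p(C_j | x_i)
        for j in range(k):
            variances[j] = (resp[:, j:j+1] * (X - means[j]) ** 2).sum(axis=0) / Nj[j] + 1e-6
    return priors, means, variances, resp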
  • 15. 3/31/2021 15 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Probabilistic Clustering Applied to Sample Data (figure: points colored by maximum probability, color scale from 0.5 to 0.95)
  • 16. 3/31/2021 16 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Probabilistic Clustering: Dense and Sparse Clusters (figure: 2-D scatter plot, axes x and y, showing a dense cluster and a sparse cluster, with one point marked "?")
  • 17. 3/31/2021 17 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Problems with EM ๏ฌ Convergence can be slow ๏ฌ Only guarantees finding local maxima ๏ฌ Makes some significant statistical assumptions ๏ฌ Number of parameters for the Gaussian distribution grows as O(d^2), where d is the number of dimensions โ€“ Parameters associated with the covariance matrix โ€“ K-means only estimates cluster means, which grow as O(d)
  • 18. 3/31/2021 18 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Alternatives to EM ๏ฌ Method of moments / Spectral methods โ€“ ICML 2014 workshop bibliography https://sites.google.com/site/momentsicml2014/bibliography ๏ฌ Markov chain Monte Carlo (MCMC) ๏ฌ Other approaches
  • 19. 3/31/2021 19 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SOM: Self-Organizing Maps ๏ฌ Self-organizing maps (SOM) โ€“ Centroid-based clustering scheme โ€“ Like K-means, a fixed number of clusters is specified โ€“ However, the spatial relationship of clusters is also specified, typically as a grid โ€“ Points are considered one by one โ€“ Each point is assigned to the closest centroid, and this centroid is updated โ€“ Other centroids are updated based on their spatial proximity to the closest centroid Kohonen, Teuvo. Self-Organizing Maps. Springer Series in Information Sciences, vol. 30. Springer, 1995.
  • 20. 3/31/2021 20 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SOM: Self-Organizing Maps ๏ฌ Updates are weighted by distance โ€“ Centroids farther away are affected less ๏ฌ The impact of the updates decreases over time โ€“ At some point the centroids will not change much ๏ฌ SOM can be viewed as a type of dimensionality reduction โ€“ If a 2D (3D) grid is used, the results can be easily visualized, and this can facilitate the interpretation of clusters
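A minimal sketch of the online SOM training loop just described, assuming a rectangular grid of centroids, a Gaussian neighborhood function, and linearly decaying learning rate and neighborhood radius; all parameter values and names are illustrative, not from the slides.

import numpy as np

def train_som(X, grid_shape=(5, 5), n_steps=5000, lr0=0.5, radius0=2.0, seed=0):
    """Minimal self-organizing map sketch with a 2-D grid of centroids."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    n_units = rows * cols
    # Grid coordinates of each unit, used to measure spatial proximity between centroids.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
    centroids = X[rng.choice(len(X), n_units, replace=False)].astype(float)
    for t in range(n_steps):
        x = X[rng.integers(len(X))]                  # points are considered one by one
        frac = t / n_steps
        lr = lr0 * (1.0 - frac)                      # learning rate decays over time
        radius = radius0 * (1.0 - frac) + 1e-3       # neighborhood shrinks over time
        best = np.argmin(((centroids - x) ** 2).sum(axis=1))   # closest centroid (winner)
        # Neighborhood weights: centroids far from the winner on the grid are affected less.
        grid_dist2 = ((coords - coords[best]) ** 2).sum(axis=1)
        h = np.exp(-grid_dist2 / (2.0 * radius ** 2))
        centroids += lr * h[:, None] * (x - centroids)
    return centroids.reshape(rows, cols, -1)

After training, each point can be mapped to its closest centroid, and because the centroids live on a 2-D grid the result is easy to visualize.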
  • 21. 3/31/2021 21 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SOM Clusters of LA Times Document Data
  • 22. 3/31/2021 22 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Another SOM Example: 2D Points
  • 23. 3/31/2021 23 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Issues with SOM ๏ฌ High computational complexity ๏ฌ No guarantee of convergence ๏ฌ Choice of grid and other parameters is somewhat arbitrary ๏ฌ Lack of a specific objective function
  • 24. 3/31/2021 24 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Grid-based Clustering ๏ฌ A type of density-based clustering
  • 25. 3/31/2021 25 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Subspace Clustering ๏ฌ Until now, we found clusters by considering all of the attributes ๏ฌ Some clusters may involve only a subset of attributes, i.e., subspaces of the data
  • 26. 3/31/2021 26 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clusters in subspaces
  • 27. 3/31/2021 27 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clusters in subspaces
  • 28. 3/31/2021 28 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clusters in subspaces
  • 29. 3/31/2021 29 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clique โ€“ A Subspace Clustering Algorithm ๏ฌ A grid-based clustering algorithm that methodically finds subspace clusters โ€“ Partitions the data space into rectangular units of equal volume in all possible subspaces โ€“ Measures the density of each unit by the fraction of points it contains โ€“ A unit is dense if the fraction of overall points it contains is above a user-specified threshold, τ โ€“ A cluster is a set of contiguous (touching) dense units
  • 30. 3/31/2021 30 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clique Algorithm ๏ฌ It is impractical to check volume units in all possible subspaces, since there is an exponential number of such units ๏ฌ Monotone property of density-based clusters: โ€“ If a set of points cannot form a density-based cluster in k dimensions, then the same set of points cannot form a density-based cluster in any superset of those k dimensions ๏ฌ Very similar to the Apriori algorithm ๏ฌ Can find overlapping clusters
  • 31. 3/31/2021 31 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clique Algorithm
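As a rough sketch of the bottom-up search (restricted here to going from one- to two-dimensional subspaces), the code below bins each dimension into equal-width intervals, keeps the 1-D units whose fraction of points exceeds the threshold, and, relying on the monotone property, forms candidate 2-D units only from dense 1-D units. The bin count and threshold are illustrative, and this is not the full CLIQUE algorithm: it omits forming clusters from contiguous dense units and the minimal-description step.

import numpy as np
from itertools import combinations

def dense_units_2d(X, n_bins=10, tau=0.05):
    """Sketch of CLIQUE's bottom-up step: dense 1-D units -> candidate dense 2-D units."""
    m, d = X.shape
    # Discretize every dimension into equal-width intervals (grid cells).
    bins = np.floor((X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    # Level 1: dense units in each single dimension.
    dense1 = {}
    for dim in range(d):
        counts = np.bincount(bins[:, dim], minlength=n_bins)
        dense1[dim] = {b for b in range(n_bins) if counts[b] / m >= tau}
    # Level 2: by monotonicity, a dense 2-D unit must project onto dense 1-D units.
    dense2 = {}
    for d1, d2 in combinations(range(d), 2):
        for b1 in dense1[d1]:
            for b2 in dense1[d2]:
                mask = (bins[:, d1] == b1) & (bins[:, d2] == b2)
                if mask.sum() / m >= tau:
                    dense2[(d1, d2, b1, b2)] = int(mask.sum())
    return dense1, dense2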
  • 32. 3/31/2021 32 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Limitations of Clique ๏ฌ Time complexity is exponential in number of dimensions โ€“ Especially if โ€œtoo manyโ€ dense units are generated at lower stages ๏ฌ May fail if clusters are of widely differing densities, since the threshold is fixed โ€“ Determining appropriate threshold and unit interval length can be challenging
  • 33. 3/31/2021 33 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Denclue (DENsity CLUstering) ๏ฌ Based on the notion of kernel-density estimation โ€“ Contribution of each point to the density is given by an influence or kernel function โ€“ Overall density is the sum of the contributions of all points (figure: formula and plot of the Gaussian kernel, K(x, y) = exp(-dist(x, y)^2 / (2σ^2)))
  • 34. 3/31/2021 34 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Example of Density from Gaussian Kernel
  • 35. 3/31/2021 35 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar DENCLUE Algorithm ๏ฌ Find the density function ๏ฌ Identify local maxima (density attractors) ๏ฌ Assign each point to a density attractor โ€“ Follow direction of maximum increase in density ๏ฌ Define clusters as groups consisting of points associated with the same density attractor ๏ฌ Discard clusters whose density attractor has a density less than a user-specified minimum, ξ ๏ฌ Combine clusters that are connected by a path of points whose density is above ξ
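The kernel-density part of this procedure can be sketched as follows: the overall density at a point is the sum of Gaussian kernel contributions from all data points, and each point is moved uphill toward its density attractor with a simple gradient-ascent step. DENCLUE itself uses a more refined hill-climbing rule; the bandwidth, step size, and names here are illustrative assumptions.

import numpy as np

def kernel_density(y, X, sigma=1.0):
    """Overall density at y: sum of Gaussian kernel contributions from all points in X."""
    d2 = ((X - y) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).sum()

def density_attractor(x, X, sigma=1.0, step=0.1, n_steps=200):
    """Follow the direction of maximum density increase (gradient ascent) starting from x."""
    y = x.astype(float).copy()
    for _ in range(n_steps):
        d2 = ((X - y) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        grad = (w[:, None] * (X - y)).sum(axis=0) / sigma ** 2   # gradient of the density sum
        if np.linalg.norm(grad) < 1e-6:
            break                                                # reached a local maximum
        y += step * grad / (np.linalg.norm(grad) + 1e-12)        # normalized uphill step
    return y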
  • 36. 3/31/2021 36 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Graph-Based Clustering: General Concepts ๏ฌ Graph-based clustering needs a proximity graph โ€“ Each edge between two nodes has a weight, which is the proximity between the two points ๏ฌ Many hierarchical clustering algorithms can be viewed in graph terms โ€“ MIN (single link) merges the pair of subgraphs connected by the lowest-weight edge โ€“ Group Average merges the pair of subgraphs that have the highest average connectivity ๏ฌ In the simplest case, clusters are connected components in the graph.
  • 37. 3/31/2021 37 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Graph-Based Clustering: Chameleon ๏ฌ Based on several key ideas โ€“ Sparsification of the proximity graph โ€“ Partitioning the data into clusters that are relatively pure subclusters of the โ€œtrueโ€ clusters โ€“ Merging based on preserving characteristics of clusters
  • 38. 3/31/2021 38 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Graph-Based Clustering: Sparsification ๏ฌ The amount of data that needs to be processed is drastically reduced โ€“ Sparsification can eliminate more than 99% of the entries in a proximity matrix โ€“ The amount of time required to cluster the data is drastically reduced โ€“ The size of the problems that can be handled is increased
  • 39. 3/31/2021 39 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Graph-Based Clustering: Sparsification โ€ฆ ๏ฌ Clustering may work better โ€“ Sparsification techniques keep the connections to the most similar (nearest) neighbors of a point while breaking the connections to less similar points. โ€“ This reduces the impact of noise and outliers and sharpens the distinction between clusters. ๏ฌ Sparsification facilitates the use of graph partitioning algorithms (or algorithms based on graph partitioning algorithms) โ€“ Chameleon and Hypergraph-based Clustering
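A tiny sketch of this kind of sparsification: keep, for each point, only the links to its k most similar neighbors and zero out the rest of the proximity matrix. Symmetrizing by keeping an edge if either endpoint keeps it is one common choice and is an assumption here, not something the slides specify.

import numpy as np

def sparsify_knn(S, k):
    """Keep only the k largest entries in each row of a similarity matrix S (excluding self)."""
    S = S.astype(float).copy()
    np.fill_diagonal(S, -np.inf)                  # ignore self-similarity
    keep = np.argsort(S, axis=1)[:, -k:]          # indices of the k most similar neighbors per row
    mask = np.zeros_like(S, dtype=bool)
    rows = np.arange(S.shape[0])[:, None]
    mask[rows, keep] = True
    sparse = np.where(mask | mask.T, S, 0.0)      # symmetrize: keep a link if either end keeps it
    np.fill_diagonal(sparse, 0.0)
    return sparse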
  • 40. 3/31/2021 40 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Limitations of Current Merging Schemes ๏ฌ Existing merging schemes in hierarchical clustering algorithms are static in nature โ€“ MIN: ๏ต Merge two clusters based on their closeness (or minimum distance) โ€“ GROUP-AVERAGE: ๏ต Merge two clusters based on their average connectivity
  • 41. 3/31/2021 41 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Limitations of Current Merging Schemes (figure: four example point sets, panels (a)-(d)) Closeness schemes will merge (a) and (b); average connectivity schemes will merge (c) and (d)
  • 42. 3/31/2021 42 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Chameleon: Clustering Using Dynamic Modeling ๏ฌ Adapt to the characteristics of the data set to find the natural clusters ๏ฌ Use a dynamic model to measure the similarity between clusters โ€“ Main properties are the relative closeness and relative inter- connectivity of the cluster โ€“ Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters โ€“ The merging scheme preserves self-similarity
  • 43. 3/31/2021 43 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Relative Interconnectivity
  • 44. 3/31/2021 44 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Relative Closeness
  • 45. 3/31/2021 45 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Chameleon: Steps ๏ฌ Preprocessing Step: Represent the data by a Graph โ€“ Given a set of points, construct the k-nearest-neighbor (k-NN) graph to capture the relationship between a point and its k nearest neighbors โ€“ Concept of neighborhood is captured dynamically (even if region is sparse) ๏ฌ Phase 1: Use a multilevel graph partitioning algorithm on the graph to find a large number of clusters of well-connected vertices โ€“ Each cluster should contain mostly points from one โ€œtrueโ€ cluster, i.e., be a sub-cluster of a โ€œrealโ€ cluster
  • 46. 3/31/2021 46 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Chameleon: Steps โ€ฆ ๏ฌ Phase 2: Use Hierarchical Agglomerative Clustering to merge sub-clusters โ€“ Two clusters are combined if the resulting cluster shares certain properties with the constituent clusters โ€“ Two key properties used to model cluster similarity: ๏ต Relative Interconnectivity: Absolute interconnectivity of two clusters normalized by the internal connectivity of the clusters ๏ต Relative Closeness: Absolute closeness of two clusters normalized by the internal closeness of the clusters
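For reference, these two quantities are usually written as follows (my restatement of the definitions from the Chameleon paper by Karypis, Han, and Kumar, 1999, not text from these slides), where EC(Ci, Cj) is the sum of the weights of the edges cut when separating the two clusters, EC(Ci) is the weight of the min-cut bisection of cluster Ci, and S̄_EC denotes the average weight of the corresponding edges:

\[
RI(C_i, C_j) = \frac{|EC(C_i, C_j)|}{\tfrac{1}{2}\bigl(|EC(C_i)| + |EC(C_j)|\bigr)}
\qquad
RC(C_i, C_j) = \frac{\bar{S}_{EC}(C_i, C_j)}{\dfrac{|C_i|}{|C_i| + |C_j|}\,\bar{S}_{EC}(C_i) + \dfrac{|C_j|}{|C_i| + |C_j|}\,\bar{S}_{EC}(C_j)}
\]

Merging then favors pairs of clusters for which both RI and RC are high, so that the merged cluster looks similar to its constituent clusters.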
  • 47. 3/31/2021 47 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CHAMELEON
  • 48. 3/31/2021 48 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CURE (10 clusters)
  • 49. 3/31/2021 49 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CURE (15 clusters)
  • 50. 3/31/2021 50 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CHAMELEON
  • 51. 3/31/2021 51 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CURE (9 clusters)
  • 52. 3/31/2021 52 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CURE (15 clusters)
  • 53. 3/31/2021 53 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Experimental Results: CHAMELEON
  • 54. 3/31/2021 54 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Spectral Clustering ๏ฌ Spectral clustering is a graph-based clustering approach โ€“ Does a graph partitioning of the proximity graph of a data set โ€“ Breaks the graph into components, such that ๏ต The nodes in a component are strongly connected to other nodes in the component ๏ต The nodes in a component are weakly connected to nodes in other components ๏ต See simple example below (W is the proximity matrix)
  • 55. 3/31/2021 55 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Spectral Clustering ๏ฌ For the simple graph below, the proximity matrix can be written as โ€“ Because the graph consists of two connected components finding clusters is easy. โ€“ More generally, we need an automated approach ๏ต Must be able to handle graphs where the components are not completely separate ๏ต Spectral graph partitioning provides such an approach ๏ต Based on eigenvalue decomposition of a slight modification of the proximity matrix.
  • 56. 3/31/2021 56 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clustering via Spectral Graph Partitioning ๏ฌ Uses an eigenvalue-based approach to do the graph partitioning โ€“ Based on the Laplacian matrix (L) of a graph, which is derived from the proximity matrix (W) ๏ต W is also known as the weighted adjacency matrix ๏ฌ Define a diagonal matrix D: D_ij = Σ_k W_ik if i = j, and 0 otherwise ๏ฌ The kth diagonal entry of D is the sum of the edge weights of the kth node of W โ€“ See example below
  • 57. 3/31/2021 57 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Clustering via Spectral Graph Partitioning โ€ฆ ๏ฌ Define a matrix called the Laplacian of the graph, L = D - W ๏ฌ L has the following properties: โ€“ It is symmetric โ€“ vᵀLv ≥ 0 for every vector v, i.e., the matrix is positive semi-definite and all eigenvalues are non-negative โ€“ The smallest eigenvalue is 0 ๏ฌ Can write L in terms of its eigenvalue decomposition as โ€“ L = VΛVᵀ, where Λ is a diagonal matrix of the eigenvalues and V is the matrix of eigenvectors
  • 58. 3/31/2021 58 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar A Slightly More Complicated Example
  • 59. 3/31/2021 59 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Spectral Graph Clustering Algorithm ๏ฌ Given the Laplacian of a graph, it is easy to define a spectral graph clustering algorithm ๏ฌ We simply apply k-means to the matrix consisting of the first k eigenvectors of L (those associated with the k smallest eigenvalues) ๏ฌ Note that we cluster the rows of that matrix
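A minimal sketch of that procedure, assuming the weighted adjacency (proximity) matrix W is already available as a dense NumPy array and using scikit-learn's KMeans for the final step; the unnormalized Laplacian is used here, although normalized variants are also common.

import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(W, k, seed=0):
    """Sketch of unnormalized spectral clustering on a weighted adjacency matrix W."""
    D = np.diag(W.sum(axis=1))            # degree matrix: row sums of W on the diagonal
    L = D - W                             # unnormalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigh returns eigenvalues in ascending order
    U = eigvecs[:, :k]                    # eigenvectors for the k smallest eigenvalues
    # Cluster the rows of U: row i is the spectral embedding of node i.
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)
    return labels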
  • 60. 3/31/2021 60 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Application of K-means and Spectral Clustering to a 2-D Ring
  • 61. 3/31/2021 61 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Strengths and Limitations ๏ฌ Can detect clusters of different shapes and sizes ๏ฌ Sensitive to how the graph is created and sparsified ๏ฌ Sensitive to outliers ๏ฌ Time complexity depends on the sparsity of the data matrix โ€“ Improved by sparsification
  • 62. 3/31/2021 62 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Graph-Based Clustering: SNN Approach Shared Nearest Neighbor (SNN) graph: the weight of an edge is the number of shared nearest neighbors between vertices, given that the vertices are connected (figure: vertices i and j connected by an edge with SNN weight 4)
  • 63. 3/31/2021 63 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Graph-Based Clustering: SNN Approach Shared Nearest Neighbor (SNN) graph: the weight of an edge is the number of shared neighbors between vertices, given that the vertices are connected If two points are similar to many of the same points, then they are likely similar to one another, even if a direct measurement of similarity does not indicate this.
  • 64. 3/31/2021 64 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Creating the SNN Graph (figure) Sparse graph: link weights are similarities between neighboring points. Shared nearest neighbor graph: link weights are the number of shared nearest neighbors.
  • 65. 3/31/2021 65 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Jarvis-Patrick Clustering ๏ฌ First, the k-nearest neighbors of all points are found โ€“ In graph terms this can be regarded as breaking all but the k strongest links from a point to other points in the proximity graph ๏ฌ A pair of points is put in the same cluster if โ€“ the two points share more than T neighbors and โ€“ the two points are in each other's k-nearest-neighbor lists ๏ฌ For instance, we might choose a nearest-neighbor list of size 20 and put points in the same cluster if they share more than 10 near neighbors ๏ฌ Jarvis-Patrick clustering is too brittle
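A small, brute-force sketch of Jarvis-Patrick clustering as just described: build each point's k-nearest-neighbor list, connect two points if each appears in the other's list and they share more than T neighbors, and report connected components as clusters. The quadratic distance computation and all parameter defaults are illustrative assumptions.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def jarvis_patrick(X, k=20, T=10):
    """Sketch of Jarvis-Patrick clustering (clusters = connected components of the SNN graph)."""
    m = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)                 # a point is not its own neighbor
    knn = np.argsort(d2, axis=1)[:, :k]          # k-nearest-neighbor lists
    neighbor_sets = [set(row) for row in knn]
    rows, cols = [], []
    for i in range(m):
        for j in knn[i]:
            # Require mutual k-NN membership and more than T shared neighbors.
            if i in neighbor_sets[j] and len(neighbor_sets[i] & neighbor_sets[j]) > T:
                rows.append(i)
                cols.append(j)
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(m, m))
    n_clusters, labels = connected_components(adj, directed=False)
    return labels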
  • 66. 3/31/2021 66 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar When Jarvis-Patrick Works Reasonably Well (figure: original points vs. Jarvis-Patrick clustering with 6 shared neighbors out of 20)
  • 67. 3/31/2021 67 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar When Jarvis-Patrick Does NOT Work Well (figure: smallest threshold, T, that does not merge clusters vs. a threshold of T - 1)
  • 68. 3/31/2021 68 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Density-Based Clustering ๏ฌ Combines: โ€“ Graph-based clustering (similarity definition based on number of shared nearest neighbors) โ€“ Density-based clustering (DBSCAN-like approach) ๏ฌ SNN density measures whether a point is surrounded by similar points (with respect to its nearest neighbors)
  • 69. 3/31/2021 69 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Clustering Algorithm 1. Compute the similarity matrix This corresponds to a similarity graph with data points for nodes and edges whose weights are the similarities between data points 2. Sparsify the similarity matrix by keeping only the k most similar neighbors This corresponds to keeping only the k strongest links of the similarity graph 3. Construct the shared nearest neighbor graph from the sparsified similarity matrix. At this point, we could apply a similarity threshold and find the connected components to obtain the clusters (Jarvis-Patrick algorithm) 4. Find the SNN density of each point. Using a user-specified parameter, Eps, find the number of points that have an SNN similarity of Eps or greater to each point. This is the SNN density of the point
  • 70. 3/31/2021 70 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Clustering Algorithm โ€ฆ 5. Find the core points Using a user-specified parameter, MinPts, find the core points, i.e., all points that have an SNN density greater than MinPts 6. Form clusters from the core points If two core points are within a โ€œradiusโ€, Eps, of each other, they are placed in the same cluster 7. Discard all noise points All non-core points that are not within a โ€œradiusโ€ of Eps of a core point are discarded 8. Assign all non-noise, non-core points to clusters This can be done by assigning such points to the nearest core point (Note that steps 4-8 are essentially DBSCAN)
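Steps 1-5 can be sketched in the same brute-force style as the Jarvis-Patrick example above; this is an illustrative sketch, not a reference implementation. The SNN similarity of two connected points is their number of shared nearest neighbors, a point's SNN density is the number of points whose SNN similarity to it is at least Eps, and core points are those whose density exceeds MinPts. The remaining DBSCAN-style steps 6-8 would then operate on the core points.

import numpy as np

def snn_density(X, k=20, eps=5, min_pts=10):
    """Sketch of steps 1-5 of SNN density-based clustering: SNN similarity, density, core points."""
    m = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(d2, np.inf)
    knn = np.argsort(d2, axis=1)[:, :k]             # step 2: keep only the k strongest links
    neighbor_sets = [set(row) for row in knn]
    snn = np.zeros((m, m), dtype=int)               # step 3: shared nearest neighbor graph
    for i in range(m):
        for j in knn[i]:
            if i in neighbor_sets[j]:               # only for mutually connected points
                snn[i, j] = snn[j, i] = len(neighbor_sets[i] & neighbor_sets[j])
    density = (snn >= eps).sum(axis=1)              # step 4: SNN density of each point
    core_points = np.where(density > min_pts)[0]    # step 5: core points
    return snn, density, core_points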
  • 71. 3/31/2021 71 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Density a) All Points b) High SNN Density c) Medium SNN Density d) Low SNN Density
  • 72. 3/31/2021 72 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Clustering Can Handle Differing Densities Original Points SNN Clustering
  • 73. 3/31/2021 73 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Clustering Can Handle Other Difficult Situations
  • 74. 3/31/2021 74 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Clusters in Sea-Surface Temperature (SST) Time-series Data Steinbach, et al (KDD 2003) (figure: 107 SNN clusters for detrended monthly Z SST (1958-1998), plotted by longitude and latitude)
  • 75. 3/31/2021 75 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SST Clusters that Correspond to El Nino Climate Indices Steinbach, et al (KDD 2003) SNN clusters of SST that are highly correlated with El Nino indices, ~0.93 correlation. El Nino regions defined by Earth scientists:
    Niño Region   Longitude Range   Latitude Range
    1+2 (94)      90°W-80°W         10°S-0°
    3 (67)        150°W-90°W        5°S-5°N
    3.4 (78)      170°W-120°W       5°S-5°N
    4 (75)        160°E-150°W       5°S-5°N
  (figure: El Nino related SST clusters 94, 67, 78, and 75, plotted by longitude and latitude)
  • 76. 3/31/2021 76 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar SNN Clusters in Sea-Level-Pressure (SLP) Time-series Data Steinbach, et al (KDD 2003) (figures: 26 SLP clusters via shared nearest neighbor clustering (100 NN, 1982-1994), and the SNN density of the SLP time series data on the globe, both plotted by longitude and latitude)
  • 77. 3/31/2021 77 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Pairs of SLP Clusters that Correspond to El-Nino (figures: time series of the centroids of SLP clusters 15 and 20, near Darwin, Australia and Tahiti, 1982-1993; and SOI vs. cluster centroid 20 minus cluster centroid 15, corr = 0.78)
  • 78. 3/31/2021 78 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar Limitations of SNN Clustering ๏ฌ Complexity of SNN clustering is high โ€“ O(n * time to find the points within Eps of a point) โ€“ In the worst case, this is O(n^2) โ€“ For lower dimensions, there are more efficient ways to find the nearest neighbors ๏ต R* Tree ๏ต k-d Trees ๏ฌ Parameterization is not easy
  • 79. Characteristics of Data, Clusters, and Clustering Algorithms ๏ฌ A cluster analysis is affected by characteristics of โ€“ Data โ€“ Clusters โ€“ Clustering algorithms ๏ฌ Looking at these characteristics gives us a number of dimensions that we can use to describe clustering algorithms and the results that they produce ยฉ Tan, Steinbach, Kumar Introduction to Data Mining 4/13/2006 79
  • 80. 3/31/2021 80 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ High dimensionality โ€“ Dimensionality reduction ๏ฌ Types of attributes โ€“ Binary, discrete, continuous, asymmetric โ€“ Mixed attribute types (e.g., some are continuous, others nominal) ๏ฌ Differences in attribute scales โ€“ Normalization techniques ๏ฌ Size of data set ๏ฌ Noise and Outliers ๏ฌ Properties of the data space โ€“ Can you define a meaningful centroid or a meaningful notion of density Characteristics of Data
  • 81. 3/31/2021 81 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ Data distribution โ€“ Parametric models ๏ฌ Shape โ€“ Globular or arbitrary shape ๏ฌ Differing sizes ๏ฌ Differing densities ๏ฌ Level of separation among clusters ๏ฌ Relationship among clusters ๏ฌ Subspace clusters Characteristics of Clusters
  • 82. 3/31/2021 82 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ Order dependence ๏ฌ Non-determinism ๏ฌ Scalability ๏ฌ Number of parameters Characteristics of Clustering Algorithms
  • 83. 3/31/2021 83 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ Type of Clustering โ€“ Taxonomy vs flat ๏ฌ Type of Cluster โ€“ Prototype vs connected regions vs density-based ๏ฌ Characteristics of Clusters โ€“ Subspace clusters, spatial inter-relationships ๏ฌ Characteristics of Data Sets and Attributes ๏ฌ Noise and Outliers ๏ฌ Number of Data Objects ๏ฌ Number of Attributes ๏ฌ Algorithmic Considerations Which Clustering Algorithm?
  • 84. 3/31/2021 84 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ We assume EM clustering using the Gaussian (normal) distribution. ๏ฌ MIN is hierarchical, EM clustering is partitional. ๏ฌ Both MIN and EM clustering are complete. ๏ฌ MIN has a graph-based (contiguity-based) notion of a cluster, while EM clustering has a prototype (or model-based) notion of a cluster. ๏ฌ MIN will not be able to distinguish poorly separated clusters, but EM can manage this in many situations. ๏ฌ MIN can find clusters of different shapes and sizes; EM clustering prefers globular clusters and can have trouble with clusters of different sizes. ๏ฌ MIN has trouble with clusters of different densities, while EM can often handle this. ๏ฌ Neither MIN nor EM clustering finds subspace clusters. Comparison of MIN and EM-Clustering
  • 85. 3/31/2021 85 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ MIN can handle outliers, but noise can join clusters; EM clustering can tolerate noise, but can be strongly affected by outliers. ๏ฌ EM can only be applied to data for which a centroid is meaningful; MIN only requires a meaningful definition of proximity. ๏ฌ EM will have trouble as dimensionality increases and the number of its parameters (the number of entries in the covariance matrix) increases as the square of the number of dimensions; MIN can work well with a suitable definition of proximity. ๏ฌ EM is designed for Euclidean data, although versions of EM clustering have been developed for other types of data. MIN is shielded from the data type by the fact that it uses a similarity matrix. ๏ฌ MIN makes no distribution assumptions; the version of EM we are considering assumes Gaussian distributions. Comparison of MIN and EM-Clustering
  • 86. 3/31/2021 86 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ EM has an O(n) time complexity; MIN is O(n^2 log(n)). ๏ฌ Because of random initialization, the clusters found by EM can vary from one run to another; MIN produces the same clusters unless there are ties in the similarity matrix. ๏ฌ Neither MIN nor EM automatically determines the number of clusters. ๏ฌ MIN does not have any user-specified parameters; EM has the number of clusters and possibly the weights of the clusters. ๏ฌ EM clustering can be viewed as an optimization problem; MIN uses a graph model of the data. ๏ฌ Neither EM nor MIN is order dependent. Comparison of MIN and EM-Clustering
  • 87. 3/31/2021 87 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ Both are partitional. ๏ฌ K-means is complete; DBSCAN is not. ๏ฌ K-means has a prototype-based notion of a cluster; DBSCAN uses a density-based notion. ๏ฌ K-means can find clusters that are not well-separated; DBSCAN will merge clusters that touch. ๏ฌ DBSCAN handles clusters of different shapes and sizes; K-means prefers globular clusters. Comparison of DBSCAN and K-means
  • 88. 3/31/2021 88 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ DBSCAN can handle noise and outliers; K-means performs poorly in the presence of outliers ๏ฌ K-means can only be applied to data for which a centroid is meaningful; DBSCAN requires a meaningful definition of density ๏ฌ DBSCAN works poorly on high-dimensional data; K-means works well for some types of high-dimensional data ๏ฌ Both techniques were designed for Euclidean data, but have been extended to other types of data ๏ฌ DBSCAN makes no distribution assumptions; K-means implicitly assumes spherical Gaussian distributions Comparison of DBSCAN and K-means
  • 89. 3/31/2021 89 Introduction to Data Mining, 2nd Edition Tan, Steinbach, Karpatne, Kumar ๏ฌ K-means has an O(n) time complexity; DBSCAN is O(n^2) ๏ฌ Because of random initialization, the clusters found by K-means can vary from one run to another; DBSCAN always produces the same clusters ๏ฌ DBSCAN automatically determines the number of clusters; K-means does not ๏ฌ K-means has only one parameter, DBSCAN has two. ๏ฌ K-means clustering can be viewed as an optimization problem and as a special case of EM clustering; DBSCAN is not based on a formal model. Comparison of DBSCAN and K-means