DATA MINING
LECTURE 7
Hierarchical Clustering, DBSCAN
The EM Algorithm
Subrata Kumer Paul
Assistant Professor, Dept. of CSE, BAUET
sksubrata96@gmail.com
CLUSTERING
What is a Clustering?
• In general, a grouping of objects such that the objects in a group (cluster) are similar (or related) to one another and different from (or unrelated to) the objects in other groups
(Figure: a good clustering maximizes inter-cluster distances and minimizes intra-cluster distances.)
Clustering Algorithms
• K-means and its variants
• Hierarchical clustering
• DBSCAN
HIERARCHICAL CLUSTERING
Hierarchical Clustering
• Two main types of hierarchical clustering
• Agglomerative:
• Start with the points as individual clusters
• At each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
• Divisive:
• Start with one, all-inclusive cluster
• At each step, split a cluster until each cluster contains a single point (or there are k clusters)
• Traditional hierarchical algorithms use a similarity or
distance matrix
• Merge or split one cluster at a time
Hierarchical Clustering
• Produces a set of nested clusters organized as a
hierarchical tree
• Can be visualized as a dendrogram
• A tree-like diagram that records the sequences of merges or splits
(Figure: nested clusters of points 1–6 in the plane and the corresponding dendrogram, with leaf order 1, 3, 2, 5, 4, 6 and merge heights between 0 and 0.2.)
Strengths of Hierarchical Clustering
• Do not have to assume any particular number of
clusters
• Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
• They may correspond to meaningful taxonomies
• Example in biological sciences (e.g., animal kingdom,
phylogeny reconstruction, …)
Agglomerative Clustering Algorithm
• More popular hierarchical clustering technique
• Basic algorithm is straightforward
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4. Merge the two closest clusters
5. Update the proximity matrix
6. Until only a single cluster remains
• Key operation is the computation of the proximity
of two clusters
• Different approaches to defining the distance between
clusters distinguish the different algorithms
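For instance, here is a minimal sketch of this basic agglomerative loop, assuming a precomputed NumPy distance matrix and single-link (MIN) proximity between clusters; the function name and stopping rule are illustrative, not a reference implementation.

```python
import numpy as np

def agglomerative_min(dist, k=1):
    """Naive single-link (MIN) agglomerative clustering over a distance matrix.

    dist : (n, n) symmetric array of pairwise distances
    k    : stop when k clusters remain
    """
    n = dist.shape[0]
    clusters = [[i] for i in range(n)]                 # step 2: every point starts as its own cluster
    while len(clusters) > k:                           # step 3: repeat until one (or k) cluster(s) remain
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single-link proximity: smallest pairwise distance between the two clusters
                d = min(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, _ = best                                  # step 4: merge the two closest clusters
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]                                 # step 5: proximities are recomputed on the fly above
    return clusters
```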
Starting Situation
• Start with clusters of individual points and a
proximity matrix
(Figure: individual points p1–p5 and the initial proximity matrix, with one row and one column per point.)
Intermediate Situation
• After some merging steps, we have some clusters
(Figure: clusters C1–C5 and the proximity matrix, now with one row and one column per cluster.)
Intermediate Situation
• We want to merge the two closest clusters (C2 and C5) and
update the proximity matrix.
(Figure: the same clusters C1–C5, with C2 and C5 marked as the closest pair to be merged, and the corresponding proximity matrix.)
After Merging
• The question is “How do we update the proximity matrix?”
(Figure: after merging C2 and C5, the row and column of the new cluster C2 ∪ C5 in the proximity matrix are marked with “?” and must be recomputed.)
How to Define Inter-Cluster Similarity
(Figure: two clusters of points and their proximity matrix. How do we define the similarity between the two clusters?)
• MIN
• MAX
• Group Average
• Distance Between Centroids
• Other methods driven by an objective function
  – Ward’s Method uses squared error
Single Link – Complete Link
• Another way to view the processing of the hierarchical algorithm is that we create links between the elements of the clusters in order of increasing distance
• MIN (Single Link) will merge two clusters as soon as a single pair of their elements is linked
• MAX (Complete Link) will merge two clusters only when all pairs of their elements have been linked
Hierarchical Clustering: MIN
Nested Clusters Dendrogram
(Figure: nested clusters and the single-link dendrogram for points 1–6; leaf order 3, 6, 2, 5, 4, 1, merge heights between 0 and 0.2.)

Distance matrix for points 1–6:

      1     2     3     4     5     6
 1    0   .24   .22   .37   .34   .23
 2  .24     0   .15   .20   .14   .25
 3  .22   .15     0   .15   .28   .11
 4  .37   .20   .15     0   .29   .22
 5  .34   .14   .28   .29     0   .39
 6  .23   .25   .11   .22   .39     0
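As a quick check, the dendrogram above can be reproduced with SciPy’s hierarchical clustering routines; a small sketch, assuming the distance matrix is typed in exactly as in the table (point labels 1–6 become 0-based indices):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# distance matrix from the slide (points 1-6)
D = np.array([
    [0,   .24, .22, .37, .34, .23],
    [.24, 0,   .15, .20, .14, .25],
    [.22, .15, 0,   .15, .28, .11],
    [.37, .20, .15, 0,   .29, .22],
    [.34, .14, .28, .29, 0,   .39],
    [.23, .25, .11, .22, .39, 0  ],
])

Z = linkage(squareform(D), method='single')   # MIN / single link on the condensed distances
print(Z)                                      # merge order and heights (3-6 merge first at 0.11, then 2-5 at 0.14, ...)
# scipy.cluster.hierarchy.dendrogram(Z, labels=[1, 2, 3, 4, 5, 6]) plots the dendrogram
```

Changing method='single' to 'complete' or 'average' gives the MAX and Group Average results shown on the later slides.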
Strength of MIN
Original Points Two Clusters
• Can handle non-elliptical shapes
Limitations of MIN
Original Points Two Clusters
• Sensitive to noise and outliers
Hierarchical Clustering: MAX
Nested Clusters Dendrogram
(Figure: nested clusters and the complete-link dendrogram for points 1–6; leaf order 3, 6, 4, 1, 2, 5, merge heights between 0 and 0.4. Same distance matrix as the MIN example.)
Strength of MAX
Original Points Two Clusters
• Less susceptible to noise and outliers
Limitations of MAX
Original Points Two Clusters
•Tends to break large clusters
•Biased towards globular clusters
Cluster Similarity: Group Average
• Proximity of two clusters is the average of pairwise proximity
between points in the two clusters.
• Need to use average connectivity for scalability since total
proximity favors large clusters
$$\text{proximity}(\text{Cluster}_i, \text{Cluster}_j) = \frac{\displaystyle\sum_{p_i \in \text{Cluster}_i}\;\sum_{p_j \in \text{Cluster}_j} \text{proximity}(p_i, p_j)}{|\text{Cluster}_i| \cdot |\text{Cluster}_j|}$$

(Same distance matrix as the MIN example.)
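A direct translation of this formula, assuming the same 6-point distance matrix stored as a NumPy array (a sketch for illustration, not part of any library):

```python
import numpy as np

# the 6-point distance matrix from the MIN example
D = np.array([
    [0,   .24, .22, .37, .34, .23],
    [.24, 0,   .15, .20, .14, .25],
    [.22, .15, 0,   .15, .28, .11],
    [.37, .20, .15, 0,   .29, .22],
    [.34, .14, .28, .29, 0,   .39],
    [.23, .25, .11, .22, .39, 0  ],
])

def group_average(ci, cj, dist=D):
    """Average pairwise proximity between clusters ci and cj (0-based point indices)."""
    return dist[np.ix_(ci, cj)].mean()

# e.g. proximity between clusters {3, 6} and {2, 5} (1-based point labels)
print(group_average([2, 5], [1, 4]))   # mean of d(3,2), d(3,5), d(6,2), d(6,5) = 0.2675
```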
Hierarchical Clustering: Group Average
Nested Clusters Dendrogram
(Figure: nested clusters and the group-average dendrogram for points 1–6; leaf order 3, 6, 4, 1, 2, 5, merge heights between 0 and 0.25. Same distance matrix as the MIN example.)
Hierarchical Clustering: Group Average
• Compromise between Single and
Complete Link
• Strengths
• Less susceptible to noise and outliers
• Limitations
• Biased towards globular clusters
Cluster Similarity: Ward’s Method
• Similarity of two clusters is based on the increase
in squared error (SSE) when two clusters are
merged
• Similar to group average if distance between points is
distance squared
• Less susceptible to noise and outliers
• Biased towards globular clusters
• Hierarchical analogue of K-means
• Can be used to initialize K-means
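A minimal sketch of that last idea, assuming SciPy and scikit-learn and made-up 2-D data: run Ward’s method, cut the tree, and hand the cluster means to K-means as initial centroids.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

X = np.random.default_rng(0).normal(size=(200, 2))     # toy data

Z = linkage(X, method='ward')                           # merges that minimize the increase in SSE
labels = fcluster(Z, t=3, criterion='maxclust')         # cut the tree into 3 clusters

# use the Ward cluster means as initial centroids for K-means
init_centers = np.vstack([X[labels == c].mean(axis=0) for c in np.unique(labels)])
km = KMeans(n_clusters=3, init=init_centers, n_init=1).fit(X)
```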
Hierarchical Clustering: Comparison
(Figure: side-by-side comparison of the nested clusters produced by MIN, MAX, Group Average, and Ward’s Method on the same six points.)
Hierarchical Clustering:
Time and Space requirements
• O(N²) space since it uses the proximity matrix
• N is the number of points
• O(N³) time in many cases
• There are N steps, and at each step the proximity matrix of size N² must be updated and searched
• Complexity can be reduced to O(N² log(N)) time for some approaches
Hierarchical Clustering:
Problems and Limitations
• Computational complexity in time and space
• Once a decision is made to combine two clusters, it
cannot be undone
• No objective function is directly minimized
• Different schemes have problems with one or more of
the following:
• Sensitivity to noise and outliers
• Difficulty handling different sized clusters and convex shapes
• Breaking large clusters
DBSCAN
DBSCAN: Density-Based Clustering
• DBSCAN is a Density-Based Clustering algorithm
• Reminder: In density based clustering we partition points
into dense regions separated by not-so-dense regions.
• Important Questions:
• How do we measure density?
• What is a dense region?
• DBSCAN:
• Density at point p: number of points within a circle of radius Eps
• Dense Region: A circle of radius Eps that contains at least MinPts
points
DBSCAN
• Characterization of points
• A point is a core point if it has more than a specified
number of points (MinPts) within Eps
• These points belong in a dense region and are at the interior of
a cluster
• A border point has fewer than MinPts points within Eps, but is in the neighborhood of a core point.
• A noise point is any point that is not a core point or a
border point.
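A brute-force sketch of these definitions in NumPy (a hypothetical helper, not an official DBSCAN implementation; real implementations answer the Eps-neighborhood queries with spatial indexes):

```python
import numpy as np

def point_types(X, eps, min_pts):
    """Label each point of X as core, border, or noise following the definitions above."""
    # pairwise Euclidean distances (brute force)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = dist <= eps                                     # Eps-neighborhoods (each point includes itself)
    core = neighbors.sum(axis=1) >= min_pts                     # core: at least MinPts points within Eps
    border = ~core & (neighbors & core[None, :]).any(axis=1)    # border: not core, but near some core point
    noise = ~core & ~border                                     # noise: neither core nor border
    return core, border, noise
```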
DBSCAN: Core, Border, and Noise Points
DBSCAN: Core, Border and Noise Points
Original Points
Point types: core,
border and noise
Eps = 10, MinPts = 4
Density-Connected points
• Density edge
• We place an edge between two core
points q and p if they are within
distance Eps.
• Density-connected
• A point p is density-connected to a
point q if there is a path of edges
from p to q
(Figure: a density edge between two core points p and q within distance Eps, and a density-connected path from p to q through intermediate core points.)
DBSCAN Algorithm
• Label points as core, border and noise
• Eliminate noise points
• For every core point p that has not been assigned
to a cluster
• Create a new cluster with the point p and all the
points that are density-connected to p.
• Assign border points to the cluster of the closest
core point.
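In practice this scheme is available off the shelf; a minimal usage sketch with scikit-learn’s DBSCAN, using made-up data and parameters:

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.random.default_rng(0).normal(size=(300, 2))   # toy data in place of a real dataset

db = DBSCAN(eps=0.3, min_samples=4).fit(X)           # eps = Eps, min_samples = MinPts
labels = db.labels_                                  # cluster ids; noise points get the label -1
core_idx = db.core_sample_indices_                   # indices of the core points
```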
DBSCAN: Determining Eps and MinPts
• Idea is that for points in a cluster, their kth nearest neighbors
are at roughly the same distance
• Noise points have their kth nearest neighbor at a farther distance
• So, plot sorted distance of every point to its kth nearest
neighbor
• Find the distance d where there is a “knee” in the curve
• Eps = d, MinPts = k
(Figure: sorted 4th-nearest-neighbor distances; the knee suggests Eps ≈ 7–10 for MinPts = 4.)
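The sorted k-distance plot itself takes only a few lines; a sketch assuming scikit-learn’s nearest-neighbor search and toy data standing in for the real dataset:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import NearestNeighbors

X = np.random.default_rng(0).normal(size=(300, 2))   # stand-in for the real dataset

k = 4                                                # MinPts
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)      # k+1 because each point is its own nearest neighbor
dists, _ = nn.kneighbors(X)
kth = np.sort(dists[:, k])                           # sorted distance to the k-th nearest neighbor

plt.plot(kth)                                        # look for the "knee"; Eps = distance at the knee, MinPts = k
plt.ylabel("distance to 4th nearest neighbor")
plt.show()
```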
When DBSCAN Works Well
Original Points
Clusters
• Resistant to Noise
• Can handle clusters of different shapes and sizes
When DBSCAN Does NOT Work Well
Original Points
(MinPts=4, Eps=9.75).
(MinPts=4, Eps=9.92)
• Varying densities
• High-dimensional data
DBSCAN: Sensitive to Parameters
Other algorithms
• PAM, CLARANS: solutions for the k-medoids problem
• BIRCH: constructs a hierarchical tree that acts as a summary of the data, and then clusters the leaves
• MST: clustering using the Minimum Spanning Tree
• ROCK: clusters categorical data by neighbor and link analysis
• LIMBO, COOLCAT: cluster categorical data using information-theoretic tools
• CURE: hierarchical algorithm that uses a different representation of the clusters
• CHAMELEON: hierarchical algorithm that uses closeness and interconnectivity for merging
MIXTURE MODELS AND
THE EM ALGORITHM
Model-based clustering
• In order to understand our data, we will assume that there
is a generative process (a model) that creates/describes
the data, and we will try to find the model that best fits the
data.
• Models of different complexity can be defined, but we will assume
that our model is a distribution from which data points are sampled
• Example: the data is the height of all people in Greece
• In most cases, a single distribution is not good enough to
describe all data points: different parts of the data follow a
different distribution
• Example: the data is the height of all people in Greece and China
• We need a mixture model
• Different distributions correspond to different clusters in the data.
Gaussian Distribution
• Example: the data is the height of all people in
Greece
• Experience has shown that this data follows a Gaussian
(Normal) distribution
• Reminder: Normal distribution:
• 𝜇 = mean, 𝜎 = standard deviation
$$P(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
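As a sanity check, the formula can be evaluated directly and compared with SciPy’s normal density (the mean and standard deviation below are made up):

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 178.0, 8.0        # hypothetical mean and standard deviation of heights (cm)
x = 185.0

p_formula = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
p_scipy = norm.pdf(x, loc=mu, scale=sigma)
print(p_formula, p_scipy)     # the two values agree
```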
Gaussian Model
• What is a model?
• A Gaussian distribution is fully defined by the mean
𝜇 and the standard deviation 𝜎
• We define our model as the pair of parameters 𝜃 =
(𝜇, 𝜎)
• This is a general principle: a model is defined as
a vector of parameters 𝜃
Fitting the model
• We want to find the normal distribution that best
fits our data
• Find the best values for 𝜇 and 𝜎
• But what does best fit mean?
Maximum Likelihood Estimation (MLE)
• Suppose that we have a vector 𝑋 = (𝑥1, … , 𝑥𝑛) of values
• And we want to fit a Gaussian 𝑁(𝜇, 𝜎) model to the data
• Probability of observing point 𝑥𝑖:
• Probability of observing all points (assume independence)
• We want to find the parameters 𝜃 = (𝜇, 𝜎) that maximize
the probability 𝑃(𝑋|𝜃)
$$P(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$

$$P(X) = \prod_{i=1}^{n} P(x_i) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$
Maximum Likelihood Estimation (MLE)
• The probability 𝑃(𝑋|𝜃) as a function of 𝜃 is called the
Likelihood function
• It is usually easier to work with the Log-Likelihood
function
• Maximum Likelihood Estimation
• Find parameters 𝜇, 𝜎 that maximize 𝐿𝐿(𝜃)
$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i-\mu)^2}{2\sigma^2}}$$

$$LL(\theta) = -\sum_{i=1}^{n} \frac{(x_i-\mu)^2}{2\sigma^2} - \frac{1}{2}\, n \log 2\pi - n \log \sigma$$

The maximizing parameters are the sample mean and the sample variance:

$$\mu = \frac{1}{n}\sum_{i=1}^{n} x_i = \mu_X \qquad\qquad \sigma^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i-\mu)^2 = \sigma_X^2$$
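In code, fitting a single Gaussian by MLE therefore reduces to computing the sample mean and the (biased, divide-by-n) sample variance; a small sketch with synthetic data:

```python
import numpy as np

X = np.random.default_rng(0).normal(loc=178, scale=8, size=1000)   # synthetic "height" data

mu_hat = X.mean()       # MLE of mu: the sample mean
var_hat = X.var()       # MLE of sigma^2: the sample variance (divides by n, not n - 1)

def log_likelihood(mu, sigma, X=X):
    n = len(X)
    return -np.sum((X - mu) ** 2) / (2 * sigma ** 2) - 0.5 * n * np.log(2 * np.pi) - n * np.log(sigma)

# the log-likelihood is highest at the MLE; nearby parameter values score lower
print(log_likelihood(mu_hat, np.sqrt(var_hat)), log_likelihood(mu_hat + 1.0, np.sqrt(var_hat)))
```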
MLE
• Note: these are also the most likely parameters
given the data
$$P(\theta|X) = \frac{P(X|\theta)\,P(\theta)}{P(X)}$$

• If we have no prior information about 𝜃 or 𝑋, then maximizing 𝑃(𝑋|𝜃) is the same as maximizing 𝑃(𝜃|𝑋)
Mixture of Gaussians
• Suppose that you have the heights of people from
Greece and China and the distribution looks like
the figure below (dramatization)
Mixture of Gaussians
• In this case the data is the result of the mixture of
two Gaussians
• One for Greek people, and one for Chinese people
• Identifying for each value which Gaussian is most likely
to have generated it will give us a clustering.
Mixture model
• A value 𝑥𝑖 is generated according to the following
process:
• First select the nationality
• With probability 𝜋𝐺 select Greek, with probability 𝜋𝐶 select Chinese (𝜋𝐺 + 𝜋𝐶 = 1)
• Given the nationality, generate the point from the corresponding Gaussian
• 𝑃(𝑥𝑖|𝜃𝐺) ~ 𝑁(𝜇𝐺, 𝜎𝐺) if Greek
• 𝑃(𝑥𝑖|𝜃𝐶) ~ 𝑁(𝜇𝐶, 𝜎𝐶) if Chinese
We can also think of the nationality as a hidden variable Z
Mixture Model
• Our model has the following parameters Θ = (𝜋𝐺, 𝜋𝐶, 𝜇𝐺, 𝜇𝐶, 𝜎𝐺, 𝜎𝐶): the mixture probabilities 𝜋𝐺, 𝜋𝐶 and the distribution parameters 𝜇𝐺, 𝜇𝐶, 𝜎𝐺, 𝜎𝐶
• For value 𝑥𝑖, we have:
$$P(x_i|\Theta) = \pi_G\, P(x_i|\theta_G) + \pi_C\, P(x_i|\theta_C)$$
• For all values 𝑋 = (𝑥1, … , 𝑥𝑛):
$$P(X|\Theta) = \prod_{i=1}^{n} P(x_i|\Theta)$$
• We want to estimate the parameters that maximize the Likelihood of the data
Mixture Models
• Once we have the parameters Θ =
(𝜋𝐺, 𝜋𝐶, 𝜇𝐺, 𝜇𝐶, 𝜎𝐺, 𝜎𝐶) we can estimate the
membership probabilities 𝑃 𝐺 𝑥𝑖 and 𝑃 𝐶 𝑥𝑖 for
each point 𝑥𝑖:
• This is the probability that point 𝑥𝑖 belongs to the Greek
or the Chinese population (cluster)
$$P(G|x_i) = \frac{P(x_i|G)\,P(G)}{P(x_i|G)\,P(G) + P(x_i|C)\,P(C)} = \frac{P(x_i|G)\,\pi_G}{P(x_i|G)\,\pi_G + P(x_i|C)\,\pi_C}$$
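Given current parameter values, this membership probability is a one-liner with SciPy’s normal density; a sketch with made-up parameters:

```python
import numpy as np
from scipy.stats import norm

# hypothetical current values of Theta
pi_G, mu_G, sigma_G = 0.5, 178.0, 8.0
pi_C, mu_C, sigma_C = 0.5, 170.0, 7.0

x = np.array([165.0, 175.0, 185.0])            # a few data points

num_G = pi_G * norm.pdf(x, mu_G, sigma_G)      # pi_G * P(x_i | G)
num_C = pi_C * norm.pdf(x, mu_C, sigma_C)      # pi_C * P(x_i | C)
P_G = num_G / (num_G + num_C)                  # P(G | x_i) for each point
P_C = 1.0 - P_G                                # P(C | x_i)
```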
EM (Expectation Maximization) Algorithm
• Initialize the values of the parameters in Θ to some
random values
• Repeat until convergence
• E-Step: Given the parameters Θ, estimate the membership probabilities 𝑃(𝐺|𝑥𝑖) and 𝑃(𝐶|𝑥𝑖)
• M-Step: Compute the parameter values that (in expectation)
maximize the data likelihood
$$\pi_G = \frac{1}{n}\sum_{i=1}^{n} P(G|x_i) \qquad \pi_C = \frac{1}{n}\sum_{i=1}^{n} P(C|x_i) \qquad \text{(fraction of the population in G, C)}$$

$$\mu_G = \sum_{i=1}^{n} \frac{P(G|x_i)}{n\,\pi_G}\, x_i \qquad\qquad \mu_C = \sum_{i=1}^{n} \frac{P(C|x_i)}{n\,\pi_C}\, x_i$$

$$\sigma_G^2 = \sum_{i=1}^{n} \frac{P(G|x_i)}{n\,\pi_G}\,(x_i - \mu_G)^2 \qquad\qquad \sigma_C^2 = \sum_{i=1}^{n} \frac{P(C|x_i)}{n\,\pi_C}\,(x_i - \mu_C)^2$$

These are the MLE estimates if the 𝜋’s were fixed.
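Putting the E-step and M-step together for a two-component, one-dimensional mixture gives the sketch below (NumPy only; a fixed number of iterations stands in for a convergence test, and the initialization is arbitrary):

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def em_two_gaussians(x, n_iter=100, seed=0):
    """EM for a mixture of two 1-D Gaussians (components 'G' and 'C')."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # initialize Theta = (pi_G, pi_C, mu_G, mu_C, sigma_G, sigma_C); pi_C is kept as 1 - pi_G
    pi_G = 0.5
    mu_G, mu_C = rng.choice(x, 2, replace=False)
    sigma_G = sigma_C = x.std()
    for _ in range(n_iter):
        # E-step: membership probabilities P(G|x_i) and P(C|x_i)
        num_G = pi_G * normal_pdf(x, mu_G, sigma_G)
        num_C = (1 - pi_G) * normal_pdf(x, mu_C, sigma_C)
        p_G = num_G / (num_G + num_C)
        p_C = 1.0 - p_G
        # M-step: the MLE-style updates from the slide
        pi_G = p_G.mean()
        mu_G = np.sum(p_G * x) / (n * pi_G)
        mu_C = np.sum(p_C * x) / (n * (1 - pi_G))
        sigma_G = np.sqrt(np.sum(p_G * (x - mu_G) ** 2) / (n * pi_G))
        sigma_C = np.sqrt(np.sum(p_C * (x - mu_C) ** 2) / (n * (1 - pi_G)))
    return pi_G, mu_G, mu_C, sigma_G, sigma_C
```

On data drawn from two reasonably separated Gaussians (say means 160 and 180), the recovered means typically land close to the true ones; the final membership probabilities P(G|x_i) then give the (soft) clustering.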
Relationship to K-means
• E-Step: Assignment of points to clusters
• K-means: hard assignment, EM: soft assignment
• M-Step: Computation of centroids
• K-means assumes common fixed variance (spherical
clusters)
• EM: can change the variance for different clusters or different dimensions (ellipsoid clusters)
• If the variance is fixed then both minimize the
same error function
  • 60. Relationship to K-means • E-Step: Assignment of points to clusters • K-means: hard assignment, EM: soft assignment • M-Step: Computation of centroids • K-means assumes common fixed variance (spherical clusters) • EM: can change the variance for different clusters or different dimensions (elipsoid clusters) • If the variance is fixed then both minimize the same error function