Representative-Based Clustering
Data Mining and Text Mining (UIC 583 @ Politecnico di Milano)
Prof. Pier Luca Lanzi
Readings
• Mining of Massive Datasets (Chapter 7)
• Data Mining and Analysis (Section 13.3)
How can we represent clusters?
Representation-Based Algorithms
• Given a dataset of N instances and a desired number of clusters k, this class of algorithms generates a partition C of the N instances into k clusters {C1, C2, …, Ck}
• For each cluster there is a point that summarizes the cluster; the most common choice is the mean of the points in the cluster,

$\mu_i = \frac{1}{n_i} \sum_{x \in C_i} x$

where $n_i = |C_i|$ and $\mu_i$ is the centroid
Representation-Based Algorithms
• The goal of the clustering process is to select the best partition according to some scoring function
• The sum of squared errors (SSE) is the most common scoring function,

$\mathrm{SSE}(\mathcal{C}) = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$

• The goal of the clustering process is thus to find

$\mathcal{C}^{\ast} = \arg\min_{\mathcal{C}} \mathrm{SSE}(\mathcal{C})$

• Brute-force Approach
§ Generate all the possible clusterings C = {C1, C2, …, Ck} and select the best one. Unfortunately, there are $O(k^N/k!)$ possible partitions
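As a concrete reference, here is a minimal R sketch of the SSE scoring function, assuming a numeric matrix X of instances and an integer vector of cluster labels (the names are illustrative):

# Minimal sketch: SSE of a partition, assuming X is a numeric matrix
# and `assignment` holds one cluster label per row of X
sse <- function(X, assignment) {
  total <- 0
  for (i in unique(assignment)) {
    Ci <- X[assignment == i, , drop = FALSE]  # points of cluster i
    mu <- colMeans(Ci)                        # centroid of cluster i
    total <- total + sum(sweep(Ci, 2, mu)^2)  # squared distances to centroid
  }
  total
}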
k-Means Algorithm
• Most widely known representative-based algorithm
• Assumes a Euclidean space but can be easily extended to the non-Euclidean case
• Employs a greedy iterative approach that minimizes the SSE objective; accordingly, it can converge to a locally optimal instead of a globally optimal clustering
1. Initially choose k points that are
likely to be in different clusters;
2. Make these points the centroids of
their clusters;
3. FOR each remaining point p DO
Find the centroid to which p is closest;
Add p to the cluster of that centroid;
Adjust the centroid of that
cluster to account for p;
END;
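The pseudocode above builds the clusters point by point; the standard batch formulation (Lloyd's algorithm) instead alternates a full assignment step and a full update step. A minimal, didactic R sketch of that loop, assuming a numeric matrix X and a matrix of initial centroids (in practice one would simply call the built-in kmeans()):

# Hand-rolled sketch of Lloyd's k-means iterations; didactic only
lloyd_kmeans <- function(X, centroids, max_iter = 100) {
  k <- nrow(centroids)
  for (iter in 1:max_iter) {
    # assignment step: each point joins the cluster of its closest centroid
    d <- as.matrix(dist(rbind(centroids, X)))[-(1:k), 1:k, drop = FALSE]
    assignment <- apply(d, 1, which.min)
    # update step: each centroid becomes the mean of its assigned points
    new_centroids <- centroids
    for (i in 1:k)
      if (any(assignment == i))
        new_centroids[i, ] <- colMeans(X[assignment == i, , drop = FALSE])
    if (all(new_centroids == centroids)) break  # converged
    centroids <- new_centroids
  }
  list(centers = centroids, cluster = assignment)
}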
Initializing Clusters
• Solution 1
§Pick points that are as far away from one another as possible.
• Variation of solution 1 (a code sketch follows this slide)
Pick the first point at random;
WHILE there are fewer than k points DO
Add the point whose minimum distance
from the selected points is as large as
possible;
END;
• Solution 2
§Cluster a sample of the data, perhaps hierarchically, so there
are k clusters. Pick a point from each cluster, perhaps that
point closest to the centroid of the cluster.
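A minimal R sketch of the farthest-first variation of Solution 1, assuming a numeric matrix X of points (the function name and arguments are illustrative):

# Farthest-first initialization sketch: pick the first point at random,
# then repeatedly add the point whose minimum distance from the points
# selected so far is as large as possible
farthest_first <- function(X, k) {
  centers <- X[sample(nrow(X), 1), , drop = FALSE]
  while (nrow(centers) < k) {
    m <- nrow(centers)
    d <- as.matrix(dist(rbind(centers, X)))[-(1:m), 1:m, drop = FALSE]
    mind <- apply(d, 1, min)      # distance to the nearest selected point
    centers <- rbind(centers, X[which.max(mind), , drop = FALSE])
  }
  centers
}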
Two different K-means Clusterings

[Figure: three scatter plots of the same data — the original points, an optimal clustering, and a sub-optimal clustering produced by a different run of k-means.]
Importance of Choosing the Initial Centroids

[Figure: six snapshots (Iterations 1–6) of k-means converging from one choice of initial centroids.]
Importance of Choosing the Initial Centroids (continued)

[Figure: five snapshots (Iterations 1–5) of k-means converging from a different choice of initial centroids.]
Why Selecting the Best Initial Centroids Is Difficult
• If there are K ‘real’ clusters then the chance of selecting one centroid from each cluster is small
• The chance is relatively small when K is large
• If the clusters are all of the same size n, then

$P = \frac{\text{ways to select one centroid from each cluster}}{\text{ways to select } K \text{ centroids}} = \frac{K!\, n^K}{(Kn)^K} = \frac{K!}{K^K}$

• For example, if K = 10, then the probability is $10!/10^{10} = 0.00036$
• Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t
• Consider an example of five pairs of clusters
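The quoted probability can be checked directly in R:

# probability of picking one initial centroid from each of
# K = 10 equally sized clusters
factorial(10) / 10^10   # 0.00036288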
Ten Clusters Example

[Figure: four snapshots (Iterations 1–4) of k-means, starting with two initial centroids in one cluster of each pair of clusters.]
Ten Clusters Example (continued)

[Figure: four snapshots (Iterations 1–4) of k-means, starting with some pairs of clusters having three initial centroids, while others have only one.]
Dealing with the Initial Centroids Issue
• Multiple runs help, but the probability is not on your side
• Sample and use another clustering method (hierarchical?) to
determine initial centroids
• Select more than k initial centroids and then select among these
initial centroids
• Postprocessing
• Bisecting K-means, not as susceptible to initialization issues
Updating Centers Incrementally
• In the basic K-means algorithm, centroids are updated after all
points are assigned to a centroid
• An alternative is to update the centroids after each assignment
(incremental approach)
§Each assignment updates zero or two centroids (see the sketch after this list)
§More expensive
§Introduces an order dependency
§Never get an empty cluster
§Can use “weights” to change the impact
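To see why an assignment touches at most two centroids, here is a hedged R sketch of the incremental update (names are illustrative): moving point p from cluster a to cluster b recomputes only those two means.

# Incremental centroid update sketch: moving p between two clusters
# adjusts exactly those two centroids; all others are untouched
# (assumes cluster a has na > 1 points)
move_point <- function(p, mu_a, na, mu_b, nb) {
  list(mu_a = (mu_a * na - p) / (na - 1),  # remove p from cluster a
       mu_b = (mu_b * nb + p) / (nb + 1))  # add p to cluster b
}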
Pre-processing and Post-processing
• Pre-processing
§Normalize the data
§Eliminate outliers
• Post-processing
§Eliminate small clusters that may represent outliers
§Split ‘loose’ clusters, i.e., clusters with relatively high SSE
§Merge clusters that are ‘close’ and
that have relatively low SSE
§These steps can be used during the clustering process
Bisecting K-means
• Variant of K-means that can produce
a partitional or a hierarchical clustering
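A minimal R sketch of bisecting k-means, assuming a numeric matrix X and that each intermediate cluster contains at least two distinct points (real code would guard against degenerate splits):

# Bisecting k-means sketch: repeatedly split the cluster with the
# largest SSE into two using standard 2-means
bisecting_kmeans <- function(X, k) {
  clusters <- list(X)
  while (length(clusters) < k) {
    sse_of <- sapply(clusters, function(C) sum(sweep(C, 2, colMeans(C))^2))
    worst  <- which.max(sse_of)                 # cluster with largest SSE
    km     <- kmeans(clusters[[worst]], centers = 2)
    halves <- split.data.frame(clusters[[worst]], km$cluster)
    clusters <- c(clusters[-worst], halves)
  }
  clusters   # a list of k row-subsets of X
}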
Bisecting K-means Example
Limitations of k-Means
Limitations of K-means
• K-means has problems when clusters are of differing
§Sizes
§Densities
§Non-globular shapes
• K-means also has problems when the data contains outliers
Limitations of K-means: Differing Sizes

[Figure: original points vs. k-means (3 clusters).]
Limitations of K-means: Differing Density

[Figure: original points vs. k-means (3 clusters).]
Limitations of K-means: Non-globular Shapes

[Figure: original points vs. k-means (2 clusters).]
Overcoming K-means Limitations

[Figure: original points vs. k-means clusters.]
One solution is to use many clusters: k-means then finds parts of the natural clusters, which need to be put back together.
Overcoming K-means Limitations (continued)

[Figure: original points vs. k-means clusters, using many clusters.]
Overcoming K-means Limitations (continued)

[Figure: original points vs. k-means clusters, using many clusters.]
K-Means Clustering Summary
• Strength
§Relatively efficient
§Often terminates at a local optimum
§The global optimum may be found using techniques such as deterministic annealing and genetic algorithms
• Weakness
§Applicable only when a mean is defined (what about categorical data?)
§Need to specify k, the number of clusters, in advance
§Unable to handle noisy data and outliers
§Not suitable for discovering clusters with non-convex shapes
K-Means Clustering Summary (continued)
• Advantages
§Simple, understandable
§Items automatically assigned to clusters
• Disadvantages
§Must pick the number of clusters beforehand
§All items are forced into a cluster
§Too sensitive to outliers
Variations of the K-Means Method
• A few variants of k-means differ in
§Selection of the initial k means
§Dissimilarity calculations
§Strategies to calculate cluster means
• Handling categorical data: k-modes
§Replacing means of clusters with modes
§Using new dissimilarity measures
to deal with categorical objects
§Using a frequency-based method
to update modes of clusters
§A mixture of categorical and numerical data:
k-prototype method
The BFR Algorithm
The BFR Algorithm
• BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to
handle very large (disk-resident) data sets
• Assumes that clusters are normally distributed around a centroid
in a Euclidean space
• Standard deviations in different dimensions may vary
• Clusters are axis-aligned ellipses
• Efficient way to summarize clusters (we want the memory required to be O(clusters) rather than O(data))
The BFR Algorithm
• Points are read from disk one chunk at a time (so that each chunk fits into main memory)
• Most points from previous memory loads are summarized by
simple statistics
• To begin, from the initial load we select the initial k centroids by
some sensible approach
§Take k random points
§Take a small random sample and cluster optimally
§Take a sample; pick a random point, and then
k–1 more points, each as far from the previously selected
points as possible
Three Classes of Points
• Discard set (DS)
§Points close enough to a centroid to be summarized
• Compression set (CS)
§Groups of points that are close together but not close to any
existing centroid
§These points are summarized, but not assigned to a cluster
• Retained set (RS)
§Isolated points waiting to be assigned to a compression set
The Status of the BFR Algorithm

[Figure: a cluster whose points are in the DS, with its centroid; compressed sets whose points are in the CS; isolated points in the RS.]
Discard set (DS): close enough to a centroid to be summarized
Compression set (CS): summarized, but not assigned to a cluster
Retained set (RS): isolated points
Summarizing Sets of Points
• For each cluster, the discard set (DS) is summarized by:
• The number of points, N
• The vector SUM, whose component SUM(i) is the sum of the
coordinates of the points in the ith dimension
• The vector SUMSQ whose component SUMSQ(i) is the sum of
squares of coordinates in ith dimension
[Figure: a cluster, all of whose points are in the DS, and its centroid.]
Summarizing Points: Comments
• 2d + 1 values represent any size cluster
(d is the number of dimensions)
• Average in each dimension (the centroid) can be calculated as
SUM(i)/N
• The variance of a cluster’s discard set in dimension i is computed as $\mathrm{SUMSQ}(i)/N - (\mathrm{SUM}(i)/N)^2$
• The standard deviation is the square root of that variance
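A small R sketch of these computations, assuming SUM and SUMSQ are numeric vectors of length d (the function name is illustrative):

# Recover centroid, variance, and standard deviation of a discard set
# from its (N, SUM, SUMSQ) summary; all operations are per dimension
bfr_stats <- function(N, SUM, SUMSQ) {
  centroid <- SUM / N
  variance <- SUMSQ / N - (SUM / N)^2
  list(centroid = centroid, sd = sqrt(variance))
}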
Processing Data in the BFR Algorithm
1. First, all points that are “sufficiently close” to the centroid of a cluster are added to that cluster (by updating its parameters); each such point is then discarded
2. The points that are not “sufficiently close” to any centroid are clustered along with the points in the retained set. Any clustering algorithm can be used in this step, even a hierarchical one.
3. The miniclusters derived from the new points and the old retained set are merged (e.g., using the same criteria used for hierarchical clustering)
4. Any point outside a cluster or a minicluster remains in the retained set.
When the last chunk of data is processed, the remaining miniclusters and the points in the retained set can either be labeled as outliers or be assigned to the nearest centroid (as k-means would do).
Note that for miniclusters we only have N, SUM, and SUMSQ, so it is easier to use criteria based on variance and similar statistics. For example, we might combine two miniclusters if their combined variance is below some threshold.
“Sufficiently Close”
• Two approaches have been proposed to determine whether a point is
sufficiently close to a cluster
• Add p to a cluster if
§ It has the centroid closest to p
§ It is also very unlikely that, after all the points have been processed, some
other cluster centroid will be found to be nearer to p
• We can measure the probability that, if p belongs to a cluster, it would be
found as far as it is from the centroid of that cluster
§ This is where the assumption about the clusters containing normally
distributed points aligned with the axes of the space is used
Mahalanobis Distance
• It is used to decide whether a point is close enough to a cluster
• It is computed as the distance between a point and the centroid of a cluster, normalized by the standard deviation of the cluster in each dimension
• Given p = (p1, …, pd) and c = (c1, …, cd), the Mahalanobis distance between p and c is computed as

$d(p, c) = \sqrt{\sum_{i=1}^{d} \left(\frac{p_i - c_i}{\sigma_i}\right)^2}$

where $\sigma_i$ is the standard deviation of the cluster in dimension i
• We assign p to the cluster with the smallest Mahalanobis distance from p, provided that the distance is below a certain threshold. A threshold of 4 means there is only about one chance in a million of excluding a point that truly belongs to the cluster
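A short R sketch of the axis-aligned version used here, where sigma is the vector of per-dimension standard deviations (recoverable from the BFR summaries above):

# Mahalanobis distance of point p from centroid centr, normalized by
# the per-dimension standard deviations sigma (axis-aligned clusters)
mahalanobis_axis <- function(p, centr, sigma) {
  sqrt(sum(((p - centr) / sigma)^2))
}
# assign p to the cluster with the smallest value, if below the threshold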
k-Means for Arbitrary Shapes
(the CURE algorithm)
The CURE Algorithm
• Problem with BFR/k-means:
§Assumes clusters are normally
distributed in each dimension
§And axes are fixed – ellipses at
an angle are not OK
• CURE (Clustering Using REpresentatives):
§Assumes a Euclidean distance
§Allows clusters to assume any shape
§Uses a collection of representative
points to represent clusters
[Figure: an axis-aligned ellipse vs. an ellipse at an angle.]
[Figure: cluster shapes that k-means and BFR can find — “and these?” (arbitrarily shaped clusters).]
[Scatter plot: salary vs. age — salary of humanities (h) vs. engineering (e).]
Starting CURE – Pass 1 of 2
• Pick a random sample of points that fit into main memory
• Cluster sample points to create initial clusters (e.g. using
hierarchical clustering)
• Pick representative points
§For each cluster, pick k representative points (as dispersed as possible)
§Create synthetic representative points by moving the k points toward the centroid of the cluster (e.g., by 20% of the distance)
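A minimal R sketch of the representative-point construction, reusing the farthest_first sketch given earlier; m (number of representatives) and alpha (the shrink fraction, e.g. 0.2) are illustrative names:

# For one cluster C (numeric matrix), pick m well-scattered points and
# move each a fraction alpha of the way toward the cluster centroid
cure_representatives <- function(C, m = 5, alpha = 0.2) {
  reps <- farthest_first(C, m)                        # scattered points
  mu   <- matrix(colMeans(C), nrow(reps), ncol(C), byrow = TRUE)
  reps + alpha * (mu - reps)                          # shrink toward centroid
}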
[Scatter plot: the same salary vs. age data, showing the synthetic representative points of the clusters.]
Starting CURE – Pass 2 of 2
• Rescan the whole dataset (from secondary storage) and, for each point p, place p in the “closest cluster”, that is, the cluster with the representative point closest to p
Expectation Maximization
Expectation-Maximization (EM)
Clustering
• k-means assigns each point to only one cluster (hard assignment)
• The approach can be extended to consider soft assignment of points to
clusters, so that each point has a probability of belonging to each cluster
• We assume that each cluster Ci is characterized by a multivariate normal
distribution and thus identified by
§ The mean vector μi
§ The covariance matrix Σi
• A clustering is identified by a parameter vector θ defined as

$\theta = \{\mu_1, \Sigma_1, P(C_1), \ldots, \mu_k, \Sigma_k, P(C_k)\}$

where the P(Ci) are the prior probabilities of the clusters, which sum up to one
Expectation-Maximization (EM)
Clustering
• The goal of maximum likelihood estimation (MLE) is to choose the parameters θ that maximize the likelihood, that is,

$\theta^{\ast} = \arg\max_{\theta} P(D \mid \theta) = \arg\max_{\theta} \prod_{j=1}^{n} f(x_j)$

where $f(x) = \sum_{i=1}^{k} f_i(x)\, P(C_i)$ is the mixture density
• General idea
§ Start with an initial estimate of the parameter vector
§ Iteratively rescore the patterns against the mixture density produced by the parameter vector
§ The rescored patterns are then used to update the parameter estimates
§ Patterns are assigned to the cluster (mixture component) in which their scores place them
The EM (Expectation Maximization)
Algorithm
• Initially, randomly assign k cluster centers
• Iteratively refine the clusters based on two steps
• Expectation step
§ Assign each data point xi to cluster Ck with probability

$P(C_k \mid x_i) = \frac{p(x_i \mid C_k)\, P(C_k)}{\sum_{j=1}^{k} p(x_i \mid C_j)\, P(C_j)}$

where p(xi | Ck) follows the normal distribution
§ This step calculates the probability of cluster membership of xi for each Ck
• Maximization step
§ The model parameters are re-estimated from the updated probabilities; for instance, for the mean,

$\mu_k = \frac{\sum_{i} x_i\, P(C_k \mid x_i)}{\sum_{i} P(C_k \mid x_i)}$
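As a compact illustration, here is a sketch of EM for a mixture of k univariate Gaussians; the general case uses multivariate normals with covariance matrices, and packages such as mclust implement it properly. Variable names are illustrative and degenerate variances are not handled:

# EM sketch for a 1-D Gaussian mixture on the data vector x
em_gaussian <- function(x, k, iters = 50) {
  n <- length(x)
  mu <- sample(x, k); s2 <- rep(var(x), k); prior <- rep(1/k, k)
  for (t in 1:iters) {
    # E-step: posterior probability of each cluster for each point
    w <- sapply(1:k, function(i) prior[i] * dnorm(x, mu[i], sqrt(s2[i])))
    w <- w / rowSums(w)
    # M-step: re-estimate the parameters from the soft assignments
    Nk    <- colSums(w)
    mu    <- colSums(w * x) / Nk
    s2    <- colSums(w * (x - matrix(mu, n, k, byrow = TRUE))^2) / Nk
    prior <- Nk / n
  }
  list(mean = mu, var = s2, prior = prior)
}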
Run the Python notebooks for the
algorithms included in this lecture
Examples using R
k-Means Clustering in R
set.seed(1234)
# random generated points
x<-rnorm(12, mean=rep(1:3,each=4), sd=0.2)
y<-rnorm(12, mean=rep(c(1,2,1),each=4), sd=0.2)
plot(x,y,pch=19,cex=2,col="blue")
# put the points in a data frame and run k-means with k = 3
d <- data.frame(x,y)
km <- kmeans(d, 3)
names(km)
# plot the points and overlay the cluster centers
plot(x,y,pch=19,cex=2,col="blue")
points(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
k-Means Clustering in R
# start from explicitly generated random centroids
km <- kmeans(d, centers=cbind(runif(3,0,3),runif(3,0,2)))
plot(x,y,pch=19,cex=2,col="blue")
points(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
Evaluation on k-Means & Number of
Clusters
###
### Evaluate clustering in kmeans using elbow/knee analysis
###
library(foreign)
library(GMD)
iris = read.arff("iris.arff")
# init two vectors that will contain the evaluation
# in terms of within and between sum of squares
plot_wss = rep(0,12)
plot_bss = rep(0,12)
# evaluate every clustering
for(i in 1:12)
{
  cl <- kmeans(iris[,1:4], i)
  plot_wss[i] <- cl$tot.withinss
  plot_bss[i] <- cl$betweenss
}
Evaluation on k-Means & Number of
Clusters
# plot the results
x = 1:12
plot(x, y=plot_bss, main="Within/Between Cluster Sum-of-square", cex=2,
pch=18, col="blue", xlab="Number of Clusters", ylab="Evaluation",
ylim=c(0,700))
lines(x, plot_bss, col="blue")
par(new=TRUE)
plot(x, y=plot_wss, cex=2, pch=19, col="red", ylab="", xlab="",
ylim=c(0,700))
lines(x,plot_wss, col="red");
Elbow & Knee Analysis

[Figure: within- and between-cluster sum of squares plotted against the number of clusters, showing the elbow/knee.]
Software Packages
http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/K-Means
http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/Expectation_Maximization_(EM)

More Related Content

What's hot

Decision trees & random forests
Decision trees & random forestsDecision trees & random forests
Decision trees & random forests
SC5.io
 
Ensemble methods in machine learning
Ensemble methods in machine learningEnsemble methods in machine learning
Ensemble methods in machine learning
SANTHOSH RAJA M G
 
[Solutions] data mining question paper 2018 tutorialsduniya.com
[Solutions] data mining question paper 2018   tutorialsduniya.com[Solutions] data mining question paper 2018   tutorialsduniya.com
[Solutions] data mining question paper 2018 tutorialsduniya.com
TutorialsDuniya.com
 
Faster R-CNN - PR012
Faster R-CNN - PR012Faster R-CNN - PR012
Faster R-CNN - PR012
Jinwon Lee
 
Mean shift and Hierarchical clustering
Mean shift and Hierarchical clustering Mean shift and Hierarchical clustering
Mean shift and Hierarchical clustering
Yan Xu
 
Decision trees and random forests
Decision trees and random forestsDecision trees and random forests
Decision trees and random forests
Debdoot Sheet
 
Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...
Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...
Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...
Simplilearn
 
Replacing Your Cache with ScyllaDB
Replacing Your Cache with ScyllaDBReplacing Your Cache with ScyllaDB
Replacing Your Cache with ScyllaDB
ScyllaDB
 
SLIQ
SLIQSLIQ
Deep Learning for Recommender Systems RecSys2017 Tutorial
Deep Learning for Recommender Systems RecSys2017 Tutorial Deep Learning for Recommender Systems RecSys2017 Tutorial
Deep Learning for Recommender Systems RecSys2017 Tutorial
Alexandros Karatzoglou
 
Machine Learning for Recommender Systems MLSS 2015 Sydney
Machine Learning for Recommender Systems MLSS 2015 SydneyMachine Learning for Recommender Systems MLSS 2015 Sydney
Machine Learning for Recommender Systems MLSS 2015 Sydney
Alexandros Karatzoglou
 
Clustering
ClusteringClustering
The cutoff phenomenon in diffusion processes
The cutoff phenomenon in diffusion processesThe cutoff phenomenon in diffusion processes
The cutoff phenomenon in diffusion processes
Carlo Lancia
 
Introduction to NOSQL databases
Introduction to NOSQL databasesIntroduction to NOSQL databases
Introduction to NOSQL databases
Ashwani Kumar
 
Multiclass classification of imbalanced data
Multiclass classification of imbalanced dataMulticlass classification of imbalanced data
Multiclass classification of imbalanced data
SaurabhWani6
 
Introduction to XGBoost
Introduction to XGBoostIntroduction to XGBoost
Introduction to XGBoost
Joonyoung Yi
 
Apriori and Eclat algorithm in Association Rule Mining
Apriori and Eclat algorithm in Association Rule MiningApriori and Eclat algorithm in Association Rule Mining
Apriori and Eclat algorithm in Association Rule MiningWan Aezwani Wab
 
K means clustering
K means clusteringK means clustering
K means clustering
Ahmedasbasb
 

What's hot (20)

Decision trees & random forests
Decision trees & random forestsDecision trees & random forests
Decision trees & random forests
 
Ensemble methods in machine learning
Ensemble methods in machine learningEnsemble methods in machine learning
Ensemble methods in machine learning
 
[Solutions] data mining question paper 2018 tutorialsduniya.com
[Solutions] data mining question paper 2018   tutorialsduniya.com[Solutions] data mining question paper 2018   tutorialsduniya.com
[Solutions] data mining question paper 2018 tutorialsduniya.com
 
Faster R-CNN - PR012
Faster R-CNN - PR012Faster R-CNN - PR012
Faster R-CNN - PR012
 
Mean shift and Hierarchical clustering
Mean shift and Hierarchical clustering Mean shift and Hierarchical clustering
Mean shift and Hierarchical clustering
 
Decision trees and random forests
Decision trees and random forestsDecision trees and random forests
Decision trees and random forests
 
Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...
Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...
Random Forest Algorithm - Random Forest Explained | Random Forest In Machine ...
 
Replacing Your Cache with ScyllaDB
Replacing Your Cache with ScyllaDBReplacing Your Cache with ScyllaDB
Replacing Your Cache with ScyllaDB
 
SLIQ
SLIQSLIQ
SLIQ
 
Deep Learning for Recommender Systems RecSys2017 Tutorial
Deep Learning for Recommender Systems RecSys2017 Tutorial Deep Learning for Recommender Systems RecSys2017 Tutorial
Deep Learning for Recommender Systems RecSys2017 Tutorial
 
No sql
No sqlNo sql
No sql
 
Machine Learning for Recommender Systems MLSS 2015 Sydney
Machine Learning for Recommender Systems MLSS 2015 SydneyMachine Learning for Recommender Systems MLSS 2015 Sydney
Machine Learning for Recommender Systems MLSS 2015 Sydney
 
Clustering
ClusteringClustering
Clustering
 
The cutoff phenomenon in diffusion processes
The cutoff phenomenon in diffusion processesThe cutoff phenomenon in diffusion processes
The cutoff phenomenon in diffusion processes
 
SPADE -
SPADE - SPADE -
SPADE -
 
Introduction to NOSQL databases
Introduction to NOSQL databasesIntroduction to NOSQL databases
Introduction to NOSQL databases
 
Multiclass classification of imbalanced data
Multiclass classification of imbalanced dataMulticlass classification of imbalanced data
Multiclass classification of imbalanced data
 
Introduction to XGBoost
Introduction to XGBoostIntroduction to XGBoost
Introduction to XGBoost
 
Apriori and Eclat algorithm in Association Rule Mining
Apriori and Eclat algorithm in Association Rule MiningApriori and Eclat algorithm in Association Rule Mining
Apriori and Eclat algorithm in Association Rule Mining
 
K means clustering
K means clusteringK means clustering
K means clustering
 

Similar to DMTM Lecture 13 Representative based clustering

DMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based ClusteringDMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based Clustering
Pier Luca Lanzi
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clustering
Pier Luca Lanzi
 
Training machine learning k means 2017
Training machine learning k means 2017Training machine learning k means 2017
Training machine learning k means 2017
Iwan Sofana
 
Selection K in K-means Clustering
Selection K in K-means ClusteringSelection K in K-means Clustering
Selection K in K-means ClusteringJunghoon Kim
 
Data Mining Lecture_7.pptx
Data Mining Lecture_7.pptxData Mining Lecture_7.pptx
Data Mining Lecture_7.pptx
Subrata Kumer Paul
 
DMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringDMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical Clustering
Pier Luca Lanzi
 
clustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdfclustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdf
p_manimozhi
 
Mathematics online: some common algorithms
Mathematics online: some common algorithmsMathematics online: some common algorithms
Mathematics online: some common algorithms
Mark Moriarty
 
Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25
MapR Technologies
 
Pattern recognition binoy k means clustering
Pattern recognition binoy  k means clusteringPattern recognition binoy  k means clustering
Pattern recognition binoy k means clustering
108kaushik
 
Advanced database and data mining & clustering concepts
Advanced database and data mining & clustering conceptsAdvanced database and data mining & clustering concepts
Advanced database and data mining & clustering concepts
NithyananthSengottai
 
Oxford 05-oct-2012
Oxford 05-oct-2012Oxford 05-oct-2012
Oxford 05-oct-2012
Ted Dunning
 
machine learning - Clustering in R
machine learning - Clustering in Rmachine learning - Clustering in R
machine learning - Clustering in R
Sudhakar Chavan
 
ACM 2013-02-25
ACM 2013-02-25ACM 2013-02-25
ACM 2013-02-25
Ted Dunning
 
Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford
MapR Technologies
 
Clustering.pdf
Clustering.pdfClustering.pdf
Clustering.pdf
nadimhossain24
 
Sudoku solver
Sudoku solverSudoku solver
Sudoku solver
Pankti Fadia
 
Clustering.pptx
Clustering.pptxClustering.pptx
Clustering.pptx
Mukul Kumar Singh Chauhan
 
3.Unsupervised Learning.ppt presenting machine learning
3.Unsupervised Learning.ppt presenting machine learning3.Unsupervised Learning.ppt presenting machine learning
3.Unsupervised Learning.ppt presenting machine learning
PriyankaRamavath3
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3
Nandhini S
 

Similar to DMTM Lecture 13 Representative based clustering (20)

DMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based ClusteringDMTM 2015 - 08 Representative-Based Clustering
DMTM 2015 - 08 Representative-Based Clustering
 
DMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clusteringDMTM Lecture 12 Hierarchical clustering
DMTM Lecture 12 Hierarchical clustering
 
Training machine learning k means 2017
Training machine learning k means 2017Training machine learning k means 2017
Training machine learning k means 2017
 
Selection K in K-means Clustering
Selection K in K-means ClusteringSelection K in K-means Clustering
Selection K in K-means Clustering
 
Data Mining Lecture_7.pptx
Data Mining Lecture_7.pptxData Mining Lecture_7.pptx
Data Mining Lecture_7.pptx
 
DMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical ClusteringDMTM 2015 - 07 Hierarchical Clustering
DMTM 2015 - 07 Hierarchical Clustering
 
clustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdfclustering_hierarchical ckustering notes.pdf
clustering_hierarchical ckustering notes.pdf
 
Mathematics online: some common algorithms
Mathematics online: some common algorithmsMathematics online: some common algorithms
Mathematics online: some common algorithms
 
Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25Clustering - ACM 2013 02-25
Clustering - ACM 2013 02-25
 
Pattern recognition binoy k means clustering
Pattern recognition binoy  k means clusteringPattern recognition binoy  k means clustering
Pattern recognition binoy k means clustering
 
Advanced database and data mining & clustering concepts
Advanced database and data mining & clustering conceptsAdvanced database and data mining & clustering concepts
Advanced database and data mining & clustering concepts
 
Oxford 05-oct-2012
Oxford 05-oct-2012Oxford 05-oct-2012
Oxford 05-oct-2012
 
machine learning - Clustering in R
machine learning - Clustering in Rmachine learning - Clustering in R
machine learning - Clustering in R
 
ACM 2013-02-25
ACM 2013-02-25ACM 2013-02-25
ACM 2013-02-25
 
Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford Fast Single-pass K-means Clusterting at Oxford
Fast Single-pass K-means Clusterting at Oxford
 
Clustering.pdf
Clustering.pdfClustering.pdf
Clustering.pdf
 
Sudoku solver
Sudoku solverSudoku solver
Sudoku solver
 
Clustering.pptx
Clustering.pptxClustering.pptx
Clustering.pptx
 
3.Unsupervised Learning.ppt presenting machine learning
3.Unsupervised Learning.ppt presenting machine learning3.Unsupervised Learning.ppt presenting machine learning
3.Unsupervised Learning.ppt presenting machine learning
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3
 

More from Pier Luca Lanzi

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi
Pier Luca Lanzi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei Videogiochi
Pier Luca Lanzi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning Welcome
Pier Luca Lanzi
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
Pier Luca Lanzi
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di apertura
Pier Luca Lanzi
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018
Pier Luca Lanzi
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparation
Pier Luca Lanzi
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data exploration
Pier Luca Lanzi
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph mining
Pier Luca Lanzi
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text mining
Pier Luca Lanzi
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rules
Pier Luca Lanzi
 
DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluation
Pier Luca Lanzi
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
Pier Luca Lanzi
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 Clustering
Pier Luca Lanzi
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensembles
Pier Luca Lanzi
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethods
Pier Luca Lanzi
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rules
Pier Luca Lanzi
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision trees
Pier Luca Lanzi
 
DMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationDMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluation
Pier Luca Lanzi
 

More from Pier Luca Lanzi (20)

11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi11 Settembre 2021 - Giocare con i Videogiochi
11 Settembre 2021 - Giocare con i Videogiochi
 
Breve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei VideogiochiBreve Viaggio al Centro dei Videogiochi
Breve Viaggio al Centro dei Videogiochi
 
Global Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning WelcomeGlobal Game Jam 19 @ POLIMI - Morning Welcome
Global Game Jam 19 @ POLIMI - Morning Welcome
 
Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018Data Driven Game Design @ Campus Party 2018
Data Driven Game Design @ Campus Party 2018
 
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
GGJ18 al Politecnico di Milano - Presentazione che precede la presentazione d...
 
GGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di aperturaGGJ18 al Politecnico di Milano - Presentazione di apertura
GGJ18 al Politecnico di Milano - Presentazione di apertura
 
Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018Presentation for UNITECH event - January 8, 2018
Presentation for UNITECH event - January 8, 2018
 
DMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparationDMTM Lecture 20 Data preparation
DMTM Lecture 20 Data preparation
 
DMTM Lecture 19 Data exploration
DMTM Lecture 19 Data explorationDMTM Lecture 19 Data exploration
DMTM Lecture 19 Data exploration
 
DMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph miningDMTM Lecture 18 Graph mining
DMTM Lecture 18 Graph mining
 
DMTM Lecture 17 Text mining
DMTM Lecture 17 Text miningDMTM Lecture 17 Text mining
DMTM Lecture 17 Text mining
 
DMTM Lecture 16 Association rules
DMTM Lecture 16 Association rulesDMTM Lecture 16 Association rules
DMTM Lecture 16 Association rules
 
DMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluationDMTM Lecture 15 Clustering evaluation
DMTM Lecture 15 Clustering evaluation
 
DMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clusteringDMTM Lecture 14 Density based clustering
DMTM Lecture 14 Density based clustering
 
DMTM Lecture 11 Clustering
DMTM Lecture 11 ClusteringDMTM Lecture 11 Clustering
DMTM Lecture 11 Clustering
 
DMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensemblesDMTM Lecture 10 Classification ensembles
DMTM Lecture 10 Classification ensembles
 
DMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethodsDMTM Lecture 09 Other classificationmethods
DMTM Lecture 09 Other classificationmethods
 
DMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rulesDMTM Lecture 08 Classification rules
DMTM Lecture 08 Classification rules
 
DMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision treesDMTM Lecture 07 Decision trees
DMTM Lecture 07 Decision trees
 
DMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluationDMTM Lecture 06 Classification evaluation
DMTM Lecture 06 Classification evaluation
 

Recently uploaded

Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
beazzy04
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chips
GeoBlogs
 
The Art Pastor's Guide to Sabbath | Steve Thomason
The Art Pastor's Guide to Sabbath | Steve ThomasonThe Art Pastor's Guide to Sabbath | Steve Thomason
The Art Pastor's Guide to Sabbath | Steve Thomason
Steve Thomason
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Thiyagu K
 
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
AzmatAli747758
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
joachimlavalley1
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
JosvitaDsouza2
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
siemaillard
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
BhavyaRajput3
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
Jheel Barad
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
MIRIAMSALINAS13
 
Sectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdfSectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdf
Vivekanand Anglo Vedic Academy
 
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
Nguyen Thanh Tu Collection
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
DeeptiGupta154
 

Recently uploaded (20)

Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chips
 
The Art Pastor's Guide to Sabbath | Steve Thomason
The Art Pastor's Guide to Sabbath | Steve ThomasonThe Art Pastor's Guide to Sabbath | Steve Thomason
The Art Pastor's Guide to Sabbath | Steve Thomason
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
 
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...Cambridge International AS  A Level Biology Coursebook - EBook (MaryFosbery J...
Cambridge International AS A Level Biology Coursebook - EBook (MaryFosbery J...
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
 
Sectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdfSectors of the Indian Economy - Class 10 Study Notes pdf
Sectors of the Indian Economy - Class 10 Study Notes pdf
 
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
 

DMTM Lecture 13 Representative based clustering

  • 1. Prof. Pier Luca Lanzi Representative-Based Clustering Data Mining andText Mining (UIC 583 @ Politecnico di Milano)
  • 2. Prof. Pier Luca Lanzi Readings • Mining of Massive Datasets (Chapter 7) • Data Mining and Analysis (Section 13.3) 2
  • 3. Prof. Pier Luca Lanzi How can we represent clusters?
  • 4. Prof. Pier Luca Lanzi Representation-Based Algorithms • Given a dataset of N instances, and a desired number of clusters k, this class of algorithms generates a partition C of N in k clusters {C1, C2, …, Ck} • For each cluster there is a point that summarizes the cluster • The common choice being the mean of the points in the cluster where ni = |Ci| and μi is the centroid 4
  • 5. Prof. Pier Luca Lanzi Representation-Based Algorithms • The goal of the clustering process is to select the best partition according to some scoring function • Sum of squared errors is the most common scoring function • The goal of the clustering process is thus to find • Brute-force Approach § Generate all the possible clustering C = {C1, C2, …, Ck} and select the best one. Unfortunately, there are O(kN/k!) possible partitions 5
  • 6. Prof. Pier Luca Lanzi k-Means Algorithm • Most widely known representative-based algorithm • Assumes an Euclidean space but can be easily extended to the non-Euclidean case • Employs a greedy iterative approaches that minimizes the SSE objective. Accordingly it can converge to a local optimal instead of a globally optimal clustering. 6
  • 7. Prof. Pier Luca Lanzi 1. Initially choose k points that are likely to be in different clusters; 2. Make these points the centroids of their clusters; 3. FOR each remaining point p DO Find the centroid to which p is closest; Add p to the cluster of that centroid; Adjust the centroid of that cluster to account for p; END;
  • 23. Prof. Pier Luca Lanzi Initializing Clusters • Solution 1 §Pick points that are as far away from one another as possible. • Variation of solution 1 Pick the first point at random; WHILE there are fewer than k points DO Add the point whose minimum distance from the selected points is as large as possible; END; • Solution 2 §Cluster a sample of the data, perhaps hierarchically, so there are k clusters. Pick a point from each cluster, perhaps that point closest to the centroid of the cluster. 23
  • 24. Prof. Pier Luca Lanzi Two different K-means Clusterings 24 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Sub-optimal Clustering -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Optimal Clustering Original Points
  • 25. Prof. Pier Luca Lanzi Importance of Choosing the Initial Centroids 25 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 xy Iteration 2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 3 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 4 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 5 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 6
  • 26. Prof. Pier Luca Lanzi Importance of Choosing the Initial Centroids 26 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 1 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 2 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 3 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 4 -2 -1.5 -1 -0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 3 x y Iteration 5
  • 27. Prof. Pier Luca Lanzi 27Why Selecting the Best Initial Centroids is Difficult? • If there are K ‘real’ clusters then the chance of selecting one centroid from each cluster is small. • Chance is relatively small when K is large • If clusters are the same size, n, then • For example, if K = 10, then probability = 10!/1010 = 0.00036 • Sometimes the initial centroids will readjust themselves in ‘right’ way, and sometimes they don’t • Consider an example of five pairs of clusters
  • 28. Prof. Pier Luca Lanzi Ten Clusters Example 28 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 1 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 2 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 3 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 4 Starting with two initial centroids in one cluster of each pair of clusters
  • 29. Prof. Pier Luca Lanzi 10 Clusters Example 29 Starting with some pairs of clusters having three initial centroids, while other have only one. 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 1 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 2 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 3 0 5 10 15 20 -6 -4 -2 0 2 4 6 8 x y Iteration 4
  • 30. Prof. Pier Luca Lanzi 30Dealing with the Initial Centroids Issue • Multiple runs, helps, but probability is not on your side • Sample and use another clustering method (hierarchical?) to determine initial centroids • Select more than k initial centroids and then select among these initial centroids • Postprocessing • Bisecting K-means, not as susceptible to initialization issues
  • 31. Prof. Pier Luca Lanzi 31Updating Centers Incrementally • In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid • An alternative is to update the centroids after each assignment (incremental approach) §Each assignment updates zero or two centroids §More expensive §Introduces an order dependency §Never get an empty cluster §Can use “weights” to change the impact
  • 32. Prof. Pier Luca Lanzi 32Pre-processing and Post-processing • Pre-processing §Normalize the data §Eliminate outliers • Post-processing §Eliminate small clusters that may represent outliers §Split ‘loose’ clusters, i.e., clusters with relatively high SSE §Merge clusters that are ‘close’ and that have relatively low SSE §These steps can be used during the clustering process
  • 33. Prof. Pier Luca Lanzi Bisecting K-means • Variant of K-means that can produce a partitional or a hierarchical clustering 33
  • 34. Prof. Pier Luca Lanzi Bisecting K-means Example 34
  • 35. Prof. Pier Luca Lanzi Limitation of k-Means 35
  • 36. Prof. Pier Luca Lanzi 36Limitations of K-means • K-means has problems when clusters are of differing §Sizes §Densities §Non-globular shapes • K-means has also problems when the data contains outliers.
  • 37. Prof. Pier Luca Lanzi Limitations of K-means: Differing Sizes 37 Original Points K-means (3 Clusters)
  • 38. Prof. Pier Luca Lanzi Limitations of K-means: Differing Density 38 Original Points K-means (3 Clusters)
  • 39. Prof. Pier Luca Lanzi Limitations of K-means: Non-globular Shapes 39 Original Points K-means (2 Clusters)
  • 40. Prof. Pier Luca Lanzi Overcoming K-means Limitations 40 Original Points K-means Clusters One solution is to use many clusters. Find parts of clusters, but need to put together.
  • 41. Prof. Pier Luca Lanzi Overcoming K-means Limitations 41 Original Points K-means Clusters
  • 42. Prof. Pier Luca Lanzi Overcoming K-means Limitations 42 Original Points K-means Clusters
  • 43. Prof. Pier Luca Lanzi 43K-Means Clustering Summary • Strength §Relatively efficient §Often terminates at a local optimum §The global optimum may be found using techniques such as: deterministic annealing and genetic algorithms • Weakness §Applicable only when mean is defined, then what about categorical data? §Need to specify k, the number of clusters, in advance §Unable to handle noisy data and outliers §Not suitable to discover clusters with non-convex shapes
  • 44. Prof. Pier Luca Lanzi 44K-Means Clustering Summary • Advantages §Simple, understandable §Items automatically assigned to clusters • Disadvantages §Must pick number of clusters before hand §All items forced into a cluster §Too sensitive to outliers
  • 45. Prof. Pier Luca Lanzi 45Variations of the K-Means Method • A few variants of the k-means which differ in §Selection of the initial k means §Dissimilarity calculations §Strategies to calculate cluster means • Handling categorical data: k-modes §Replacing means of clusters with modes §Using new dissimilarity measures to deal with categorical objects §Using a frequency-based method to update modes of clusters §A mixture of categorical and numerical data: k-prototype method
  • 46. Prof. Pier Luca Lanzi 46Variations of the K-Means Method • A few variants of the k-means which differ in §Selection of the initial k means §Dissimilarity calculations §Strategies to calculate cluster means • Handling categorical data: k-modes §Replacing means of clusters with modes §Using new dissimilarity measures to deal with categorical objects §Using a frequency-based method to update modes of clusters §A mixture of categorical and numerical data: k-prototype method
  • 47. Prof. Pier Luca Lanzi The BFR Algorithm
  • 48. Prof. Pier Luca Lanzi The BFR Algorithm • BFR [Bradley-Fayyad-Reina] is a variant of k-means designed to handle very large (disk-resident) data sets • Assumes that clusters are normally distributed around a centroid in a Euclidean space • Standard deviations in different dimensions may vary • Clusters are axis-aligned ellipses • Efficient way to summarize clusters (want memory required O(clusters) and not O(data)) 48
  • 49. Prof. Pier Luca Lanzi The BFR Algorithm • Points are read from disk one chunk at the time (so to fit into main memory) • Most points from previous memory loads are summarized by simple statistics • To begin, from the initial load we select the initial k centroids by some sensible approach §Take k random points §Take a small random sample and cluster optimally §Take a sample; pick a random point, and then k–1 more points, each as far from the previously selected points as possible 49
  • 50. Prof. Pier Luca Lanzi Three Classes of Points • Discard set (DS) §Points close enough to a centroid to be summarized • Compression set (CS) §Groups of points that are close together but not close to any existing centroid §These points are summarized, but not assigned to a cluster • Retained set (RS) §Isolated points waiting to be assigned to a compression set 50
  • 51. Prof. Pier Luca Lanzi The Status of BFR Algorithm 51 A cluster. Its points are in the DS. The centroid Compressed sets. Their points are in the CS. Points in the RS Discard set (DS): Close enough to a centroid to be summarized Compression set (CS): Summarized, but not assigned to a cluster Retained set (RS): Isolated points
  • 52. Prof. Pier Luca Lanzi Summarizing Sets of Points • For each cluster, the discard set (DS) is summarized by: • The number of points, N • The vector SUM, whose component SUM(i) is the sum of the coordinates of the points in the ith dimension • The vector SUMSQ, whose component SUMSQ(i) is the sum of the squares of the coordinates in the ith dimension 52 (figure: a cluster whose points are all in the DS, with its centroid)
  • 53. Prof. Pier Luca Lanzi Summarizing Points: Comments • 2d + 1 values represent a cluster of any size (d is the number of dimensions) • The average in each dimension (the centroid) can be calculated as SUM(i)/N • The variance of a cluster's discard set in dimension i is computed as SUMSQ(i)/N – (SUM(i)/N)² • The standard deviation is the square root of that variance (a sketch of this bookkeeping follows below) 53
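  A minimal Python sketch of the (N, SUM, SUMSQ) summary; the class name and interface are invented for illustration:

  class DSSummary:
      # summary of one discard set: 2d + 1 numbers (N, SUM, SUMSQ)
      def __init__(self, d):
          self.n = 0
          self.sum = [0.0] * d      # SUM(i): per-dimension sum
          self.sumsq = [0.0] * d    # SUMSQ(i): per-dimension sum of squares

      def add(self, point):
          self.n += 1
          for i, v in enumerate(point):
              self.sum[i] += v
              self.sumsq[i] += v * v

      def centroid(self):
          return [s / self.n for s in self.sum]

      def variance(self):
          # Var(i) = SUMSQ(i)/N - (SUM(i)/N)^2
          return [sq / self.n - (s / self.n) ** 2
                  for s, sq in zip(self.sum, self.sumsq)]

  ds = DSSummary(2)
  for p in [(1.0, 2.0), (1.2, 1.8), (0.8, 2.2)]:
      ds.add(p)
  print(ds.centroid(), ds.variance())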
  • 54. Prof. Pier Luca Lanzi Processing Data in the BFR Algorithm 1. First, all points that are “sufficiently close” to the centroid of a cluster are added to that cluster (by updating its summary statistics); each such point is then discarded 2. The points that are not “sufficiently close” to any centroid are clustered together with the points in the retained set; any main-memory algorithm, even a hierarchical one, can be used in this step 3. The miniclusters derived from the new points and the old retained set are merged (e.g., by using the same criteria used for hierarchical clustering) 4. Points that have been assigned to a cluster or a minicluster are written out and dropped from main memory. When the last chunk of data has been processed, the remaining miniclusters and the points in the retained set can either be labeled as outliers or assigned to the nearest centroid (as k-means would do). Note that for miniclusters we only have N, SUM and SUMSQ, so it is easier to use criteria based on variance and similar statistics; for example, we might combine two miniclusters if their combined variance is below some threshold. 54
  • 55. Prof. Pier Luca Lanzi “Sufficiently Close” • Two approaches have been proposed to determine whether a point p is sufficiently close to a cluster • Approach 1: add p to a cluster if it has the centroid closest to p and it is very unlikely that, after all the points have been processed, some other cluster centroid will be found to be nearer to p • Approach 2: measure the probability that, if p belongs to a cluster, it would be found as far from the centroid as it is § This is where the assumption that clusters contain normally distributed points aligned with the axes of the space is used 55
  • 56. Prof. Pier Luca Lanzi Mahalanobis Distance • It is used to decide whether a point is close enough to a cluster • It is the distance between a point and the centroid of a cluster, normalized by the standard deviation of the cluster in each dimension • Given p = (p1, …, pd) and c = (c1, …, cd), the Mahalanobis distance between p and c is computed as d(p, c) = sqrt( ((p1–c1)/σ1)² + … + ((pd–cd)/σd)² ), where σi is the standard deviation of the cluster in dimension i • We assign p to the cluster with the least Mahalanobis distance from p, provided that the distance is below a certain threshold; a threshold of 4 means that we have only about one chance in a million of not including a point that belongs to the cluster 56
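  Combining the formula with the DS summaries, the assignment test can be sketched as follows (a minimal Python sketch; the threshold value of 4 comes from the slide, everything else — names, toy clusters — is illustrative):

  import math

  def mahalanobis(p, centroid, std):
      # sqrt of the sum of squared per-dimension normalized deviations
      return math.sqrt(sum(((pi - ci) / si) ** 2
                           for pi, ci, si in zip(p, centroid, std)))

  def assign_point(p, clusters, threshold=4.0):
      # clusters: {name: (centroid, per-dimension std)}; assign p to the
      # nearest cluster in Mahalanobis distance if within the threshold,
      # otherwise return None (the point stays in the retained set)
      name, d = min(((k, mahalanobis(p, c, s)) for k, (c, s) in clusters.items()),
                    key=lambda t: t[1])
      return name if d < threshold else None

  clusters = {"A": ([0.0, 0.0], [1.0, 1.0]), "B": ([10.0, 10.0], [0.5, 0.5])}
  print(assign_point([0.5, -0.3], clusters))  # -> "A"
  print(assign_point([5.0, 5.0], clusters))   # -> None (too far from both)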
  • 57. Prof. Pier Luca Lanzi k-Means for Arbitrary Shapes (the CURE algorithm)
  • 58. Prof. Pier Luca Lanzi The CURE Algorithm • Problem with BFR/k-means: §They assume clusters are normally distributed in each dimension §And the axes are fixed: ellipses at an angle are not OK • CURE (Clustering Using REpresentatives): §Assumes a Euclidean distance §Allows clusters to assume any shape §Uses a collection of representative points to represent clusters 58
  • 59. Prof. Pier Luca Lanzi k-means, BFR… and these? (figure: cluster shapes that k-means and BFR cannot capture)
  • 60. Prof. Pier Luca Lanzi (figure: scatterplot of salary vs. age for humanities (h) and engineering (e) employees)
  • 61. Prof. Pier Luca Lanzi (figure: scatterplot of salary vs. age for humanities (h) and engineering (e) employees)
  • 62. Prof. Pier Luca Lanzi Starting CURE – Pass 1 of 2 • Pick a random sample of points that fits into main memory • Cluster the sample points to create the initial clusters (e.g., using hierarchical clustering) • Pick representative points: §For each cluster, pick k representative points, as dispersed as possible §Create synthetic representative points by moving the k points toward the centroid of the cluster (e.g., by 20% of the distance; see the sketch below) 62
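  A minimal Python sketch of the representative-point step; one common variant starts from the point farthest from the centroid, and all names and the toy cluster are illustrative:

  import math

  def pick_representatives(cluster, k, shrink=0.2):
      # greedily pick k well-scattered points, then move each one a
      # fraction 'shrink' (e.g., 20%) of the way toward the centroid
      d = len(cluster[0])
      centroid = [sum(p[i] for p in cluster) / len(cluster) for i in range(d)]
      reps = [max(cluster, key=lambda p: math.dist(p, centroid))]
      while len(reps) < min(k, len(cluster)):
          reps.append(max(cluster,
                          key=lambda p: min(math.dist(p, r) for r in reps)))
      return [tuple(r[i] + shrink * (centroid[i] - r[i]) for i in range(d))
              for r in reps]

  cluster = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 2.0)]
  print(pick_representatives(cluster, 3))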
  • 63. Prof. Pier Luca Lanzi (figure: the salary vs. age scatterplot with the representative points of each cluster)
  • 64. Prof. Pier Luca Lanzi (figure: the same scatterplot with the synthetic representative points moved toward the centroids)
  • 65. Prof. Pier Luca Lanzi Starting CURE – Pass 2 of 2 • Rescan the whole dataset (from secondary memory) and, for each point p, place p in the “closest” cluster, that is, the cluster with the representative point closest to p (see the sketch below) 65
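  Pass 2 then amounts to a nearest-representative lookup; a minimal Python sketch (brute-force search, illustrative names):

  import math

  def place(p, reps_by_cluster):
      # reps_by_cluster: {cluster_id: [representative points]}
      # assign p to the cluster owning the representative closest to p
      return min(reps_by_cluster,
                 key=lambda c: min(math.dist(p, r) for r in reps_by_cluster[c]))

  reps_by_cluster = {"left": [(0.2, 0.2), (0.8, 0.8)], "right": [(5.0, 5.0)]}
  print(place((1.0, 1.2), reps_by_cluster))  # -> "left"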
  • 66. Prof. Pier Luca Lanzi Expectation Maximization
  • 67. Prof. Pier Luca Lanzi Expectation-Maximization (EM) Clustering • k-means assigns each point to only one cluster (hard assignment) • The approach can be extended to a soft assignment of points to clusters, so that each point has a probability of belonging to each cluster • We assume that each cluster Ci is characterized by a multivariate normal distribution and is thus identified by § The mean vector μi § The covariance matrix Σi • A clustering is identified by a parameter vector θ = {μ1, Σ1, P(C1), …, μk, Σk, P(Ck)}, where the P(Ci) are the prior probabilities of the clusters and sum up to one 67
  • 68. Prof. Pier Luca Lanzi Expectation-Maximization (EM) Clustering • The goal of maximum likelihood estimation (MLE) is to choose the parameters θ that maximize the likelihood of the data, that is, θ* = argmaxθ P(D|θ) • General idea § Start with an initial estimate of the parameter vector § Iteratively rescore the patterns against the mixture density produced by the parameter vector § Use the rescored patterns to update the parameter estimates § A pattern belongs to the cluster whose component density gives it the highest score 68
  • 69. Prof. Pier Luca Lanzi The EM (Expectation Maximization) Algorithm • Initially, randomly assign k cluster centers • Iteratively refine the clusters based on two steps • Expectation step § Assign each data point xi to cluster Ck with probability P(Ck|xi) = p(xi|Ck)P(Ck) / Σj p(xi|Cj)P(Cj), where p(xi|Ck) follows the normal distribution § This step calculates the probability of cluster membership of xi for each Ck • Maximization step § The model parameters are re-estimated from the updated probabilities § For instance, the mean is updated as μk = Σi P(Ck|xi) xi / Σi P(Ck|xi) (see the sketch below) 69
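  To make the two steps concrete, here is a minimal one-dimensional Python sketch (univariate Gaussians with invented data and initialization; in higher dimensions the mean and variance become μi and Σi):

  import math

  def gauss(x, mu, var):
      return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

  def em_step(data, mus, variances, priors):
      k = len(mus)
      # E step: responsibilities P(Cj|xi) ~ p(xi|Cj) P(Cj), normalized per point
      resp = []
      for x in data:
          w = [gauss(x, mus[j], variances[j]) * priors[j] for j in range(k)]
          z = sum(w)
          resp.append([wj / z for wj in w])
      # M step: re-estimate means, variances, and priors from the responsibilities
      nk = [sum(r[j] for r in resp) for j in range(k)]
      mus = [sum(r[j] * x for r, x in zip(resp, data)) / nk[j] for j in range(k)]
      variances = [max(sum(r[j] * (x - mus[j]) ** 2
                           for r, x in zip(resp, data)) / nk[j], 1e-6)
                   for j in range(k)]
      priors = [nk[j] / len(data) for j in range(k)]
      return mus, variances, priors

  data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
  params = ([0.0, 4.0], [1.0, 1.0], [0.5, 0.5])
  for _ in range(20):
      params = em_step(data, *params)
  print(params)  # the means should approach roughly 1.0 and 5.0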
  • 70. Prof. Pier Luca Lanzi Run the Python notebooks for the algorithms included in this lecture
  • 71. Prof. Pier Luca Lanzi Examples using R
  • 72. Prof. Pier Luca Lanzi k-Means Clustering in R 72

  set.seed(1234)
  # randomly generated points around three centers
  x <- rnorm(12, mean=rep(1:3, each=4), sd=0.2)
  y <- rnorm(12, mean=rep(c(1,2,1), each=4), sd=0.2)
  plot(x, y, pch=19, cex=2, col="blue")
  # put the points in a data frame and run k-means with k=3
  d <- data.frame(x, y)
  km <- kmeans(d, 3)
  names(km)
  # replot the data and overlay the centroids with points(),
  # so that the axes of the two layers stay aligned
  plot(x, y, pch=19, cex=2, col="blue")
  points(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
  • 73. Prof. Pier Luca Lanzi k-Means Clustering in R 73

  # start from other, randomly generated centroids; pass them via
  # centers= (which also determines k, so the extra 3 must not be passed)
  km <- kmeans(d, centers=cbind(runif(3,0,3), runif(3,0,2)))
  plot(x, y, pch=19, cex=2, col="blue")
  points(km$centers[,1], km$centers[,2], pch=19, cex=2, col="red")
  • 74. Prof. Pier Luca Lanzi Evaluation of k-Means & the Number of Clusters 74

  ###
  ### Evaluate clustering in kmeans using elbow/knee analysis
  ###
  library(foreign)   # provides read.arff
  library(GMD)

  iris = read.arff("iris.arff")

  # init two vectors that will contain the evaluation
  # in terms of within and between sum of squares
  plot_wss = rep(0, 12)
  plot_bss = rep(0, 12)

  # evaluate the clustering for k = 1, ..., 12
  for(i in 1:12) {
    cl <- kmeans(iris[,1:4], i)
    plot_wss[i] <- cl$tot.withinss
    plot_bss[i] <- cl$betweenss
  }
  • 75. Prof. Pier Luca Lanzi Evaluation of k-Means & the Number of Clusters 75

  # plot the results: between-cluster SS in blue, within-cluster SS in red
  x = 1:12
  plot(x, y=plot_bss, main="Within/Between Cluster Sum-of-Squares",
       cex=2, pch=18, col="blue",
       xlab="Number of Clusters", ylab="Evaluation", ylim=c(0,700))
  lines(x, plot_bss, col="blue")
  par(new=TRUE)
  plot(x, y=plot_wss, cex=2, pch=19, col="red",
       ylab="", xlab="", ylim=c(0,700))
  lines(x, plot_wss, col="red")
  • 76. Prof. Pier Luca Lanzi Elbow & Knee Analysis 76
  • 77. Prof. Pier Luca Lanzi http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/K-Means http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Clustering/Expectation_Maximization_(EM) Software Packages