This document summarizes a presentation on cluster analysis given by Tekendra Nath Yogi. It defines cluster analysis and describes several clustering methods and algorithms, including k-means clustering. It also discusses applications of cluster analysis in fields like business intelligence, image recognition, web search, and biology. Requirements for effective clustering algorithms are outlined.
Introduction
• A cluster is a group of similar objects.
• Clustering is the process of finding groups of objects such that the objects in a
group are similar (or related) to one another and different from (or
unrelated to) the objects in other groups.
Contd…
• Dissimilarities and similarities are assessed based on the attribute values
describing the objects and often involve distance measures such as the
Euclidean distance (the formula is given after this list).
• Clustering falls under the category of unsupervised machine learning
because it uses unlabeled input data and lets the algorithm act on that
information without guidance.
• Different clustering methods may generate different clusters on the same data
set.
• The partitioning is not performed by humans, but by the clustering algorithm.
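For reference, the Euclidean distance between two d-dimensional numeric objects p = (p1, ..., pd) and q = (q1, ..., qd) is:

```latex
\[
\mathrm{dist}(p, q) = \sqrt{\sum_{i=1}^{d} (p_i - q_i)^2}
\]
```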
Contd…
• A good clustering algorithm aims to create clusters whose:
– intra-cluster similarity is high (the data points inside a cluster are
similar to one another), and
– inter-cluster similarity is low (each cluster holds data that is not similar to
the data in other clusters).
Some Applications of Clustering
• Cluster analysis has been widely used in numerous applications
such as:
– In business intelligence
– In image recognition
– In web search
– In outlier detection
– In biology
Contd..
• In business intelligence:
– Clustering can help marketers discover distinct groups in their customer
bases and characterize customer groups based on purchasing patterns so
that, for example, advertising can be appropriately targeted.
Contd..
• In image recognition:
– Clustering can be used to discover clusters or “subclasses” in handwritten
character recognition systems.
– For example, some people may write the digit 2 with a small circle at the
bottom-left part, while others may not. Clustering can be used to determine
subclasses for “2,” each of which represents a variation in the way 2 can be
written.
Contd..
• In web search:
– Document grouping: clustering can be used to organize the search results
into groups and present the results in a concise and easily accessible way.
– Clustering Weblog data can discover groups of similar access patterns.
Contd..
• In outlier detection:
– Clustering can also be used for outlier detection, where outliers (values
that are “far away” from any cluster) may be more interesting than
common cases.
– Applications of outlier detection include the detection of credit card fraud
and the monitoring of criminal activities in electronic commerce.
Contd..
• In biology:
– Clustering can be used to derive plant and animal taxonomies,
categorize genes with similar functionality, and gain insight into structures
inherent in populations.
Requirements of Clustering in Data Mining
• The following are typical requirements of clustering in data
mining.
– Scalability
– Ability to deal with different types of attributes
– Discovery of clusters with arbitrary shape
– Minimal requirements for domain knowledge to determine input parameters
– Ability to deal with noisy data
– Incremental clustering and insensitivity to input order
– Capability of clustering high-dimensional data
– Constraint-based clustering
– Interpretability and usability
Contd..
• Scalability:
– Many clustering algorithms work well on small data sets containing fewer
than several hundred data objects; however, a large database may contain
millions of objects.
– Clustering on a sample of a given large data set may lead to biased results.
– Highly scalable clustering algorithms are needed.
Contd..
• Ability to deal with different types of attributes:
– Many algorithms are designed to cluster interval-based (numerical) data.
– However, applications may require clustering other types of data, such as
binary, categorical (nominal), and ordinal data, or mixtures of these data
types.
Contd..
• Discovery of clusters with arbitrary shape:
– Many clustering algorithms determine clusters based on Euclidean
distance measures.
– Algorithms based on such distance measures tend to find spherical
clusters with similar size and density.
– However, a cluster could be of any shape.
– It is important to develop algorithms that can detect clusters of arbitrary
shape.
Contd..
• Minimal requirements for domain knowledge to determine
input parameters:
– Many clustering algorithms require users to input certain parameters in
cluster analysis (such as the number of desired clusters).
– The clustering results can be quite sensitive to input parameters.
– Parameters are often difficult to determine, especially for data sets
containing high-dimensional objects.
– This not only burdens users, but it also makes the quality of clustering
difficult to control.
Contd..
• Ability to deal with noisy data:
– Most real-world databases contain outliers or missing, unknown, or
erroneous data.
– Some clustering algorithms are sensitive to such data and may lead to
clusters of poor quality.
Contd..
• Incremental clustering and insensitivity to the order of input
records:
– Some clustering algorithms cannot incorporate newly inserted data (i.e.,
database updates) into existing clustering structures and, instead, must
determine a new clustering from scratch.
– Some clustering algorithms are sensitive to the order of input data. That is,
given a set of data objects, such an algorithm may return dramatically
different clusterings depending on the order of presentation of the input
objects.
– It is important to develop incremental clustering algorithms and algorithms
that are insensitive to the order of input.
Contd..
• High dimensionality:
– A database or a data warehouse can contain numerous dimensions or
attributes.
– Many clustering algorithms are good at handling low-dimensional data,
involving only two to three dimensions.
– Human eyes are good at judging the quality of clustering for up to three
dimensions.
– Finding clusters of data objects in high dimensional space is challenging,
especially considering that such data can be sparse and highly skewed.
Contd..
• Constraint-based clustering:
– Real-world applications may need to perform clustering under various
kinds of constraints.
– Suppose that your job is to choose the locations for a given number of
new Automated Teller Machines (ATMs) in a city.
– To decide upon this, you may cluster households while considering
constraints such as the city’s rivers and highway networks, and the type
and number of customers per cluster.
– A challenging task is to find groups of data with good clustering behavior
that satisfy specified constraints.
Contd..
• Interpretability and usability:
– Users expect clustering results to be interpretable, comprehensible, and
usable.
– That is, clustering may need to be tied to specific semantic interpretations
and applications.
– It is important to study how an application goal may influence the
selection of clustering features and methods.
Major Clustering Methods:
• In general, the major fundamental clustering methods can be
classified into the following categories:
– Partitioning Methods
– Hierarchical Methods
– Density-Based Methods
– Grid-Based Methods
Contd..
• Partitioning Methods:
– Given a data set, D, of n objects, and k, the number of clusters to form, a
partitioning method constructs k partitions of the data, where each partition
represents a cluster and k <= n.
– That is, it classifies the data into k groups, which together satisfy the following
requirements:
• Each group must contain at least one object, and
• Each object must belong to exactly one group.
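In set notation, the k groups C1, ..., Ck produced by such a (hard) partitioning satisfy:

```latex
\[
C_i \neq \emptyset, \qquad \bigcup_{i=1}^{k} C_i = D, \qquad C_i \cap C_j = \emptyset \;\; (i \neq j)
\]
```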
Contd…
– A partitioning method creates an initial partitioning. It then uses an iterative relocation
technique that attempts to improve the partitioning by moving objects from one group
to another.
– The general criterion of a good partitioning is that objects in the same cluster are close
or related to each other, whereas objects of different clusters are far apart or very
different.
k-Means: A Centroid-Based Technique
• A centroid-based partitioning technique uses the centroid of a cluster, ci, to
represent that cluster.
• The centroid of a cluster is its center point, such as the mean of the objects (or
points) assigned to the cluster.
• The distance between an object p and ci, the representative of the
cluster, is measured by dist(p, ci),
• where dist(·, ·) is the Euclidean distance between two points.
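The quality of such a partitioning is commonly measured by the within-cluster variation, i.e., the sum of squared distances between each object p in cluster Ci and that cluster's centroid ci; this is the quantity that k-means tries to reduce:

```latex
\[
E = \sum_{i=1}^{k} \sum_{p \in C_i} \mathrm{dist}(p, c_i)^2
\]
```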
Contd..
• The k-means algorithm defines the centroid of a cluster as the mean value of
the points within the cluster. It proceeds as follows:
– First, it randomly selects k of the objects in D, each of which initially
represents a cluster mean or center.
– For each of the remaining objects, an object is assigned to the cluster to
which it is the most similar, based on the Euclidean distance between the
object and the cluster mean.
– The k-means algorithm then iteratively improves the within-cluster
variation. For each cluster, it computes the new mean using the objects
assigned to the cluster in the previous iteration. All the objects are then
reassigned using the updated means as the new cluster centers.
– The iterations continue until the assignment is stable, that is, the clusters
formed in the current round are the same as those formed in the previous
round.
Contd..
• Algorithm:
– The k-means algorithm for partitioning, where each cluster’s center is
represented by the mean value of the objects in the cluster.
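A minimal NumPy sketch of the procedure described above (function and variable names are illustrative, and empty clusters are not handled):

```python
import numpy as np

def k_means(X, k, n_iter=100, seed=0):
    """Minimal k-means; X is an (n, d) array of points, k the number of clusters."""
    rng = np.random.default_rng(seed)
    # 1. Arbitrarily choose k objects from the data as the initial cluster centers.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # 2. Assign each object to the cluster whose mean is nearest (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 3. Recompute each cluster mean from the objects currently assigned to it.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. Stop when the means no longer change (the assignment is stable).
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Example usage on the 6-point data set used in the next example:
X = np.array([[1, 1.5], [1, 4.5], [2, 1.5], [2, 3.5], [3, 2.5], [3, 4]])
labels, centers = k_means(X, k=2)
```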
Contd…
• Example 1: Cluster the following instances of the given data (2-dimensional
form) with the help of the k-means algorithm (take k = 2):

Instance   X     Y
1          1     1.5
2          1     4.5
3          2     1.5
4          2     3.5
5          3     2.5
6          3     4
Contd…
• Solution:
– Given, the number of clusters to be created is k = 2.
– Initially, choose two points randomly as the initial cluster centers; say objects
1 and 3 are chosen,
– i.e., c1 = (1, 1.5) and c2 = (2, 1.5).
Contd…
• Iteration 1:
– Now calculating similarity by using the Euclidean distance measure:
– dist(c1, 2) = √((1 - 1)² + (1.5 - 4.5)²) = 3
– dist(c2, 2) = √((2 - 1)² + (1.5 - 4.5)²) = 3.162
– Here, dist(c1, 2) < dist(c2, 2)
– So, data point 2 belongs to c1.
Contd…
– dist(c1, 4) = √((1 - 2)² + (1.5 - 3.5)²) = 2.236
– dist(c2, 4) = √((2 - 2)² + (1.5 - 3.5)²) = 2
– Here, dist(c2, 4) < dist(c1, 4)
– So, data point 4 belongs to c2.
– dist(c1, 5) = √((1 - 3)² + (1.5 - 2.5)²) = 2.236
– dist(c2, 5) = √((2 - 3)² + (1.5 - 2.5)²) = 1.414
– Here, dist(c2, 5) < dist(c1, 5)
– So, data point 5 belongs to c2.
Contd…
– dist(c1, 6) = √((1 - 3)² + (1.5 - 4)²) = 3.2
– dist(c2, 6) = √((2 - 3)² + (1.5 - 4)²) = 2.7
– Here, dist(c2, 6) < dist(c1, 6)
– So, data point 6 belongs to c2.
– The resulting clusters after the 1st iteration are:
C1 = {1, 2}, C2 = {3, 4, 5, 6}
Contd…
• Iteration 2:
• Now calculating centroid for each cluster:
– Centroid for c1 = ((1 + 1)/2, (1.5 + 4.5)/2) = (1, 3)
– Centroid for c2 = ((2 + 2 + 3 + 3)/4, (1.5 + 3.5 + 2.5 + 4)/4) = (2.5, 2.875)
– Now, again calculating similarity:
– dist(c1, 1) = √((1 - 1)² + (3 - 1.5)²) = 1.5
– dist(c2, 1) = √((2.5 - 1)² + (2.875 - 1.5)²) = 2.035
– Here, dist(c1, 1) < dist(c2, 1)
– So, data point 1 belongs to c1.
Contd…
– dist(c1, 2) = √((1 - 1)² + (3 - 4.5)²) = 1.5
– dist(c2, 2) = √((2.5 - 1)² + (2.875 - 4.5)²) = 2.21
– Here, dist(c1, 2) < dist(c2, 2)
– So, data point 2 belongs to c1.
– dist(c1, 3) = √((1 - 2)² + (3 - 1.5)²) = 1.8
– dist(c2, 3) = √((2.5 - 2)² + (2.875 - 1.5)²) = 1.463
– Here, dist(c2, 3) < dist(c1, 3)
– So, data point 3 belongs to c2.
Contd…
– dist(c1, 4) = √((1 - 2)² + (3 - 3.5)²) = 1.12
– dist(c2, 4) = √((2.5 - 2)² + (2.875 - 3.5)²) = 0.8
– Here, dist(c2, 4) < dist(c1, 4)
– So, data point 4 belongs to c2.
– dist(c1, 5) = √((1 - 3)² + (3 - 2.5)²) = 2.06
– dist(c2, 5) = √((2.5 - 3)² + (2.875 - 2.5)²) = 0.625
– Here, dist(c2, 5) < dist(c1, 5)
– So, data point 5 belongs to c2.
Contd…
– dist(c1, 6) = √((1 - 3)² + (3 - 4)²) = 2.236
– dist(c2, 6) = √((2.5 - 3)² + (2.875 - 4)²) = 1.231
– Here, dist(c2, 6) < dist(c1, 6)
– So, data point 6 belongs to c2.
– The resulting clusters after the 2nd iteration are:
C1 = {1, 2}, C2 = {3, 4, 5, 6}
– This is the same as iteration 1, so terminate.
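Assuming scikit-learn is available, the worked example can be cross-checked by seeding k-means with the same initial centers (objects 1 and 3):

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 1.5], [1, 4.5], [2, 1.5], [2, 3.5], [3, 2.5], [3, 4]])
init = np.array([[1, 1.5], [2, 1.5]])   # objects 1 and 3 as the initial centers

km = KMeans(n_clusters=2, init=init, n_init=1).fit(X)
print(km.labels_)            # expected grouping: {1, 2} versus {3, 4, 5, 6}
print(km.cluster_centers_)   # expected final centers: (1, 3) and (2.5, 2.875)
```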
Contd..
• Example 3: Cluster the following instances of the given data (2-dimensional
form) with the help of the k-means algorithm (take k = 2):

Instance   X     Y
1          1     2.5
2          1     4.5
3          2.5   3
4          2     1.5
5          4.5   1.5
6          4     5
Contd…
• Weakness of K-means:
– Applicable only when mean is defined.
– Need to specify k, the number of clusters, in advance.
– Unable to handle outliers.
Hierarchical clustering
• A hierarchical clustering method works by grouping data objects into a
hierarchy or “tree” of clusters.
• Representing data objects in the form of a hierarchy is useful for data
summarization and visualization.
Contd..
• Depending on whether the hierarchical decomposition is formed in a bottom-
up (merging) or top-down (splitting) fashion, a hierarchical clustering method
can be classified into two categories:
– Agglomerative Hierarchical Clustering and
– Divisive Hierarchical Clustering
Contd..
• Agglomerative Hierarchical Clustering:
– uses a bottom-up strategy.
– starts by letting each object form its own cluster and iteratively merges
clusters into larger and larger clusters, until all the objects are in a single
cluster or certain termination conditions (e.g., a desired number of clusters)
are satisfied.
– For the merging step, it finds the two clusters that are closest to each other
(according to some similarity measure), and combines the two to form one
cluster.
Contd..
• Example: a data set of five objects, {a, b, c, d, e}. Initially, AGNES
(AGglomerative NESting), the agglomerative method, places each object into
a cluster of its own. The clusters are then merged step-by-step according to
some criterion (e.g., minimum Euclidean distance).
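As a small illustration, and assuming SciPy is available, single-linkage agglomerative clustering can be run on made-up 2-D coordinates for the five objects a–e (the coordinates below are hypothetical, not from the slides):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical coordinates for the objects a, b, c, d, e.
points = np.array([[1.0, 1.0],   # a
                   [1.5, 1.2],   # b
                   [5.0, 5.0],   # c
                   [5.2, 4.8],   # d
                   [9.0, 1.0]])  # e

# Bottom-up merging using minimum Euclidean distance (single linkage).
Z = linkage(points, method='single', metric='euclidean')
print(Z)  # each row records the two clusters merged, their distance, and the new cluster size

# Cut the resulting hierarchy to obtain, e.g., two clusters.
print(fcluster(Z, t=2, criterion='maxclust'))
```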
Contd..
• Divisive hierarchical clustering :
– A divisive hierarchical clustering method employs a top-down strategy.
– It starts by placing all objects in one cluster, which is the hierarchy’s root.
– It then divides the root cluster into several smaller sub-clusters, and
recursively partitions those clusters into smaller ones.
– The partitioning process continues until each cluster at the lowest level
either contains only one object or the objects within a cluster are
sufficiently similar to each other.
Contd..
• Example: DIANA (DIvisive ANAlysis), a divisive hierarchical clustering
method:
– a data set of five objects, {a, b, c, d, e}. All the objects are used to form
one initial cluster. The cluster is split according to some principle such as
the maximum Euclidean distance between the closest neighboring objects
in the cluster. The cluster-splitting process repeats until, eventually, each
new cluster contains only a single object.
Contd..
• Agglomerative versus divisive hierarchical clustering:
– Organize objects into a hierarchy using a bottom-up or top-down strategy,
respectively.
– Agglomerative methods start with individual objects as clusters, which are
iteratively merged to form larger clusters.
– Conversely, divisive methods initially let all the given objects form one
cluster, which they iteratively split into smaller clusters.
Contd..
• Hierarchical clustering methods can encounter difficulties
regarding the selection of merge or split points.
– Such a decision is critical, because once a group of objects is merged or
split, the process at the next step will operate on the newly generated
clusters. It will neither undo what was done previously, nor perform object
swapping between clusters.
– Thus, merge or split decisions, if not well chosen, may lead to low-quality
clusters.
• Moreover, the methods do not scale well because each decision of merge or
split needs to examine and evaluate many objects or clusters.
Density-Based Methods
• Partitioning methods and hierarchical clustering are suitable for finding
spherical-shaped clusters.
• Moreover, they are also severely affected by the presence of noise and
outliers in the data.
• Unfortunately, real-life data contain:
– Clusters of arbitrary shape, such as oval, linear, s-shaped, etc.
– Many noise points
• Solution : Density based methods
Contd..
• Basic Idea behind Density based methods:
– Model clusters as dense regions in the data space, separated by sparse
regions.
• Major features:
– Discover clusters of arbitrary shape (e.g., oval, s-shaped, etc.)
– Handle noise
– Need density parameters as a termination condition
• E.g., DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
Density-Based Clustering: Background
• The Eps-neighborhood of a point p is the set of all points within distance Eps of p:
– N_Eps(p) = {q | dist(p, q) <= Eps}
• Two parameters:
– Eps: maximum radius of the neighborhood
– MinPts: minimum number of points in an Eps-neighborhood of that point
• If the number of points in the Eps-neighborhood of p is at least
MinPts, then p is called a core object.
[Figure: a core point p and a point q in its Eps-neighborhood, with MinPts = 5 and Eps = 1 cm]
Contd..
• Directly density-reachable:
– A point p is directly density-reachable from a point q wrt. Eps, MinPts if:
• 1) p belongs to N_Eps(q), and
• 2) q satisfies the core point condition: |N_Eps(q)| >= MinPts
[Figure: p directly density-reachable from the core point q, with MinPts = 5 and Eps = 1 cm]
Contd..
• Density-reachable:
– A point p is density-reachable from a point q wrt. Eps, MinPts if there is a
chain of points p1, ..., pn with p1 = q and pn = p such that p_{i+1} is directly
density-reachable from p_i.
[Figure: p density-reachable from q via an intermediate point p1]
Contd..
• Density-connected:
– A point p is density-connected to a point q wrt. Eps, MinPts if there is a
point o such that both p and q are density-reachable from o wrt. Eps and
MinPts.
[Figure: p and q density-connected through a point o]
Contd..
• Density = number of points within a specified radius (Eps).
• A point is a core point if it has at least a specified number of points (MinPts) within Eps:
– these are points that are in the interior of a cluster;
– the count includes the point itself.
• A border point is not a core point, but is in the neighborhood of a core point.
• A noise point is any point that is not a core point or a border point.
[Figure: example with MinPts = 7 showing core, border, and noise points]
DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
• To find the next cluster, DBSCAN randomly selects an unvisited object
from the remaining ones. The clustering process continues until all
objects are visited.
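Putting the definitions above together, the following is a minimal Python sketch of the procedure (names and structure are illustrative): DBSCAN grows a cluster from each unvisited core object by collecting all of its density-reachable points, and points reachable from no core object are labeled as noise.

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch; returns one cluster label per point, with -1 marking noise."""
    n = len(X)
    labels = np.full(n, -1)                 # -1 = noise / not yet assigned
    visited = np.zeros(n, dtype=bool)
    # Pairwise Euclidean distances and Eps-neighborhoods (each point counts itself).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dists[i] <= eps) for i in range(n)]

    cluster_id = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neighbors[i]) < min_pts:     # not a core point; stays noise unless reached later
            continue
        labels[i] = cluster_id              # start a new cluster from this core point
        seeds = list(neighbors[i])
        while seeds:                        # collect every density-reachable point
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster_id      # border or previously-noise point joins the cluster
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    seeds.extend(neighbors[j])   # j is also a core point: keep expanding
        cluster_id += 1
    return labels
```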
Contd..
• Example:
– If Eps is 2 and MinPts is 2, what are the clusters that DBSCAN would
discover with the following 8 examples: A1=(2,10), A2=(2,5), A3=(8,4),
A4=(5,8), A5=(7,5), A6=(6,4), A7=(1,2), A8=(4,9)?
• Solution:
– d(a, b) denotes the Euclidean distance between a and b. It is obtained
directly from the distance matrix, calculated as follows:
– d(a, b) = √((xb - xa)² + (yb - ya)²)
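A short NumPy snippet (illustrative, assuming NumPy is available) that builds this distance matrix and lists the Eps-neighborhoods for Eps = 2:

```python
import numpy as np

# The eight example points A1..A8.
A = np.array([[2, 10], [2, 5], [8, 4], [5, 8], [7, 5], [6, 4], [1, 2], [4, 9]], dtype=float)

# Pairwise Euclidean distance matrix d(a, b).
D = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=2)
print(np.round(D, 2))

# Eps-neighborhoods for Eps = 2 (each point counts itself).
eps = 2.0
for i, row in enumerate(D):
    nbrs = np.flatnonzero(row <= eps)
    print(f"N_Eps(A{i + 1}) =", [f"A{j + 1}" for j in nbrs])
```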
Advantages and Disadvantages of the DBSCAN Algorithm:
• Advantages:
– DBSCAN does not require one to specify the number of clusters in the
data a priori, as opposed to k-means.
– DBSCAN can find arbitrarily shaped clusters
– DBSCAN is robust to outliers.
– DBSCAN is mostly insensitive to the ordering of the points in the
database.
– The parameters minPts and ε can be set by a domain expert, if the data is
well understood.
Contd..
• Disadvantages:
– DBSCAN is not entirely deterministic: border points that are reachable
from more than one cluster can be part of either cluster, depending on the
order in which the data is processed. Fortunately, this situation does not arise
often and has little impact on the clustering result: DBSCAN is deterministic
on both core points and noise points.
– DBSCAN cannot cluster data sets well with large differences in densities,
since the minPts-ε combination cannot then be chosen appropriately for all
clusters.
– If the data and scale are not well understood, choosing a meaningful
distance threshold ε can be difficult.
Homework
• Explain the aims of cluster analysis.
• What is clustering? How is it different from supervised classification?
In what situations can clustering be useful?
• List and explain desired features of cluster analysis.
• Explain the different types of cluster analysis methods and discuss their
features.
• Describe the k-means algorithm and write its strengths and
weaknesses.
• Describe the features of hierarchical clustering methods. In what
situations are these methods useful?