1. Clustering Methods with R
Akira Murakami
Department of English Language and Applied Linguistics
University of Birmingham
a.murakami@bham.ac.uk
2. Cluster Analysis
• Cluster analysis finds groups in data.
• Objects in the same cluster are similar to each other.
• Objects in different clusters are dissimilar.
• A variety of algorithms have been proposed.
• Saying “I ran a cluster analysis” therefore does not mean much on its own; the specific algorithm and settings matter.
• Used both in data mining and as a statistical analysis technique.
• It is an unsupervised machine learning technique.
3. Cluster Analysis in SLA
• In SLA, clustering has been applied to identify the typology of
learners’
• motivational profiles (Csizér & Dörnyei, 2005),
• ability/aptitude profiles (Rysiewicz, 2008),
• developmental profiles based on international posture, L2
willingness to communicate, and frequency of communication
in L2 (Yashima & Zenuk-Nishide, 2008),
• cognitive and achievement profiles based on L1 achievement,
intelligence, L2 aptitude, and L2 proficiency (Sparks, Patton,
& Ganschow, 2012).
4. Similarity Measure
• Cluster analysis groups observations that are
“similar”. But how do we measure similarity?
• Suppose that we are interested in clustering L1
groups according to their accuracy on different
linguistic features (i.e., the accuracy profile of each
L1 group); a toy example of such data follows below.
• As the measure of accuracy, we use an index that
takes a value between 0 and 1, such as the TLU
(target-like use) score.
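As a concrete illustration (not from the original slides), a data frame like the following could hold such an accuracy profile in R; the L1 groups, feature names, and scores are all invented:

# Hypothetical accuracy profiles: rows = L1 groups, columns = linguistic features
# (values are invented TLU-style scores between 0 and 1)
acc <- data.frame(
  articles   = c(0.62, 0.81, 0.58, 0.77),
  past_tense = c(0.70, 0.85, 0.64, 0.80),
  plural_s   = c(0.55, 0.90, 0.60, 0.88),
  row.names  = c("Japanese", "German", "Korean", "Spanish")
)
acc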
11. Distance Measures
• Things are simple in 1D, but get more complicated in 2D or above.
• Different measures of distance
• Euclidean distance
• Manhattan distance
• Maximum distance
• Mahalanobis distance
• Hamming distance
• etc. (a small worked example follows below)
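As a quick illustration, some of these distances can be computed by hand in R for two hypothetical accuracy profiles:

# Two hypothetical accuracy profiles (TLU-style scores on three features)
p1 <- c(0.62, 0.70, 0.55)
p2 <- c(0.81, 0.85, 0.90)

sqrt(sum((p1 - p2)^2))   # Euclidean distance
sum(abs(p1 - p2))        # Manhattan (city-block) distance
max(abs(p1 - p2))        # maximum (Chebyshev) distance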
32. dist()
• In R, the dist() function is used to obtain dissimilarity
matrices (a short sketch follows below).
• Practicals
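A minimal sketch of dist(), assuming the hypothetical acc data frame introduced earlier; the method argument selects the distance measure:

d <- dist(acc)                   # Euclidean distance by default
as.matrix(d)                     # view as a full distance matrix
dist(acc, method = "manhattan")  # Manhattan distance instead
dist(acc, method = "maximum")    # maximum distance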
33. Clustering Methods
• Now that we know the concept of similarity, we
move on to clustering objects based on that
similarity.
• A number of methods have been proposed for
clustering. We will look at the following two:
• agglomerative hierarchical cluster analysis
• k-means
35. Agglomerative Hierarchical Cluster Analysis
• In agglomerative hierarchical clustering,
observations are clustered in a bottom-up manner.
1. Each observation forms an independent cluster
at the beginning.
2. The two clusters that are most similar are
merged into one.
3. Step 2 is repeated until all the observations are
merged into a single cluster.
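A minimal sketch of agglomerative hierarchical clustering in base R, again assuming the hypothetical acc data frame; hclust() performs the bottom-up merging and cutree() extracts cluster memberships for a chosen number of clusters:

d  <- dist(acc)      # dissimilarity matrix (Euclidean)
hc <- hclust(d)      # agglomerative clustering (complete linkage by default)
plot(hc)             # dendrogram showing the merge history
cutree(hc, k = 2)    # cluster memberships for a two-cluster solution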
36. Linkage Criteria
• How do we calculate the similarity between clusters
that each include multiple observations?
• Ward’s criterion (Ward’s method)
• complete-linkage
• single-linkage
• etc.
38. Ward’s Method
• Ward’s method aims for the smallest within-cluster
variance.
• At each iteration, the two clusters whose merger yields
the smallest increase in the sum of squared errors are
merged.
• Sum of Squared Errors (SSE): the sum of the squared
differences between individual data points and the mean
of their cluster (a small R sketch follows below).
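A small sketch, assuming the hypothetical acc data frame, of Ward's criterion via hclust() (the "ward.D2" option takes ordinary, unsquared Euclidean distances) together with a by-hand SSE computation for the resulting solution:

# Ward's criterion; "ward.D2" expects unsquared Euclidean distances
hc_ward <- hclust(dist(acc), method = "ward.D2")
cl      <- cutree(hc_ward, k = 2)

# within-cluster sum of squared errors (SSE) of this two-cluster solution
sse <- sum(sapply(split(acc, cl), function(x)
  sum(scale(x, scale = FALSE)^2)))  # squared deviations from each cluster mean
sse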
57. Linkage Criteria
• How do we know the similarity between clusters
that each include multiple observations?
• Ward’s criterion (Ward’s method)
• complete-linkage
• single-linkage
• etc.
62. Potential Pitfall of Hierarchical Clustering
• Hierarchical clustering assumes a hierarchical (nested) structure among clusters.
• Let us say that our data included two L1 groups over three
proficiency levels.
• If we group the data into two clusters, the best split may be
between the two L1 groups.
• If we group them into three clusters, the best groups may be by
proficiency groups.
• In this case, the three-cluster solution is not nested within the two-
cluster solution, and hierarchical clustering may fail to identify
the two clusters.
64. k-means Clustering
• K-means clustering does not assume a hierarchical
structure of clusters.
• i.e., no parent/child clusters
• Analysts need to specify the number of clusters (k) in advance (see the sketch below).
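A minimal k-means sketch, assuming the hypothetical acc data frame; the analyst supplies the number of clusters through the centers argument:

km <- kmeans(acc, centers = 2)   # the analyst specifies k via 'centers'
km$cluster                       # cluster membership of each L1 group
km$centers                       # cluster centroids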
71. k-Means Clustering
• The optimal number of clusters depends on the intended use.
• There is no absolutely “correct” or “wrong” choice of the number of
clusters.
• The underlying optimization problem is NP-hard.
• The algorithm only approximates solutions.
• Randomness is involved in the solution: you may get different
solutions every time you run it (see the sketch below).
• It assumes convex clusters.
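Because of the random initialization, a seed and multiple random starts are commonly used; a minimal sketch, again with the hypothetical acc data:

set.seed(123)                                # make the random initialization reproducible
km <- kmeans(acc, centers = 2, nstart = 25)  # 25 random starts; the best solution is kept
km$tot.withinss                              # total within-cluster sum of squares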
73. Concave
[Figure: scatterplot (x and y from 0 to 1) of data points arranged in a concave, non-convex pattern; the kind of cluster shape that k-means, which assumes convex clusters, handles poorly.]
75. Within-Learner Centering
• The mean accuracy value of each learner was subtracted from all the
data points of the learner.
• For example, let's suppose the mean sentence length (MSL) of
Learner A over 10 writings was
• {4.0, 4.2, 4.4, 4.6, 4.8, 5.0, 5.2, 5.4, 5.6, 5.8}
and that of Learner B was
• {8.0, 8.2, 8.4, 8.6, 8.8, 9.0, 9.2, 9.4, 9.6, 9.8}
• The difference in MSL is identical in the two learners (+0.2 per writing).
• But the absolute MSL is widely different.
76. Within-Learner Centering
• The mean value of Learner A (4.9) is subtracted from all the data
points of Learner A:
• → {-0.90, -0.70, -0.50, -0.30, -0.10, 0.10, 0.30, 0.50, 0.70,
0.90}.
• Similarly, the mean value of Learner B (8.90) is subtracted from
all the data points of Learner B:
• → {-0.90, -0.70, -0.50, -0.30, -0.10, 0.10, 0.30, 0.50, 0.70,
0.90}.
• These two learners now have exactly the same set of values, so they
are guaranteed to be clustered into the same group (an R sketch follows below).
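A minimal sketch of within-learner centering in R, using a hypothetical long-format data frame (one row per learner per writing); ave() returns each learner's own mean:

# Hypothetical long-format data: one row per learner per writing
dat <- data.frame(
  learner = rep(c("A", "B"), each = 10),
  msl     = c(seq(4.0, 5.8, by = 0.2),   # Learner A's MSL over 10 writings
              seq(8.0, 9.8, by = 0.2))   # Learner B's MSL over 10 writings
)

# Within-learner centering: subtract each learner's own mean
dat$msl_centered <- dat$msl - ave(dat$msl, dat$learner)
dat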
78. Cluster Validation/Evaluation
• We obtained clusters and explored them, but how do we
know how good the clusters are, or whether they
indeed capture signal rather than just noise?
• Are the clusters ‘real’?
• Did the earlier clustering capture differences in the true
learning curves, or just random noise?
79. Two Types of Validation
• External Validation
• Internal Validation
80. External Validation
• If there is a systematic pattern between clusters
and some external criterion, such as the proficiency
or L1 of learners, then what the cluster analysis
captured is unlikely to be just noise (a toy cross-tabulation is sketched below).
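A hedged sketch of one way to check such a pattern in R, with invented cluster memberships and L1 labels; a cross-tabulation (and, data permitting, a chi-squared test) relates the clusters to the external criterion:

# Hypothetical cluster memberships and L1s for ten learners
cl <- c(1, 1, 2, 2, 1, 2, 1, 2, 2, 1)
l1 <- c("JP", "JP", "DE", "DE", "JP", "DE", "JP", "DE", "JP", "DE")

table(cl, l1)              # cross-tabulate clusters against the external criterion
chisq.test(table(cl, l1))  # association beyond chance? (use with care for small samples)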
83. Silhouette Width
• Intuitively, the silhouette value is large if within-
cluster dissimilarity is small (i.e., learners within
each cluster have similar developmental
trajectories) and between-cluster dissimilarity is
large (i.e., learners in different clusters have
different learning curves).
• A silhouette value is assigned to each data point (i.e.,
learner), and all the silhouette values are averaged
to measure how distinct the clusters of a given
cluster analysis are.
84. Silhouette Width
• Let’s say there are three clusters, A through C.
• Let’s further say that learner i is a member of Cluster A.
• Let a(i) be the average distance between learner i and all the
other learners that belong to the same cluster.
• We also calculate the average distances
1. between learner i and all the learners that belong to
Cluster B
2. between learner i and all the learners that belong to
Cluster C
• Let b(i) be the smaller of the two above (1-2).
• s(i) = (b(i) - a(i)) / max(a(i), b(i))
85. Silhouette Width
[Figure: scatterplot (x and y from 0 to 1) of the clustered data points used in the silhouette example.]
86. Silhouette Width
[Figure: the same scatterplot of clustered data points (silhouette example, continued).]
87. Silhouette Width
[Figure: scatterplot showing the distances between learner i and the other members of its own cluster.]
→ Average = 0.022 (the value of a(i))
88. Silhouette Width
[Figure: scatterplot showing the distances between learner i and the members of one of the other clusters.]
→ Average = 0.191
89. Silhouette Width
[Figure: scatterplot showing the distances between learner i and the members of the remaining cluster.]
→ Average = 0.240
90. Silhouette Width
• a(i) = 0.022
• b(i) = 0.191 (the smaller of the other two)
• s(i) = (b(i) - a(i)) / max(a(i), b(i))
• s(i) = (0.191 - 0.022) / 0.191 = 0.882
• This is repeated for all the data points.
• Goodness of clustering: mean silhouette width across
all the data points.
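A minimal sketch of computing silhouette widths in R with the cluster package, assuming the hypothetical acc data frame from earlier:

library(cluster)                          # provides silhouette()

km  <- kmeans(acc, centers = 2)
sil <- silhouette(km$cluster, dist(acc))  # s(i) for every observation
mean(sil[, "sil_width"])                  # mean silhouette width of the solution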
91. Bootstrapping
• Now that we have a measure of how good our
clustering is, the next question is whether it is good
enough to be considered non-random.
• We can address this question through a technique
called bootstrapping.
• The idea is similar to the usual hypothesis-testing
procedure.
• We obtain the null distribution of the silhouette value
and see where our value falls.
92. Bootstrapping
• The more specific procedure is as follows (a rough R sketch follows after the steps):
1. For each learner, we sample 30 writings (with replacement).
2. We run a k-means cluster analysis on the data obtained in
1 and calculate the mean silhouette value.
3. Steps 1 and 2 are repeated, e.g., 10,000 times, resulting in 10,000
mean silhouette values, which we treat as the null
distribution.
4. We examine whether the 95% range of the distribution in 3 includes our
observed mean silhouette value.
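A rough sketch of this resampling procedure in R, using random numbers in place of the real learner-by-writing data; it is a simplified illustration of the logic, not the original analysis code:

library(cluster)

# Hypothetical data: rows = learners, columns = 30 writings (one value per writing)
set.seed(1)
writings <- matrix(rnorm(50 * 30), nrow = 50)

# One iteration: resample each learner's writings with replacement, re-cluster,
# and record the mean silhouette width of the resulting solution
boot_sil <- function(x, k = 3) {
  resampled <- t(apply(x, 1, sample, replace = TRUE))
  km <- kmeans(resampled, centers = k, nstart = 10)
  mean(silhouette(km$cluster, dist(resampled))[, "sil_width"])
}

# Null distribution of mean silhouette widths (10,000 in the slides; fewer here)
null_sil <- replicate(1000, boot_sil(writings))
quantile(null_sil, c(0.025, 0.975))  # compare the observed value against this range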
93. Bootstrapping
• The idea here is that we effectively randomize the order
of the writings within individual learners and follow the
same procedure as in our main analysis.
• Since the order of writings is random, there should not
be any systematic pattern of development.
• The clusters obtained in this manner thus capture noise
alone. We calculate the mean silhouette value on these
noise-only, random clusters, and obtain its distribution by
repeating the whole procedure a large number of times.