This document provides an introduction and overview of document clustering techniques in information retrieval. It discusses motivations for clustering documents, such as improving search recall and organizing search results. It covers common clustering algorithms like K-means and hierarchical clustering, how they work, and considerations like choosing the number of clusters. The document uses examples and diagrams to illustrate clustering concepts and algorithms.
1. Introduction to Information Retrieval
Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Pandu Nayak and Prabhakar Raghavan
Lecture 12: Clustering
3. Introduction to Information Retrieval
What is clustering?
Clustering: the process of grouping a set of objects
into classes of similar objects
Documents within a cluster should be similar.
Documents from different clusters should be
dissimilar.
The commonest form of unsupervised learning
Unsupervised learning = learning from raw data, as
opposed to supervised data where a classification of
examples is given
A common and important task that finds many
applications in IR and other places
Ch. 16
4. Introduction to Information Retrieval
A data set with clear cluster structure
How would you design an algorithm for finding the three clusters in this case?
[Figure: a 2-D point set with three visually separable groups]
Ch. 16
5. Introduction to Information Retrieval
Applications of clustering in IR
Whole corpus analysis/navigation
Better user interface: search without typing
For improving recall in search applications
Better search results (like pseudo RF)
For better navigation of search results
Effective “user recall” will be higher
For speeding up vector space retrieval
Cluster-based retrieval gives faster search
Sec. 16.1
6. Introduction to Information Retrieval
Yahoo! Hierarchy isn’t clustering but is the kind
of output you want from clustering
[Figure: a fragment of the Yahoo! directory at www.yahoo.com/Science, with top-level categories such as agriculture, biology, physics, CS, and space, and subcategories including dairy, crops, agronomy, forestry, botany, evolution, cell, AI, HCI, magnetism, relativity, courses, craft, and missions]
7. Introduction to Information Retrieval
Google News: automatic clustering gives an
effective news presentation metaphor
9. Introduction to Information Retrieval
For visualizing a document collection and its
themes
Wise et al, “Visualizing the non-visual” PNNL
ThemeScapes, Cartia
[Mountain height = cluster size]
10. Introduction to Information Retrieval
For improving search recall
Cluster hypothesis - Documents in the same cluster behave similarly
with respect to relevance to information needs
Therefore, to improve search recall:
Cluster docs in corpus a priori
When a query matches a doc D, also return other docs in the
cluster containing D
Hope if we do this: The query “car” will also return docs containing
automobile
Because clustering grouped together docs containing car with
those containing automobile.
Why might this happen?
Sec. 16.1
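A minimal sketch of this retrieval trick in Python (the data structures below are illustrative assumptions, not part of the lecture):

def expand_with_clusters(matched_docs, cluster_of, members_of):
    # cluster_of: dict mapping doc id -> cluster id
    # members_of: dict mapping cluster id -> list of doc ids
    expanded = set(matched_docs)
    for d in matched_docs:
        expanded.update(members_of[cluster_of[d]])
    return expanded

# A doc matching "car" pulls in the other docs of its cluster,
# e.g. ones that only say "automobile".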
12. Introduction to Information Retrieval
Issues for clustering
Representation for clustering
Document representation
Vector space? Normalization?
Centroids aren’t length normalized
Need a notion of similarity/distance
How many clusters?
Fixed a priori?
Completely data driven?
Avoid “trivial” clusters - too large or small
If a cluster's too large, then for navigation purposes you've
wasted an extra user click without whittling down the set of
documents much.
Sec. 16.2
13. Introduction to Information Retrieval
Notion of similarity/distance
Ideal: semantic similarity.
Practical: term-statistical similarity
We will use cosine similarity.
Docs as vectors.
For many algorithms, easier to think in
terms of a distance (rather than similarity)
between docs.
We will mostly speak of Euclidean distance
But real implementations use cosine similarity
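To make term-statistical similarity concrete, here is a small sketch (mine, not the lecture's) of cosine similarity over sparse term-weight vectors; the toy weights are invented for illustration:

import math

def cosine(u, v):
    # u, v: sparse document vectors as {term: weight} dicts
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

d1 = {"car": 0.8, "engine": 0.5, "road": 0.3}
d2 = {"automobile": 0.7, "engine": 0.6, "road": 0.4}
print(cosine(d1, d2))  # only shared terms contribute: "car" vs "automobile" do not match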
14. Introduction to Information Retrieval
Clustering Algorithms
Flat algorithms
Usually start with a random (partial) partitioning
Refine it iteratively
K means clustering
(Model based clustering)
Hierarchical algorithms
Bottom-up, agglomerative
(Top-down, divisive)
15. Introduction to Information Retrieval
Hard vs. soft clustering
Hard clustering: Each document belongs to exactly one cluster
More common and easier to do
Soft clustering: A document can belong to more than one
cluster.
Makes more sense for applications like creating browsable
hierarchies
You may want to put a pair of sneakers in two clusters: (i) sports
apparel and (ii) shoes
You can only do that with a soft clustering approach.
We won’t do soft clustering today. See IIR 16.5, 18
16. Introduction to Information Retrieval
Partitioning Algorithms
Partitioning method: Construct a partition of n
documents into a set of K clusters
Given: a set of documents and the number K
Find: a partition of K clusters that optimizes the
chosen partitioning criterion
Globally optimal solutions are intractable for many
objective functions (you would have to exhaustively
enumerate all partitions)
Effective heuristic methods: K-means and K-
medoids algorithms
See also Kleinberg NIPS 2002 – impossibility for natural clustering
17. Introduction to Information Retrieval
K-Means
Assumes documents are real-valued vectors.
Clusters based on centroids (aka the center of gravity
or mean) of points in a cluster, c:
Reassignment of instances to clusters is based on
distance to the current cluster centroids.
(Or one can equivalently phrase it in terms of similarities)
μ(c) = (1/|c|) Σ_{x ∈ c} x
18. Introduction to Information Retrieval
K-Means Algorithm
Select K random docs {s1, s2,… sK} as seeds.
Until clustering converges (or other stopping criterion):
For each doc di:
Assign di to the cluster cj such that dist(di, sj) is minimal.
(Next, update the seeds to the centroid of each cluster)
For each cluster cj
sj = μ(cj)
Sec. 16.4
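A minimal Python sketch of the algorithm above (the convergence test and the names are my own choices, assuming dense vectors and Euclidean distance):

import random

def k_means(docs, K, max_iters=100):
    # docs: list of equal-length lists of floats (document vectors)
    seeds = [list(s) for s in random.sample(docs, K)]  # K random docs as seeds
    assignment = None
    for _ in range(max_iters):
        # Assignment step: each doc goes to the cluster with the nearest seed.
        new_assignment = [
            min(range(K), key=lambda j: sum((x - s) ** 2 for x, s in zip(d, seeds[j])))
            for d in docs
        ]
        if new_assignment == assignment:   # doc partition unchanged -> converged
            break
        assignment = new_assignment
        # Update step: move each seed to the centroid of its cluster.
        for j in range(K):
            members = [d for d, a in zip(docs, assignment) if a == j]
            if members:
                seeds[j] = [sum(xs) / len(members) for xs in zip(*members)]
    return assignment, seeds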
19. Introduction to Information Retrieval
K Means Example
(K=2)
Pick seeds
Reassign clusters
Compute centroids
Reassign clusters
Compute centroids
Reassign clusters
Converged!
[Figure: the corresponding sequence of 2-D plots, with x marking the centroids at each step]
Sec. 16.4
20. Introduction to Information Retrieval
Termination conditions
Several possibilities, e.g.,
A fixed number of iterations.
Doc partition unchanged.
Centroid positions don’t change.
Does this mean that the docs in a
cluster are unchanged?
Sec. 16.4
21. Introduction to Information Retrieval
Convergence
Why should the K-means algorithm ever reach a
fixed point?
A state in which clusters don’t change.
K-means is a special case of a general procedure
known as the Expectation Maximization (EM)
algorithm.
EM is known to converge.
Number of iterations could be large.
But in practice usually isn’t
Sec. 16.4
22. Introduction to Information Retrieval
Convergence of K-Means
Define goodness measure of cluster k as sum of
squared distances from cluster centroid:
Gk = Σi (di – ck)²   (sum over all di in cluster k)
G = Σk Gk   (note: lower-case k indexes clusters; upper-case K is the number of clusters)
Reassignment monotonically decreases G since
each vector is assigned to the closest centroid.
Sec. 16.4
23. Introduction to Information Retrieval
Convergence of K-Means
Recomputation monotonically decreases each Gk
since (mk is number of members in cluster k):
Σi (di – a)² reaches its minimum when:
Σi –2(di – a) = 0
Σi di = Σi a = mk a
a = (1/mk) Σi di = ck
K-means typically converges quickly
Sec. 16.4
24. Introduction to Information Retrieval
Time Complexity
Computing distance between two docs is O(M)
where M is the dimensionality of the vectors.
Reassigning clusters: O(KN) distance computations,
or O(KNM).
Computing centroids: Each doc gets added once to
some centroid: O(NM).
Assume these two steps are each done once for I
iterations: O(IKNM).
Sec. 16.4
25. Introduction to Information Retrieval
Seed Choice
Results can vary based on
random seed selection.
Some seeds can result in poor
convergence rate, or
convergence to sub-optimal
clusterings.
Select good seeds using a heuristic
(e.g., doc least similar to any
existing mean)
Try out multiple starting points
Initialize with the results of another
method.
Example showing sensitivity to seeds: in the figure
(six points A–F), starting with B and E as centroids
converges to {A,B,C} and {D,E,F}; starting with D
and F converges to {A,B,D,E} and {C,F}.
Sec. 16.4
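One way to implement the "doc least similar to any existing mean" heuristic above is farthest-first seeding; this sketch is an illustrative variant, not a prescription from the lecture:

def farthest_first_seeds(docs, K, dist):
    # dist: a distance function over two document vectors
    seeds = [docs[0]]                      # first seed: arbitrary (here, the first doc)
    while len(seeds) < K:
        # Next seed: the doc farthest from its nearest already-chosen seed.
        seeds.append(max(docs, key=lambda d: min(dist(d, s) for s in seeds)))
    return seeds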
26. Introduction to Information Retrieval
K-means issues, variations, etc.
Recomputing the centroid after every assignment
(rather than after all points are re-assigned) can
improve speed of convergence of K-means
Assumes clusters are spherical in vector space
Sensitive to coordinate changes, weighting etc.
Disjoint and exhaustive
Doesn’t have a notion of “outliers” by default
But can add outlier filtering
Sec. 16.4
Dhillon et al. ICDM 2002 – variation to fix some issues with small
document clusters
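A sketch of the per-assignment centroid update mentioned above (just the incremental bookkeeping, assuming each cluster keeps its size alongside its centroid):

def move_doc(doc, old_centroid, old_size, new_centroid, new_size):
    # Update both centroids in O(M) when doc leaves one cluster and joins another.
    # Assumes old_size > 1 (the source cluster does not become empty).
    old_c = [(c * old_size - x) / (old_size - 1) for c, x in zip(old_centroid, doc)]
    new_c = [(c * new_size + x) / (new_size + 1) for c, x in zip(new_centroid, doc)]
    return old_c, new_c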
27. Introduction to Information Retrieval
How Many Clusters?
Number of clusters K is given
Partition n docs into predetermined number of clusters
Finding the “right” number of clusters is part of the
problem
Given docs, partition into an “appropriate” number of
subsets.
E.g., for query results - ideal value of K not known up front
- though UI may impose limits.
Can usually take an algorithm for one flavor and
convert to the other.
28. Introduction to Information Retrieval
K not specified in advance
Say, the results of a query.
Solve an optimization problem: penalize having
lots of clusters
application dependent, e.g., compressed summary
of search results list.
Tradeoff between having more clusters (better
focus within each cluster) and having too many
clusters
29. Introduction to Information Retrieval
K not specified in advance
Given a clustering, define the Benefit for a
doc to be the cosine similarity to its
centroid
Define the Total Benefit to be the sum of
the individual doc Benefits.
Why is there always a clustering of Total Benefit n?
30. Introduction to Information Retrieval
Penalize lots of clusters
For each cluster, we have a Cost C.
Thus for a clustering with K clusters, the Total Cost is
KC.
Define the Value of a clustering to be =
Total Benefit - Total Cost.
Find the clustering of highest value, over all choices
of K.
Total benefit increases with increasing K. But can stop
when it doesn’t increase by “much”. The Cost term
enforces this.
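A sketch of the selection loop this implies, assuming we already have a k_means routine and a cosine similarity function (hypothetical helpers defined elsewhere) and an application-chosen per-cluster cost C:

def choose_k(docs, k_max, C, k_means, cosine):
    # Value(K) = Total Benefit - K*C, where Benefit(doc) = cosine(doc, its centroid).
    best_k, best_value = None, float("-inf")
    for K in range(1, k_max + 1):
        assignment, centroids = k_means(docs, K)
        total_benefit = sum(cosine(d, centroids[a]) for d, a in zip(docs, assignment))
        value = total_benefit - K * C
        if value > best_value:
            best_k, best_value = K, value
    return best_k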
31. Introduction to Information Retrieval
Hierarchical Clustering
Build a tree-based hierarchical taxonomy
(dendrogram) from a set of documents.
One approach: recursive application of a
partitional clustering algorithm.
[Figure: an example taxonomy: animal splits into vertebrate (fish, reptile, amphib., mammal) and invertebrate (worm, insect, crustacean)]
Ch. 17
32. Introduction to Information Retrieval
Dendrogram: Hierarchical Clustering
Clustering obtained
by cutting the
dendrogram at a
desired level: each
connected
component forms a
cluster.
33. Introduction to Information Retrieval
Hierarchical Agglomerative Clustering
(HAC)
Starts with each doc in a separate cluster
then repeatedly joins the closest pair of
clusters, until there is only one cluster.
The history of merging forms a binary tree
or hierarchy.
Sec. 17.1
Note: the resulting clusters are still “hard” and induce a partition
34. Introduction to Information Retrieval
Closest pair of clusters
Many variants to defining closest pair of clusters
Single-link
Similarity of the most cosine-similar (single-link)
Complete-link
Similarity of the “furthest” points, the least cosine-similar
Centroid
Clusters whose centroids (centers of gravity) are the most
cosine-similar
Average-link
Average cosine between pairs of elements
Sec. 17.2
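The linkage choices above can be made concrete with a deliberately naive HAC sketch (illustrative only; real implementations use the more careful bookkeeping discussed later):

def hac(sim, linkage=max):
    # sim: N x N matrix of pairwise similarities between items.
    # linkage=max -> single link; linkage=min -> complete link.
    clusters = [[i] for i in range(len(sim))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = linkage(sim[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or s > best[0]:
                    best = (s, a, b)
        _, a, b = best
        merges.append((list(clusters[a]), list(clusters[b])))  # record merge history
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges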
35. Introduction to Information Retrieval
Single Link Agglomerative Clustering
Use maximum similarity of pairs:
sim(ci, cj) = max_{x ∈ ci, y ∈ cj} sim(x, y)
Can result in "straggly" (long and thin) clusters
due to chaining effect.
After merging ci and cj, the similarity of the
resulting cluster to another cluster, ck, is:
sim(ci ∪ cj, ck) = max( sim(ci, ck), sim(cj, ck) )
Sec. 17.2
37. Introduction to Information Retrieval
Complete Link
Use minimum similarity of pairs:
sim(ci, cj) = min_{x ∈ ci, y ∈ cj} sim(x, y)
Makes "tighter," spherical clusters that are typically
preferable.
After merging ci and cj, the similarity of the resulting
cluster to another cluster, ck, is:
sim(ci ∪ cj, ck) = min( sim(ci, ck), sim(cj, ck) )
Sec. 17.2
39. Introduction to Information Retrieval
Computational Complexity
In the first iteration, all HAC methods need to
compute the similarity of all pairs of N initial instances,
which is O(N²).
In each of the subsequent N−2 merging iterations,
compute the distance between the most recently
created cluster and all other existing clusters.
In order to maintain an overall O(N²) performance,
computing similarity to each other cluster must be
done in constant time.
Often O(N³) if done naively or O(N² log N) if done more
cleverly
Sec. 17.2.1
40. Introduction to Information Retrieval
Group Average
Similarity of two clusters = average similarity of all pairs
within merged cluster.
Compromise between single and complete link.
Two options:
Averaged across all ordered pairs in the merged cluster
Averaged over all pairs between the two original clusters
No clear difference in efficacy
sim(ci, cj) = (1 / (|ci ∪ cj| (|ci ∪ cj| − 1))) Σ_{x ∈ ci ∪ cj} Σ_{y ∈ ci ∪ cj, y ≠ x} sim(x, y)
Sec. 17.3
41. Introduction to Information Retrieval
Computing Group Average Similarity
Always maintain sum of vectors in each cluster.
Compute similarity of clusters in constant time:
s(cj) = Σ_{x ∈ cj} x
sim(ci, cj) = [ (s(ci) + s(cj)) · (s(ci) + s(cj)) − (|ci| + |cj|) ] / [ (|ci| + |cj|) (|ci| + |cj| − 1) ]
Sec. 17.3
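A sketch of the constant-time computation above, assuming document vectors are unit length so that dot products equal cosine similarities (the helper name is mine):

def group_average_sim(sum_i, sum_j, n_i, n_j):
    # sum_i, sum_j: maintained vector sums of the two clusters; n_i, n_j: their sizes.
    total = [a + b for a, b in zip(sum_i, sum_j)]
    dot = sum(x * x for x in total)        # (s(ci)+s(cj)) . (s(ci)+s(cj))
    n = n_i + n_j
    return (dot - n) / (n * (n - 1))       # drop the n self-similarities, average over ordered pairs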
42. Introduction to Information Retrieval
What Is A Good Clustering?
Internal criterion: A good clustering will produce
high quality clusters in which:
the intra-class (that is, intra-cluster) similarity is
high
the inter-class similarity is low
The measured quality of a clustering depends on
both the document representation and the
similarity measure used
Sec. 16.3
43. Introduction to Information Retrieval
External criteria for clustering quality
Quality measured by its ability to discover some
or all of the hidden patterns or latent classes in
gold standard data
Assesses a clustering with respect to ground truth
… requires labeled data
Assume documents with C gold standard classes,
while our clustering algorithms produce K clusters,
ω1, ω2, …, ωK with ni members.
Sec. 16.3
44. Introduction to Information Retrieval
External Evaluation of Cluster Quality
Simple measure: purity, the ratio between the
number of documents of the dominant class in
cluster ωi and the size of cluster ωi
Biased because having n clusters maximizes
purity
Others are entropy of classes in clusters (or
mutual information between classes and
clusters)
purity(ωi) = (1/ni) max_{j ∈ C} nij
(nij is the number of documents of class j in cluster ωi)
Sec. 16.3
45. Introduction to Information Retrieval
[Figure: three example clusters, each a mix of documents from three classes]
Cluster I: Purity = 1/6 (max(5, 1, 0)) = 5/6
Cluster II: Purity = 1/6 (max(1, 4, 1)) = 4/6
Cluster III: Purity = 1/5 (max(2, 0, 3)) = 3/5
Purity example
Sec. 16.3
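A small sketch that recomputes the example's per-cluster purities (the class counts below are hard-coded from the example):

def purity(cluster_class_counts):
    # cluster_class_counts[i][j]: number of docs of class j in cluster i
    return [max(counts) / sum(counts) for counts in cluster_class_counts]

print(purity([[5, 1, 0], [1, 4, 1], [2, 0, 3]]))  # [5/6, 4/6, 3/5]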
46. Introduction to Information Retrieval
The Rand Index measures agreement between pair
decisions. Here RI = 0.68.
Number of document pairs:
Same class in ground truth, same cluster in clustering: 20
Same class in ground truth, different clusters in clustering: 24
Different classes in ground truth, same cluster in clustering: 20
Different classes in ground truth, different clusters in clustering: 72
Sec. 16.3
47. Introduction to Information Retrieval
Rand index and Cluster F-measure
RI = (A + D) / (A + B + C + D)
where A counts pairs in the same class and the same cluster,
B pairs in the same cluster but different classes,
C pairs in the same class but different clusters,
and D pairs in different classes and different clusters.
Compare with standard Precision and Recall:
P = A / (A + B)    R = A / (A + C)
People also define and use a cluster F-measure,
which is probably a better measure.
Sec. 16.3
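A sketch recomputing the Rand Index and the pairwise Precision/Recall from the earlier table, using the letters as defined above:

def rand_index(A, B, C, D):
    # A: same class & same cluster; B: different classes & same cluster;
    # C: same class & different clusters; D: different classes & different clusters.
    return (A + D) / (A + B + C + D)

A, B, C, D = 20, 20, 24, 72
print(round(rand_index(A, B, C, D), 2))    # 0.68
print(A / (A + B), round(A / (A + C), 2))  # pairwise Precision = 0.5, Recall ~ 0.45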
48. Introduction to Information Retrieval
Final word and resources
In clustering, clusters are inferred from the data without
human input (unsupervised learning)
However, in practice, it’s a bit less clear: there are many
ways of influencing the outcome of clustering: number of
clusters, similarity measure, representation of documents, .
. .
Resources
IIR 16 except 16.5
IIR 17.1–17.3