Measuring Semantic Similarity and Relatedness in the Biomedical Domain: Methods and Applications
Ted Pedersen, Ph.D.
Department of Computer Science
University of Minnesota, Duluth
tpederse@d.umn.edu
http://www.d.umn.edu/~tpederse
Topics
● Semantic similarity vs. semantic relatedness
● How to measure similarity
– With ontologies and corpora
● How to measure relatedness
– With definitions and corpora
● Applications?
– Word Sense Disambiguation
– Sentiment Classification
What are we measuring?
● Concept pairs
– Assign a numeric value that quantifies how
similar or related two concepts are
● Not words
– Must know concept underlying a word form
– Cold may be temperature or illness
● Concept Mapping
● Word Sense Disambiguation
– This tutorial assumes that's been resolved
Why?
● Being able to organize concepts by their
similarity or relatedness to each other is a
fundamental operation in the human mind,
and in many problems in Natural Language
Processing and Artificial Intelligence
● If we know a lot about X, and if we know Y is
similar to X, then a lot of what we know about
X may apply to Y
– Use X to explain or categorize Y
GOOD NEWS!
Free Open Source Software!
● WordNet::Similarity
– http://wn-similarity.sourceforge.net
– General English
– Widely used (750+ citations)
● UMLS::Similarity
– http://umls-similarity.sourceforge.net
– Unified Medical Language System
– Spun off from WordNet::Similarity
● But has added a whole lot!
Similar or Related?
● Similarity based on is-a relations
– How much is X like Y?
– Share ancestor in is-a hierarchy
● LCS : least common subsumer
● The closer (deeper) the shared ancestor, the more similar
● Tetanus and strep throat are similar
– both are kinds of bacterial infections
Least Common Subsumer (LCS)
Similar or Related?
● Relatedness more general
– How much is X related to Y?
– Many ways to be related
● is-a, part-of, treats, affects, symptom-of, ...
● Tetanus and deep cuts are related but they
really aren't similar
– (deep cuts can cause tetanus)
● All similar concepts are related, but not all
related concepts are similar
Measures of Similarity
(WordNet::Similarity & UMLS::Similarity)
● Path Based
– Rada et al., 1989 (path)
– Caviedes & Cimino, 2004 (cdist)*
● cdist only in UMLS::Similarity
● Path + Depth
– Wu & Palmer, 1994 (wup)
– Leacock & Chodorow, 1998 (lch)
– Zhong et al., 2002 (zhong)*
– Nguyen & Al-Mubaid, 2006 (nam)*
● zhong and nam only in UMLS::Similarity
● Path + Information Content
– Resnik, 1995 (res)
– Jiang & Conrath, 1997 (jcn)
– Lin, 1998 (lin)
Path Based Measures
● Distance between concepts (nodes) in tree
intuitively appealing
● Spatial orientation good for networks or maps, but not for is-a hierarchies
– Reasonable approximation sometimes
– Assumes all paths have same “weight”
– But, more specific (deeper) paths tend to
travel less semantic distance
● Shortest path a good start, but needs
corrections
Shortest is-a Path
● path(a,b) = 1 / shortest is-a path(a,b)
We count nodes...
● Maximum = 1
– self similarity
– path(tetanus,tetanus) = 1
● Minimum = 1 / (longest path in is-a tree)
– path(typhoid, oral thrush) = 1/7
– path(moccasin athlete's foot, strep throat) = 1/7
– etc...
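
To make the node counting concrete, here is a minimal Python sketch of the path measure over a toy is-a hierarchy. It loosely mirrors the slides' infection example; the "strep infections" intermediate node is an assumption added so the node counts match the worked values on the next slides.

```python
# Toy is-a hierarchy (child -> parent); "strep infections" is an assumed
# intermediate node so counts match the slides' worked examples.
PARENT = {
    "bacterial infections": "infections",
    "fungal infections": "infections",
    "strep infections": "bacterial infections",
    "strep throat": "strep infections",
    "tetanus": "bacterial infections",
    "yeast infections": "fungal infections",
}

def chain(c):
    """Nodes from concept c up to the root, inclusive."""
    out = [c]
    while out[-1] in PARENT:
        out.append(PARENT[out[-1]])
    return out

def path_sim(a, b):
    """1 / (number of nodes on the shortest is-a path between a and b)."""
    up_a, up_b = chain(a), chain(b)
    lcs = next(n for n in up_a if n in up_b)       # least common subsumer
    nodes = up_a.index(lcs) + up_b.index(lcs) + 1  # count the LCS once
    return 1.0 / nodes

print(path_sim("tetanus", "tetanus"))       # 1.0 : self similarity
print(path_sim("strep throat", "tetanus"))  # 0.25, as on the next slide
```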
path(strep throat, tetanus) = .25
path(bacterial infection, yeast infection) = .25
?
● Are bacterial infection and yeast infection
similar to the same degree as are tetanus and
strep throat ?
● The path measure says “yes, they are.”
Path + Depth
● Path alone doesn't account for specificity
● Deeper concepts more specific
● Paths between deeper concepts travel less
semantic distance
Wu and Palmer, 1994
● wup(a,b) = 2 * depth(LCS(a,b)) / (depth(a) + depth(b))
● depth(x) = shortest is-a path(root,x)
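
And a matching sketch of wup over the same toy hierarchy (repeated here so the block runs on its own; the intermediate node is again an assumption). It reproduces the worked values on the next two slides.

```python
PARENT = {  # same toy hierarchy as in the path sketch above
    "bacterial infections": "infections",
    "fungal infections": "infections",
    "strep infections": "bacterial infections",
    "strep throat": "strep infections",
    "tetanus": "bacterial infections",
    "yeast infections": "fungal infections",
}

def chain(c):
    out = [c]
    while out[-1] in PARENT:
        out.append(PARENT[out[-1]])
    return out

def depth(c):
    """Nodes on the is-a path from the root down to c (depth(root) = 1)."""
    return len(chain(c))

def wup(a, b):
    up_a, up_b = chain(a), chain(b)
    lcs = next(n for n in up_a if n in up_b)  # least common subsumer
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(round(wup("strep throat", "tetanus"), 2))                   # 0.57
print(round(wup("bacterial infections", "yeast infections"), 2))  # 0.4
```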
wup(strep throat, tetanus) = (2*2)/(4+3) = .57
wup(bacterial infections, yeast infections) = (2*1)/(2+3) = .4
?
● Wu and Palmer say that strep throat and
tetanus (.57) are more similar than are
bacterial infections and yeast infections (.4)
● Path says that strep throat and tetanus (.25) are just as similar as bacterial infections and yeast infections (.25)
Information Content
● ic(concept) = -log p(concept) [Resnik 1995]
– Need to count concepts
– Term frequency + inherited frequency
– p(concept) = (tf + if) / N
● Depth shows specificity but not frequency
● Low frequency concepts often much more
specific than high frequency ones
– Related to Zipf's Law of Meaning? (more frequent words have more senses)
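
A small sketch of the computation; N = 365,820 is the total concept count used on the following slides, while the tf/if values below are invented purely for illustration.

```python
import math

N = 365_820  # total concept count, from the slides

def information_content(tf, inherited):
    """IC = -log p(concept), with p = (tf + if) / N."""
    return -math.log((tf + inherited) / N)

# A broad concept inherits large counts from its descendants and so is
# less informative than a rarely counted, specific one (values made up).
print(information_content(tf=500, inherited=35_000))  # broad  -> small IC
print(information_content(tf=120, inherited=0))       # narrow -> large IC
```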
Information Content: term frequency (tf)
Information Content: inherited frequency (if)
Information Content: IC = -log(f / N); final count (f = tf + if, N = 365,820)
Lin, 1998
● lin(a,b) = 2 * IC(LCS(a,b)) / (IC(a) + IC(b))
● Look familiar?
● wup(a,b) = 2 * depth(LCS(a,b)) / (depth(a) + depth(b))
lin (strep throat, tetanus) =
2 * 2.26 / (5.21 + 4.11) = 0.485
lin (bacterial infection, yeast infection) =
2 * 0.71 / (2.26+2.81) = 0.280
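
These worked values are easy to check; a tiny sketch that plugs the slides' IC values straight into the lin formula:

```python
def lin(ic_a, ic_b, ic_lcs):
    """lin(a,b) = 2 * IC(LCS(a,b)) / (IC(a) + IC(b))."""
    return 2 * ic_lcs / (ic_a + ic_b)

print(round(lin(5.21, 4.11, ic_lcs=2.26), 3))  # strep throat vs tetanus: 0.485
print(round(lin(2.26, 2.81, ic_lcs=0.71), 3))  # bacterial vs yeast: 0.28
```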
?
● Lin says that strep throat and tetanus (.49) are
more similar than are bacterial infection and
yeast infection (.28)
● Wu and Palmer say that strep throat and
tetanus (.57) are more similar than are
bacterial infection and yeast infection (.4)
● Path says that strep throat and tetanus (.25) are just as similar as bacterial infection and yeast infection (.25)
How to decide??
● Hierarchies best suited for nouns
● If you have a hierarchy of concepts, shortest
path can be distorted/misleading
● If the hierarchy is carefully developed and well
balanced, then wup can perform well
● If the hierarchy is not balanced or unevenly
developed, the information content measures
can help correct that
What about concepts
not connected via is-a relations?
● Connected via other relations?
– Part-of, treatment-of, causes, etc.
● Not connected at all?
– In different sections (axes) of an ontology
(infections and treatments)
– In different ontologies entirely (SNOMED CT and FMA)
● Relatedness!
– Use definition information
– No is-a relations so can't be similarity
Measures of relatedness
● Path based
– Hirst & St-Onge, 1998 (hso)
● Definition based
– Lesk, 1986
– Adapted lesk (lesk)
● Banerjee & Pedersen, 2003
● Definition + corpus
– Gloss Vector (vector)
● Patwardhan & Pedersen, 2006
Path based relatedness
● Ontologies include relations other than is-a
● These can be used to find shortest paths
between concepts
– However, a path made up of different kinds
of relations can lead to big semantic jumps
– Aspirin treats headaches, which are a symptom of the flu, which can be prevented by a flu vaccine, which is recommended for children
● … so aspirin and children are related??
Measuring relatedness with definitions
● Related concepts defined using many of the
same terms
● But definitions are short and inconsistent
● Concepts don't need to be connected via
relations or paths to measure them
– Lesk, 1986
– Adapted Lesk, Banerjee & Pedersen, 2003
Two separate ontologies...
Could join them together … ?
Each concept has a definition
Find overlaps in definitions...
Overlaps
● Oral Thrush and Alopecia
– side effect of chemotherapy
● Can't see this in the structure of is-a hierarchies
● To the is-a structure, oral thrush and folliculitis look just as similar
● Alopecia and Folliculitis
– hair disorder & hair
● Reflects structure of is-a hierarchies
● If you start with text like this maybe you can
build is-a hierarchies automatically!
– Future work...
Lesk and Adapted Lesk
● Lesk, 1986 : measure overlaps in definitions to
assign senses to words
– The more overlaps between two senses
(concepts), the more related
● Banerjee & Pedersen, 2003, Adapted Lesk
– Augment definition of each concept with
definitions of related concepts
● Build a super gloss
– Increase chance of finding overlaps
● lesk in WordNet::Similarity & UMLS::Similarity
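
A minimal sketch of the gloss-overlap idea: count the words two definitions share. The glosses below are invented paraphrases, not actual dictionary or UMLS definitions, and the adapted measure does more than this (it rewards longer contiguous overlaps and matches against the super gloss).

```python
def overlap(gloss_a, gloss_b):
    """Count the word types two definitions share."""
    return len(set(gloss_a.lower().split()) & set(gloss_b.lower().split()))

thrush = "a fungal infection of the mouth, a side effect of chemotherapy"
alopecia = "loss of hair, a side effect of chemotherapy"
print(overlap(thrush, alopecia))  # 5 : a, of, side, effect, chemotherapy
```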
The problem with definitions ...
● Definitions contain variations in terminology that can make exact overlaps impossible to find
● Alopecia : … a result of cancer treatment
● Thrush : … a side effect of chemotherapy
– A real-life example; I modified the alopecia definition to work better with Lesk!
– NO MATCHES!
● How can we see that “result” and “side effect”
are similar, as are “cancer treatment” and
“chemotherapy” ?
Gloss Vector Measure
of Semantic Relatedness
● Rely on co-occurrences of terms
– Terms that occur within some given number
of terms of each other
● Allows for a fuzzier notion of matching
● Exploits second order co-occurrences
– Friend of a friend relation
– Suppose cancer_treatment and
chemotherapy don't occur in text with each
other. But, suppose that “survival” occurs
with each.
– cancer_treatment and chemotherapy are
second order co-occurrences via “survival”
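
One way such co-occurrences might be counted, sketched with a fixed word window; the two-sentence corpus is invented so that a second-order co-occurrence appears.

```python
from collections import Counter, defaultdict

def cooccurrences(sentences, k=2):
    """Count words appearing within k words of each other."""
    cooc = defaultdict(Counter)
    for sent in sentences:
        words = sent.lower().split()
        for i, w in enumerate(words):
            for n in words[max(0, i - k):i] + words[i + 1:i + 1 + k]:
                cooc[w][n] += 1
    return cooc

corpus = ["chemotherapy improved survival", "cancer_treatment improved survival"]
cooc = cooccurrences(corpus)
# chemotherapy and cancer_treatment never co-occur directly, but both
# co-occur with "survival" -- a second order co-occurrence
print(cooc["chemotherapy"]["survival"], cooc["cancer_treatment"]["survival"])
```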
Gloss Vector Measure
of Semantic Relatedness
● Replace words or terms in definitions with
vector of co-occurrences observed in corpus
● Defined concept now represented by an
averaged vector of co-occurrences
● Measure relatedness of concepts via cosine
between their respective vectors
● Patwardhan and Pedersen, 2006 (EACL)
– Inspired by Schütze, 1998 (CL)
● vector in WordNet::Similarity & UMLS::Similarity
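
A toy sketch of that pipeline with an invented co-occurrence table: each gloss word is replaced by its co-occurrence vector, the vectors are summed into a concept vector (summing and averaging give the same cosine), and concepts are compared by cosine.

```python
import math
from collections import Counter

COOC = {  # invented first-order co-occurrence counts
    "chemotherapy":     Counter({"survival": 5, "dose": 3}),
    "cancer_treatment": Counter({"survival": 4, "hospital": 2}),
}

def concept_vector(gloss_words):
    """Sum the co-occurrence vectors of the words in a gloss."""
    vec = Counter()
    for w in gloss_words:
        vec.update(COOC.get(w, {}))
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

a = concept_vector(["chemotherapy"])
b = concept_vector(["cancer_treatment"])
# no shared gloss words at all, yet clearly related via "survival"
print(round(cosine(a, b), 2))  # 0.77
```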
Experimental Results
● Vector > Lesk > Info Content > Depth > Path
– Clear trend across various studies
● Dramatic differences when comparing to
human reference standards (Vector > Lesk >>
Info Content > Depth > Path)
– Banerjee and Pedersen, 2003 (IJCAI)
– Pedersen, et al. 2007 (JBI)
● Differences less extreme in extrinsic task-
based evaluations
– Human raters mix up similarity &
relatedness?
So far we've shown that ...
● … we can quantify the similarity and
relatedness between concepts using a variety
of sources of information
– Paths
– Depths
– Information content
– Definitions
– Co-occurrence / corpus data
● There is open source software to help you!
Sounds great! What now?
● SenseRelate Hypothesis : Most words in text
will have multiple possible senses and will
often be used with the sense most related to
those of surrounding words
– He either has a cold or the flu
● Cold not likely to mean air temperature
● The underlying sentiment of a text can be
discovered by determining which emotion is
most related to the words in that text
– I cried a lot after my mother died.
● Happy?
SenseRelate!
● In coherent text words will be used in similar
or related senses, and these will also be
related to the overall topic or mood of a text
● First applied to WSD in 2002
– Banerjee and Pedersen, 2002 (WordNet)
– Patwardhan et al., 2003 (WordNet)
– Pedersen and Kolhatkar 2009 (WordNet)
– McInnes et al., 2011 (UMLS)
● Recently applied to emotion classification
– Pedersen, 2012 (i2b2 suicide notes
challenge)
GOOD NEWS!
Free Open Source Software!
● WordNet::SenseRelate
– AllWords, TargetWord, WordToSet
– http://senserelate.sourceforge.net
● UMLS::SenseRelate
– AllWords
– http://search.cpan.org/dist/UMLS-SenseRelate/
SenseRelate for WSD
● Assign each word the sense which is most
similar or related to one or more of its
neighbors
– Pairwise
– 2 or more neighbors
● Pairwise algorithm results in a trellis much like
in HMMs
– More neighbors add information, but also a lot of computational complexity
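
The core selection step can be sketched in a few lines; relatedness() stands in for any of the measures above, and the sense labels and scores below are invented for illustration.

```python
def disambiguate(target_senses, neighbor_senses, relatedness):
    """Pick the target sense with the highest summed relatedness to the
    best-matching sense of each neighboring word."""
    def score(sense):
        return sum(max(relatedness(sense, n) for n in senses)
                   for senses in neighbor_senses)
    return max(target_senses, key=score)

# "cold" next to "flu": the illness sense should win under any sensible measure
toy = {("cold/illness", "flu/illness"): 0.9,
       ("cold/temperature", "flu/illness"): 0.1}
print(disambiguate(["cold/illness", "cold/temperature"],
                   [["flu/illness"]],
                   lambda a, b: toy.get((a, b), 0.0)))  # cold/illness
```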
SenseRelate - pairwise
SenseRelate – 2 neighbors
General Observations on WSD Results
● Nouns more accurate; verbs, adjectives, and
adverbs less so
● Increasing the window size nearly always
improves performance
● Jiang-Conrath measure often a high performer
for nouns (e.g., Patwardhan et al. 2003)
● Info content measures perform well with
clinical text (McInnes et al. 2011)
● Vector and lesk have a coverage advantage
– they can handle pairs the other measures can't (e.g., concepts not connected by is-a relations)
Recent Specific Experiment
● Compare efficacy of different measures when
performing WSD using UMLS::SenseRelate
● Evaluate on MSH-WSD data (from NLM)
● Information Content based on concept counts
from Medline (UMLSonMedline, from NLM)
● More details available
– McInnes, et al. 2011 (AMIA)
– McInnes & Pedersen, in review
MSH-WSD data set
● Contains 203 ambiguous terms and acronyms
– Instances are from Medline
– CUIs from the 2009AB version of the UMLS
– Each word has avg. 187 instances, 2.08
possible senses, and 54.5% majority sense
● Leverages the fact that Medline is manually indexed with Medical Subject Headings (associated with CUIs)
● http://wsd.nlm.nih.gov/collaboration.shtml
Results
Window size | path | wup | jcn | lin | lesk | vector
     2      | .63  | .63 | .65 | .65 | .67  | .68
     5      | .66  | .67 | .68 | .69 | .68  | .68
    10      | .68  | .69 | .70 | .71 | .68  | .67
    25      | .70  | .70 | .73 | .74 | .68  | .65
(path and wup are path based; jcn and lin use information content; lesk and vector are relatedness measures)
SenseRelate for
Sentiment Classification
● Find emotion most related to context
– Similarity less effective, since many words can be related to an emotion but fewer are similar to it
● Related to happy? : love, food, success, ...
● Similar to happy? : joyful, ecstatic, pleased, …
– Pairwise comparisons between emotion and
senses of words in context
● Same form as Naive Bayesian model or
Latent Variable model
– WordNet::SenseRelate::WordToSet
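
A sketch of the WordToSet scoring under the same assumptions, with an invented relatedness table standing in for a real measure:

```python
def classify_emotion(context_words, emotions, relatedness):
    """Return the emotion with the highest summed relatedness to the text."""
    return max(emotions,
               key=lambda e: sum(relatedness(e, w) for w in context_words))

toy = {("sadness", "cried"): 0.8, ("sadness", "died"): 0.7,
       ("happiness", "cried"): 0.1, ("happiness", "died"): 0.1}
print(classify_emotion(["cried", "died"], ["sadness", "happiness"],
                       lambda e, w: toy.get((e, w), 0.0)))  # sadness
```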
SenseRelate - WordToSet
Experimental Results
● Sentiment classification results in 2011 i2b2
suicide notes challenge were disappointing
(Pedersen, 2012)
– Suicide notes not very emotional!
– In many cases reflect a decision made and
focus on settling affairs
Future Work
● Find new domains and types of problems
– EHR, clinical records, …
● Integrate Unsupervised Clustering with
WordNet::Similarity and UMLS::Similarity
– http://senseclusters.sourceforge.net
● Exploit the graphical nature of SenseRelate
– e.g., Minimal Spanning Trees / Viterbi
Algorithm to solve larger problem spaces?
● Attract and support users for all of these tools!
UMLS::Similarity Collaborators
● Serguei Pakhomov :
– Assoc. Professor, UMTC
● Bridget McInnes :
– PhD UMTC, 2009
– Post-doc UMTC, 2009 - 2011
– Now at Securboration, NC
● Ying Liu :
– PhD UAB, 2007
– Post-doc UMTC 2009 – 2011
– Until recently at City of Hope, LA
Acknowledgments
● This work on semantic similarity and
relatedness has been supported by a National
Science Foundation CAREER award (2001 –
2007, #0092784, PI Pedersen) and by the
National Library of Medicine, National
Institutes of Health (2008 – 2012,
1R01LM009623-01A2, PI Pakhomov)
● The contents of this talk are solely my responsibility and do not necessarily represent the official views of the National Science Foundation or the National Institutes of Health.
Conclusion
● Measures of semantic similarity and
relatedness are supported by a rich body of theory and by open source software
– http://wn-similarity.sourceforge.net
– http://umls-similarity.sourceforge.net
● http://atlas.ahc.umn.edu
● These measures can be used as building
blocks for many NLP and AI applications
– Word sense disambiguation
– Sentiment classification
References
● S. Banerjee and T. Pedersen. An adapted Lesk algorithm for
word sense disambiguation using WordNet. In Proceedings of
the Third International Conference on Intelligent Text
Processing and Computational Linguistics, pages 136-145,
Mexico City, February 2002.
● S. Banerjee and T. Pedersen. Extended gloss overlaps as a
measure of semantic relatedness. In Proceedings of the
Eighteenth International Joint Conference on Artificial
Intelligence, pages 805-810, Acapulco, August 2003.
● J. Caviedes and J. Cimino. Towards the development of a
conceptual distance metric for the UMLS. Journal of
Biomedical Informatics, 37(2):77-85, April 2004.
● J. Jiang and D. Conrath. Semantic similarity based on corpus
statistics and lexical taxonomy. In Proceedings on
International Conference on Research in Computational
Linguistics, pages 19-33, Taiwan, 1997.
● C. Leacock and M. Chodorow. Combining local context and
WordNet similarity for word sense identification. In C.
Fellbaum, editor, WordNet: An electronic lexical database,
pages 265-283. MIT Press, 1998.
● M.E. Lesk. Automatic sense disambiguation using machine
readable dictionaries: how to tell a pine cone from an ice cream
cone. In Proceedings of the 5th annual international conference on
Systems documentation, pages 24-26. ACM Press, 1986.
● D. Lin. An information-theoretic definition of similarity. In
Proceedings of the International Conference on Machine Learning,
Madison, August 1998.
● B. McInnes, T. Pedersen, Y. Liu, G. Melton and S. Pakhomov.
Knowledge-based Method for Determining the Meaning of
Ambiguous Biomedical Terms Using Information Content Measures
of Similarity. In Proceedings of the Annual Symposium
of the American Medical Informatics Association, pages 895-904,
Washington, DC, October 2011.
● H.A. Nguyen and H. Al-Mubaid. New ontology-based semantic
similarity measure for the biomedical domain. In Proceedings of the
IEEE International Conference on Granular Computing, pages 623-628, Atlanta, GA, May 2006.
● S. Patwardhan, S. Banerjee, and T. Pedersen. Using measures of
semantic relatedness for word sense disambiguation. In Proceedings
of the Fourth International Conference on Intelligent Text
Processing and Computational Linguistics, pages 241-257,
Mexico City, February 2003.
● S. Patwardhan and T. Pedersen. Using WordNet-based Context
Vectors to Estimate the Semantic Relatedness of Concepts. In
Proceedings of the EACL 2006 Workshop on Making Sense of
Sense: Bringing Computational Linguistics and Psycholinguistics
Together, pages 1-8, Trento, Italy, April 2006.
● T. Pedersen. Rule-based and lightly supervised methods to
predict emotions in suicide notes. Biomedical Informatics
Insights, 2012:5 (Suppl. 1):185-193, January 2012.
● T. Pedersen and V. Kolhatkar. WordNet::SenseRelate::AllWords - a broad coverage word sense tagger that
maximizes semantic relatedness. In Proceedings of the North
American Chapter of the Association for Computational
Linguistics - Human Language Technologies 2009
Conference, pages 17-20, Boulder, CO, June 2009.
● T. Pedersen, S. Pakhomov, S. Patwardhan, and C. Chute.
Measures of semantic similarity and relatedness in the
biomedical domain. Journal of Biomedical Informatics, 40(3):
288-299, June 2007.
● R. Rada, H. Mili, E. Bicknell, and M. Blettner. Development
and application of a metric on semantic nets. IEEE
Transactions on Systems, Man and Cybernetics, 19(1):17-30,
1989.
● P. Resnik. Using information content to evaluate semantic
similarity in a taxonomy. In Proceedings of the 14th
International Joint Conference on Artificial Intelligence, pages
448-453, Montreal, August 1995.
● H. Schütze. Automatic word sense discrimination.
Computational Linguistics, 24(1):97-123, 1998.
● J. Zhong, H. Zhu, J. Li, and Y. Yu. Conceptual graph matching
for semantic search. Proceedings of the 10th International
Conference on Conceptual Structures, pages 92-106, 2002.

Talk at UAB, April 12, 2013
