Classifying Concept Mappings using Instance Similarity Features
1. Introduction Three classifiers Experiments and results Summary
Similarity Features, and their Role in Concept
Alignment Learning
Shenghui Wang (1), Gwenn Englebienne (2), Christophe Guéret (1),
Stefan Schlobach (1), Antoine Isaac (1), Martijn Schut (1)
1 Vrije Universiteit Amsterdam
2 Universiteit van Amsterdam
SEMAPRO 2010
Florence
Outline
1 Introduction
Classification of concept mappings based on instance
similarity
2 Three classifiers
Markov Random Field
Multi-objective Evolution Strategy
Support Vector Machine
3 Experiments and results
4 Summary
Thesaurus mapping
SemanTic Interoperability To access Cultural Heritage
(STITCH) through mappings between thesauri
Scope of the problem:
Big thesauri with tens of thousands of concepts
Huge collections (e.g., National Library of the Netherlands:
80 km of books in one collection)
Heterogeneous (e.g., books, manuscripts, illustrations, etc.)
Multi-lingual problem
Solving matching problems is one step towards solving the
interoperability problem.
e.g., “plankzeilen” vs. “surfsport”
e.g., “archeology” vs. “excavation”
Automatic alignment techniques
Lexical
labels and textual information of entities
Structural
structure of the formal definitions of entities, position in the
hierarchy
Extensional
statistical information of instances, i.e., objects indexed with
entities
Background knowledge
using a shared conceptual reference to find links indirectly
Pros and cons
Advantages
Simple to implement
Interesting results
Disadvantages
Requires sufficient amounts of common instances
Only uses part of the available information
Representing concepts and the similarity between them
Figure: Instance metadata (Creator, Title, Publisher, ...) is aggregated per field into bag-of-words concept features; for a pair (Concept 1, Concept 2), the cosine distance between corresponding fields yields the pair features f1, f2, f3, ...
Classification based on instance similarity
Each pair of concepts is treated as a point in a “similarity
space”
Its position is defined by the features of the pair.
The features of the pair are the different measures of similarity
between the concepts’ instances.
Hypothesis: the label of a point — which represents whether
the pair is a positive mapping or negative one — is correlated
with the position of this point in this space.
Given already-labelled points, a new point can be classified,
i.e., assigned the right label, based on its location in the
similarity space as given by the actual similarity values.
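The per-field similarity features above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the field names, term counts and concept contents are hypothetical, and each pair feature is the cosine similarity between the corresponding bag-of-words vectors.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-count vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pair_features(concept1, concept2):
    """One similarity feature per metadata field (Creator, Title, ...)."""
    return [cosine_similarity(concept1[f], concept2[f]) for f in concept1]

# Hypothetical aggregated bags of words per field for two concepts
c1 = {"creator": {"jansen": 4, "vries": 1},
      "title": {"zeilen": 3, "sport": 1},
      "publisher": {"elsevier": 2}}
c2 = {"creator": {"jansen": 2},
      "title": {"surfen": 2, "sport": 2},
      "publisher": {"elsevier": 4, "brill": 1}}

print(pair_features(c1, c2))  # one value per field, each in [0, 1]
```

Each concept pair thus becomes a point in the similarity space, with one coordinate per metadata field.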
Research questions
How do different classifiers perform on this instance-based
mapping task?
What are the benefits of using a machine learning algorithm
to determine the importance of features?
Are there regularities with respect to the relative importance
given to specific features for similarity computation? Are these
weights related to characteristics of the application data?
Three classifiers used
Markov Random Field (MRF)
Evolutionary Strategy (ES)
Support Vector Machine (SVM)
Markov Random Field
Let $T = \{ (x^{(i)}, y^{(i)}) \}_{i=1}^{N}$ be the training set
$x^{(i)} \in \mathbb{R}^K$, the features
$y^{(i)} \in Y = \{\text{positive}, \text{negative}\}$, the label
The conditional probability of a label given the input is
modelled as
\[
p(y^{(i)} \mid x^{(i)}, \theta) = \frac{1}{Z(x^{(i)}, \theta)} \exp\left( \sum_{j=1}^{K} \lambda_j \phi_j(y^{(i)}, x^{(i)}) \right) \quad (1)
\]
where $\theta = \{ \lambda_j \}_{j=1}^{K}$ are the weights associated with the feature
functions $\phi_j$ and $Z(x^{(i)}, \theta)$ is a normalisation constant
The classifier used: Markov Random Field (cont’)
The likelihood of the data set for given model parameters
$\theta$ is given by:
\[
p(T \mid \theta) = \prod_{i=1}^{N} p(y^{(i)} \mid x^{(i)}) \quad (2)
\]
During learning, our objective is to find the most likely values
of $\theta$ for the given training data.
The decision criterion for assigning a label $y^{(i)}$ to a new pair
of concepts $i$ is then simply given by:
\[
y^{(i)} = \operatorname*{argmax}_{y} \; p(y \mid x^{(i)}) \quad (3)
\]
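The log-linear model of Eqs. (1) and (3) can be sketched as follows. This is a minimal illustration with made-up weights and a common simplification of the feature functions (the raw similarity features fire for the "positive" label only); it is not the paper's trained model.

```python
from math import exp

def phi(y, x):
    """Label-dependent feature functions: the raw similarity features
    for the 'positive' label, zero for 'negative' (a common reduction)."""
    return x if y == "positive" else [0.0] * len(x)

def predict(x, lambdas, labels=("positive", "negative")):
    """Eq. (3): pick the label maximising p(y | x) under Eq. (1)."""
    scores = {y: exp(sum(l * f for l, f in zip(lambdas, phi(y, x))))
              for y in labels}
    Z = sum(scores.values())            # normalisation constant Z(x, theta)
    probs = {y: s / Z for y, s in scores.items()}
    return max(probs, key=probs.get), probs

# Hypothetical weights and similarity features for one concept pair
lambdas = [2.0, 1.5, -0.5]
label, probs = predict([0.9, 0.8, 0.1], lambdas)
print(label, probs)
```

Learning would then adjust the lambdas to maximise the likelihood of Eq. (2) over the training pairs.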
Multi-objective Evolution Strategy
Evolution strategies (ES) have two characteristic properties:
first, they are used for continuous-value optimisation; second,
they are self-adaptive.
An ES individual is a direct model of the sought solution,
defined by $\Lambda$ and the strategy parameters $\Sigma$:
\[
\Lambda, \Sigma \leftrightarrow \lambda_1, \ldots, \lambda_K, \sigma_1, \ldots, \sigma_K \quad (4)
\]
The fitness function is related to the decision criterion for the
ES, which is sign-based:
\[
L^{\mathrm{ES}}_i =
\begin{cases}
1 & \text{if } \sum_{j=1}^{K} \lambda_j F_{ij} > 0 \\
0 & \text{otherwise}
\end{cases} \quad (5)
\]
Multi-objective Evolution Strategy (cont’)
Maximising the number of correctly classified positive pairs and
the number of correctly classified negative pairs are two
conflicting objectives:
\[
f_1(\Lambda \mid F, L) = \#\left\{ F_i \;\middle|\; \sum_{j=1}^{K} \lambda_j F_{ij} > 0 \wedge L_i = 1 \right\} \quad (6)
\]
\[
f_2(\Lambda \mid F, L) = \#\left\{ F_i \;\middle|\; \sum_{j=1}^{K} \lambda_j F_{ij} \leq 0 \wedge L_i = 0 \right\} \quad (7)
\]
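The two objectives in Eqs. (6) and (7) simply count, for a candidate weight vector, the correctly classified positive and negative pairs. A minimal sketch, with a hypothetical feature matrix F and gold labels L:

```python
def f1(weights, F, L):
    """Eq. (6): number of positive pairs with a positive weighted score."""
    return sum(1 for Fi, Li in zip(F, L)
               if sum(w * f for w, f in zip(weights, Fi)) > 0 and Li == 1)

def f2(weights, F, L):
    """Eq. (7): number of negative pairs with a non-positive weighted score."""
    return sum(1 for Fi, Li in zip(F, L)
               if sum(w * f for w, f in zip(weights, Fi)) <= 0 and Li == 0)

# Hypothetical similarity features (rows = concept pairs) and gold labels
F = [[0.9, 0.2], [0.2, 0.8], [0.8, 0.1], [0.1, 0.9]]
L = [1, 0, 1, 0]

print(f1([1.0, -1.0], F, L), f2([1.0, -1.0], F, L))  # both objectives satisfied
print(f1([1.0, 1.0], F, L), f2([1.0, 1.0], F, L))    # all scores positive: f2 drops
```

The second weight vector illustrates the conflict: labelling everything positive maximises f1 while driving f2 to zero, which is why a multi-objective method is used.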
Multi-objective Evolution Strategy (cont’)
Evolution process
Recombination: two parent individuals are combined using
different weightings, producing two new individuals
Mutation: one parent individual is perturbed into a new child
individual
Survivor selection: NSGA-II
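The recombination and mutation steps can be sketched generically as follows (NSGA-II survivor selection is omitted). This is an illustration of standard ES operators under assumed parameters, not the paper's exact configuration: individuals carry the lambdas plus one self-adapted sigma per lambda.

```python
import random
from math import exp

random.seed(0)

def recombine(parent_a, parent_b, alpha=0.3):
    """Weighted (arithmetic) recombination producing two children."""
    children = []
    for w in (alpha, 1 - alpha):
        lam = [w * a + (1 - w) * b for a, b in zip(parent_a[0], parent_b[0])]
        sig = [w * a + (1 - w) * b for a, b in zip(parent_a[1], parent_b[1])]
        children.append((lam, sig))
    return children

def mutate(individual, tau=0.5):
    """Self-adaptive mutation: each sigma is perturbed log-normally,
    then the corresponding lambda is perturbed using that sigma."""
    lambdas, sigmas = individual
    new_sigmas = [s * exp(tau * random.gauss(0, 1)) for s in sigmas]
    new_lambdas = [l + s * random.gauss(0, 1)
                   for l, s in zip(lambdas, new_sigmas)]
    return new_lambdas, new_sigmas

parent1 = ([1.0, -0.5, 0.2], [0.1, 0.1, 0.1])
parent2 = ([0.3, 0.8, -0.1], [0.2, 0.2, 0.2])
child1, child2 = recombine(parent1, parent2)
mutant = mutate(parent1)
print(child1, mutant)
```

Self-adaptation means the mutation step sizes (the sigmas) evolve along with the solution, which is the second characteristic ES property mentioned above.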
Support Vector Machine
A Support Vector Machine (SVM) is used as a maximum-margin
classifier whose task is to find a hyperplane separating
the two classes.
The objective is to maximise the margin between the two
classes while minimising the risk of classification error.
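A maximum-margin separator of this kind can be sketched with a tiny linear SVM trained by sub-gradient descent on the regularised hinge loss. This is an illustration on made-up, well-separated 2-D "similarity features", not the paper's setup (which likely used an off-the-shelf SVM implementation).

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimise lam*||w||^2 + hinge loss by sub-gradient descent.
    Labels y must be in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:                       # point inside the margin
                w = [wj + lr * (yi * xj - 2 * lam * wj)
                     for wj, xj in zip(w, xi)]
                b += lr * yi
            else:                                # only apply the regulariser
                w = [wj - lr * 2 * lam * wj for wj in w]
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else -1

# Hypothetical 2-D similarity features: mappings (+1) vs. non-mappings (-1)
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.1, 0.2], [0.2, 0.1], [0.0, 0.3]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
print([predict(w, b, xi) for xi in X])
```

On separable data like this, the learned hyperplane reproduces the training labels; the regularisation term keeps the weights small, which corresponds to a large margin.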
Experiments and results
Thesauri to match: GTT (35K concepts) and Brinkman (5K concepts)
Instances: 1 million books
GTT annotated books: 307K
Brinkman annotated books: 490K
Dually annotated books: 222K
Feature selection for similarity calculation
λj  Feature          λj  Feature          λj  Feature
1   Lexical          11  author           21  issued
2   Jaccard          12  contributor      22  language
3   Date             13  creator          23  mods:edition
4   ISBN             14  dateCopyrighted  24  publisher
5   NBN              15  description      25  refNBN
6   PPN              16  extent           26  relation
7   SelSleutel       17  hasFormat        27  spatial
8   abstract         18  hasPart          28  subject
9   alternative      19  identifier       29  temporal
10  annotation       20  isVersionOf      30  title
Table: List of the features
Quality of learning
Figure: Precision, recall and F-measure for mappings with a positive
label (top) and a negative label (bottom), comparing MRF (features 1-30),
MRF (features 3-30), ES and SVM. Error bars indicate one standard
deviation over the 10 folds of cross-validation.
Relative importance of features
Which features of our instances are important for mapping?
Figure: Mutual information between features and labels
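Mutual information between a (discretised) feature and the mapping labels can be sketched as follows. This is a generic illustration with hypothetical binarised data, not the paper's computation.

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) = sum over observed (x,y) of p(x,y) log2(p(x,y)/(p(x)p(y)))."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Hypothetical binarised feature (1 = fields similar) vs. mapping label
feature = [1, 1, 1, 0, 0, 0, 1, 0]
labels  = [1, 1, 1, 0, 0, 0, 0, 1]
print(round(mutual_information(feature, labels), 3))  # → 0.189
```

A feature that perfectly predicted the labels would reach the label entropy (1 bit here); an independent feature would score 0.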
The ES lambdas are not conclusive on their own; however, the most
inconclusive lambdas correspond to the least informative features
Features that are important in terms of mutual information are
associated with large MRF weights
A more detailed analysis
Expected important features:
Label similarity (1), instance overlap (2), subject (28), etc.
Expected unimportant features:
Size of the book (16), format description (17) and language
(22), etc.
Surprisingly important features:
Date (14)
Surprisingly unimportant features:
Description (15) and abstract (8)
Summary
We applied three machine-learning classifiers to the
instance-based mapping task; of these, the MRF and the ES can
automatically identify meaningful features.
The MRF and the ES achieve a performance in the
neighbourhood of 90%, showing the validity of the approach.
Our analysis suggests that when many different description
features interact, there is no systematic correlation between
what a learning method could find and what an application
expert may anticipate.