Formally Measuring Agreement and Disagreement in Ontologies - K-CAP 09
Presentation at the K-CAP 09 conference. Defines measures of agreement and disagreement of ontologies with statements. These measures are extended into measures of agreement and disagreement between ontologies, and measures of consensus and controversy concerning a statement in an ontology repository. Experiments are realized using the Watson collection of ontologies.

  • Speaker notes: First, a quick presentation: Semantic Web, ontologies, etc. (big vision, but we are mainly talking about making real things out of it…). Using the Semantic Web? (what is there to reuse…?) Point out the need for a gateway… so Watson… applications. Also, use it for evaluating things: agreement/disagreement (would be useful). This is passive… contributing changes from Watson to Cupboard (image from Ontolog), plus they provide quality Semantic Web content (metadata, reviews, etc.). But that is still quite some effort, hence trust in the Watson plugin (and PowerAqua?).

Presentation Transcript

  • Formally Measuring Agreement and Disagreement in Ontologies
    Mathieu d’Aquin
    KMi, The Open University – m.daquin@open.ac.uk
  • Ontologies are knowledge artifacts…
    … and knowledge is subjective
    What do we mean?
  • What do we mean?
    Therefore, two different ontologies can express two different views (=disagree)
    Or the same/similar view(s) (=agree)
  • What do we mean?
    Similarly, an ontology can agree or disagree with a single ontology statement
    Seafood subClassOf Meat
    No, don’t think so…
    Yes, of course!
    ?
  • And why is that interesting?
    Being able to measure these (dis)agreements could help in choosing the right ontology, in understanding what exists, and in making sense of a collection of ontologies
    ?
  • A naïve approach…
    To detect disagreements, one could “simply” merge ontologies and check for incoherence/inconsistency
    SeaFood disjointWith Meat
    SeaFood subClassOf Meat
    DISAGREEMENT
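    As an aside, this naïve check could be scripted with an off-the-shelf reasoner. The snippet below is a rough sketch using owlready2; the toolkit choice and the example URLs are my own assumptions, not part of the presentation.

        # Hypothetical sketch of the naive approach: load two ontologies into the
        # same world (which effectively merges them) and ask a reasoner whether the
        # result is inconsistent or contains unsatisfiable (incoherent) classes.
        # The URLs are placeholders; owlready2 is an arbitrary toolkit choice.
        from owlready2 import (get_ontology, sync_reasoner, default_world,
                               OwlReadyInconsistentOntologyError)

        onto1 = get_ontology("http://example.org/seafood-disjoint-meat.owl").load()
        onto2 = get_ontology("http://example.org/seafood-subclassof-meat.owl").load()

        try:
            sync_reasoner()  # runs HermiT over everything loaded in the default world
            unsatisfiable = list(default_world.inconsistent_classes())
            if unsatisfiable:
                print("DISAGREEMENT (incoherence):", unsatisfiable)
            else:
                print("No incoherence or inconsistency detected")
        except OwlReadyInconsistentOntologyError:
            print("DISAGREEMENT (inconsistency)")

    The next slide explains why such a purely boolean check is too coarse.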
  • A naïve approach, but…
    … a bit limited
    Animal subClassOf Human vs Human subClassOf Animal → ?
    Lion subClassOf Species vs Lion type Species → ?
    Car subClassOf Vehicle vs EricCantona type FootballPlayer → ?
  • Requirements
    R1: Ontologies agree with themselves
    Kind of obvious
    R2: Covering different domains is not agreeing
    Car vs Footballer example.
    R3: There are different levels of agreements and disagreements
    Human subClassOf Animal vs Human disjointWith Animal
    Human subClassOf Animal vs Animal subClassOf Human
    R4: (dis)agreement measures should be independent from matching techniques
    Matching is necessary, but not part of the measure
    R5: It is possible to agree and disagree at the same time
    Lion type Species vs Lion subClassOf Species
  • Basic framework
    The clever bit: using 2 measures instead of one…
    Agreement(s, O) ∈ [0..1]
    Disagreement(s, O) ∈ [0..1]
    With s a statement and O an ontology
    Interpretation:
    A(s, O) = 1, D(s, O) = 0: O fully agrees with s
    A(s, O) = 0, D(s, O) = 1: O fully disagrees with s
    A(s, O) = 0, D(s, O) = 0: O doesn’t care about s
    A(s, O) > 0, D(s, O) > 0: O agrees to a certain extent with s, or disagrees to a certain extent with s, or both
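    Purely as an illustration (not from the presentation), here is a minimal Python sketch of how a pair of such values could be read back; the helper name and example values are hypothetical.

        # Hypothetical helper: read an (agreement, disagreement) pair following the
        # interpretation given on the slide. Both values are assumed to be in [0, 1].
        def interpret(a, d):
            if a == 1.0 and d == 0.0:
                return "O fully agrees with s"
            if a == 0.0 and d == 1.0:
                return "O fully disagrees with s"
            if a == 0.0 and d == 0.0:
                return "O does not say anything about s"
            return "O partially agrees and/or disagrees with s"

        print(interpret(1.0, 0.0))   # O fully agrees with s
        print(interpret(0.25, 0.5))  # O partially agrees and/or disagrees with s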
  • But how to calculate that?
    Considering a statement <subject, relation, object>, an ontology might agree or disagree with the relation between entities corresponding to subject and object.
    Extracting information about the relation between matching entities in an ontology:
    [Diagram: the statement s is matched against the ontology O; entities mentioned include Human, Animal, LivingBeing, and Bird.]
    R-Module: Human subClassOf Animal, Animal subClassOf Human, Animal equivalentClass Human
    Minimal RM: Animal equivalentClass Human
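    To make the extraction step concrete, here is a small sketch over a plain set of triples; this is an assumption of mine (the presentation works on ontologies and a reasoner, not raw triples), and the helper below is hypothetical.

        # Hypothetical sketch: collect the relations an ontology asserts between the
        # entities matched to the subject and object of s. A real implementation
        # would work on entailed relations (via a reasoner), not just asserted ones.
        SYMMETRIC = {"equivalentClass", "disjointWith", "sameAs"}

        def relations_between(triples, subject_, object_):
            """Relations holding between subject' and object', inverses marked '-1'."""
            found = set()
            for s, r, o in triples:
                if s == subject_ and o == object_:
                    found.add(r)
                elif s == object_ and o == subject_:
                    found.add(r if r in SYMMETRIC else r + "-1")
            return found

        O = {("Human", "subClassOf", "Animal"),
             ("Animal", "subClassOf", "Human"),
             ("Animal", "equivalentClass", "Human")}
        print(relations_between(O, "Human", "Animal"))
        # e.g. {'subClassOf', 'subClassOf-1', 'equivalentClass'}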
  • Simplified representation of MRMs
    With subject’ and object’ the entities in O matching the subject and object in s, the MRM of O regarding s can be represented as a list of relations:
    subject’ subClassOf object’ → subClassOf
    object’ subClassOf subject’ → subClassOf-1
    etc.
    Assumptions:
    The MRM is non-redundant (part of the definition)
    {equivalentClass} → OK
    {equivalentClass, subClassOf, subClassOf-1} → not OK
    The MRM should be coherent and consistent (guaranteed if O is coherent and consistent, in accordance with our 1st requirement: an ontology agrees with itself)
    {subClassOf} → OK
    {subClassOf, disjointWith} → not OK
    The MRM should be homogeneous in terms of modeling, i.e., it should not imply that an entity is at the same time a class and a property, for example.
    {fatherOf domain Person, fatherOf range Person} → OK
    {fatherOf domain Person, fatherOf subClassOf Person} → not OK
  • Nice Property and Measure definitions
    The good news:
    There is a small, finite set of possible MRMs, whatever O and s are
    Which means?
    The measures of agreement and disagreement can be entirely defined by providing explicitly the values in two matrices
    [Matrices: one for agreement and one for disagreement, indexed by the possible MRMs and by the relation in s, with agreement values ordered 0 < A1 < A2 < 1.]
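    A minimal sketch of the matrix-lookup idea follows; the MRM keys, the choice of relation, and the numeric values A1, A2, D1, D2 are placeholders of my own, not the values defined in the paper.

        # Hypothetical sketch: (dis)agreement as lookups in two tables indexed by the
        # simplified MRM and by the relation in s. All numeric values are placeholders.
        A1, A2 = 0.5, 0.75   # 0 < A1 < A2 < 1
        D1, D2 = 0.5, 0.75

        AGREEMENT = {
            (frozenset({"subClassOf"}), "subClassOf"): 1.0,
            (frozenset({"subClassOf-1"}), "subClassOf"): A1,
            (frozenset({"type"}), "subClassOf"): A2,
            (frozenset(), "subClassOf"): 0.0,   # empty MRM: O says nothing about s
        }
        DISAGREEMENT = {
            (frozenset({"subClassOf"}), "subClassOf"): 0.0,
            (frozenset({"subClassOf-1"}), "subClassOf"): D1,
            (frozenset({"type"}), "subClassOf"): D2,
            (frozenset(), "subClassOf"): 0.0,
        }

        def agreement(mrm, relation):
            return AGREEMENT.get((frozenset(mrm), relation), 0.0)

        def disagreement(mrm, relation):
            return DISAGREEMENT.get((frozenset(mrm), relation), 0.0)

        print(agreement({"subClassOf-1"}, "subClassOf"),
              disagreement({"subClassOf-1"}, "subClassOf"))   # A1 D1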
  • So?
    A1/D1: Animal subClassOf Human vs Human subClassOf Animal
    A2/D2: Lion subClassOf Species vs Lion type Species
    0/0: Car subClassOf Vehicle vs EricCantona type FootballPlayer
  • Measuring agreement and disagreement between whole ontologies, to understand a set of ontologies
    The big formulas:
    [Formula images not preserved in the transcript: agreement and disagreement between two whole ontologies, derived from the statement-level measures.]
    What to do now…
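    Since the exact formulas are not reproduced in the transcript, here is one plausible aggregation as a stand-in, assuming a simple average of the statement-level measures over the statements of both ontologies.

        # Hypothetical aggregation: ontology-level agreement as the mean of the
        # statement-level measure, taken in both directions. This is an assumption
        # standing in for the paper's formulas, which the transcript does not show.
        def ontology_agreement(O1, O2, agreement_fn):
            def one_way(source, target):
                statements = list(source)
                if not statements:
                    return 0.0
                return sum(agreement_fn(s, target) for s in statements) / len(statements)
            return (one_way(O1, O2) + one_way(O2, O1)) / 2.0

        # disagreement between ontologies would follow the same pattern, using the
        # statement-level disagreement measure instead.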
  • Using 21 ontologies containing a concept SeaFood
    Camp 1: SeaFood disjointWith Meat
    Camp 2: SeaFood subClassOf Meat
    [Graphs: disagreement and agreement links among the 21 ontologies.]
  • Measuring consensus and controversy in a collection of ontologies
    R, a repository of ontologies.
    Can be positive (high agreement, low disagreement) or negative (the contrary)
    High controversy means no clear cut between agreement and disagreement
    What else could we do?
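    The transcript does not include the definitions themselves; the sketch below is only one possible formulation consistent with the slide's wording (a signed consensus, and controversy that is high when agreement and disagreement coexist across R), not the paper's actual measures.

        # Hypothetical formulation: consensus is positive when agreement dominates
        # disagreement across the repository R and negative otherwise; controversy
        # is high when neither clearly dominates. Not the paper's definitions.
        def consensus(s, R, agreement_fn, disagreement_fn):
            ontologies = list(R)
            if not ontologies:
                return 0.0
            return sum(agreement_fn(s, O) - disagreement_fn(s, O)
                       for O in ontologies) / len(ontologies)

        def controversy(s, R, agreement_fn, disagreement_fn):
            ontologies = list(R)
            if not ontologies:
                return 0.0
            avg_a = sum(agreement_fn(s, O) for O in ontologies) / len(ontologies)
            avg_d = sum(disagreement_fn(s, O) for O in ontologies) / len(ontologies)
            return min(avg_a, avg_d)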
  • Watson: Thousands of ontologies automatically crawled from the Web (http://watson.kmi.open.ac.uk)
    a: global agreement, d: global disagreement, cs: consensus, ct: controversy
    Assessing the statements related to SeaFood in Watson
    Example
  • Using a set of 456 evaluated mappings between 2 large thesauri in the agricultural domain (71.3% precision)
    Conclusion: There is less consensus on incorrect mappings. Controversy indicates mappings that need to be investigated more.
    Can we use it for assessing mappings?
  • We provided definitions of measures of agreement and disagreement in ontologies, including consensus and controversy in ontology repositories.
    We showed that, when applied to real Web ontologies, this can help in assessing statements and mappings, and in getting an overview of a particular set of ontologies.
    We realized an implementation based on the Watson API. We intend to make it available through a Web service.
    Many applications to explore: visualization of ontology collections, ontology selection and reuse, propagation of trust based on agreement, …
    … and new directions: computing explanations for the (dis)agreement, different parameters and matching techniques for different applications, resolving disagreements (deciding who’s right), etc.
    Also, complexity and performance are still difficult issues.
    Conclusion
  • Thank You!
    Mathieu d’Aquin
    @mdaquin
    m.daquin@open.ac.uk
    http://people.kmi.open.ac.uk/mathieu