Evaluating the Stability and Credibility of Ontology Matching Methods

Ontology Track @ ESWC2011
Transcript

  • 1. Evaluating the Stability and Credibility of Ontology Matching Methods Xing Niu, Haofen Wang, Gang Wu, Guilin Qi, and Yong Yu 2011.5.31
  • 2.
    • Introduction
    • Basic Concepts
      • Confidence Threshold, relaxedCT
      • Test Unit
    • Evaluation Measures and Their Usages
      • Comprehensive F-measure
      • STD score
      • ROC-AUC score
    • Conclusions and Future Work
    Agenda
  • 3. Introduction
    • Stability
      • Reference matches are scarce
      • Training proper parameters
      • High stability: a matching method performs consistently on data of different domains or scales
    • Credibility
      • Candidate matches sorted by their matching confidence values
      • High credibility: a matching method generates true positive matches with high confidence values while returning false positive ones with low values.
  • 4. Introduction (cont'd)
    • Judging basic compliance alone is not sufficient
      • Precision
      • Recall
      • F-measure
    • Measurements
      • Stability : Comprehensive F-measure and STD (STandard Deviation) score
      • Credibility : ROC-AUC (Area Under Curve) score
  • 5. Confidence Threshold
    • A match can be represented as a 5-tuple
    • Confidence Threshold (CT): candidate matches whose confidence values fall below CT are filtered out (a reconstruction of the slide's notation follows below)
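    The 5-tuple and the threshold appear on the slide only as a figure; a reconstruction in the notation commonly used for ontology matching (an assumption, not copied verbatim from the slide) is:

```latex
% Assumed notation: a match (correspondence) between two ontologies is a 5-tuple
%   id   -- identifier of the match
%   e, e' -- the matched entities from the first and second ontology
%   r    -- the relation asserted to hold between e and e' (e.g. equivalence)
%   n    -- the confidence value of the match
m = \langle id,\ e,\ e',\ r,\ n \rangle, \qquad n \in [0, 1]

% Applying a confidence threshold CT to an alignment A keeps only the matches
% whose confidence is not below the threshold:
A_{CT} = \{\, m \in A \mid n(m) \ge CT \,\}
```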
  • 6. Relaxed CT: [Figure comparing the confidence threshold CT with the relaxedCT]
  • 7. Test Unit
    • Test Unit
      • A set of similar datasets describing the same domain and sharing many resemblances, but differing in details (a small illustrative sketch follows this slide).
    • Examples
      • Benchmark 20X: structural information remains, while labels vary (another language, synonyms, naming conventions)
      • Conference Track: conference organization domain, ontologies built by different groups
      • Cyclopedia: same data source, different categories
      • Others: same data source, random N-folds
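    A minimal illustrative sketch of the test-unit idea (all names and fields below are hypothetical, not taken from the paper): a test unit groups several matching tasks that describe the same domain.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structures (names and fields are illustrative only):
# a matching task pairs two ontologies with a reference alignment,
# and a test unit groups similar tasks from one domain.
@dataclass
class MatchingTask:
    source_ontology: str       # path or URI of the first ontology
    target_ontology: str       # path or URI of the second ontology
    reference_alignment: str   # path to the gold-standard matches

@dataclass
class TestUnit:
    name: str                               # e.g. "Conference"
    domain: str                             # shared domain of all datasets
    tasks: List[MatchingTask] = field(default_factory=list)

# Example: a Conference-style test unit -- same domain, ontologies built by different groups.
conference = TestUnit(name="Conference", domain="conference organization")
conference.tasks.append(MatchingTask("cmt.owl", "ekaw.owl", "cmt-ekaw-reference.rdf"))
```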
  • 8. Comprehensive F-measure
    • Maximum F-measure
      • the maxF-measure, the best F-measure over all confidence thresholds, reflects the theoretically optimal matching quality of a matching method
    • Uniform F-measure
      • the uniF-measure simulates practical application, where one uniform confidence threshold is used, and thus evaluates the stability of a matching method.
    • Comprehensive F-measure: combines the maxF-measure and the uniF-measure (see the sketch below)
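    A minimal sketch of how the three F-measures could be computed, assuming maxF scans all candidate confidence values for the best threshold and uniF applies one uniform threshold across the datasets of a test unit; the final combination into the comprehensive F-measure is a placeholder (harmonic mean), since the paper's exact formula is not reproduced here.

```python
from typing import List, Set, Tuple

# A candidate match as (source entity, target entity, confidence value).
Match = Tuple[str, str, float]
Pair = Tuple[str, str]

def f_measure(found: Set[Pair], reference: Set[Pair]) -> float:
    """F-measure of the returned matches against the reference matches."""
    if not found or not reference:
        return 0.0
    tp = len(found & reference)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(found), tp / len(reference)
    return 2 * precision * recall / (precision + recall)

def f_at_threshold(candidates: List[Match], reference: Set[Pair], ct: float) -> float:
    """F-measure after filtering the candidates by a confidence threshold CT."""
    kept = {(s, t) for s, t, conf in candidates if conf >= ct}
    return f_measure(kept, reference)

def max_f(candidates: List[Match], reference: Set[Pair]) -> float:
    """maxF-measure: best F-measure over all candidate confidence values,
    i.e. the theoretical optimum reachable with a perfectly chosen threshold."""
    thresholds = {conf for _, _, conf in candidates} | {0.0}
    return max(f_at_threshold(candidates, reference, ct) for ct in thresholds)

def uni_f(datasets: List[Tuple[List[Match], Set[Pair]]], ct: float) -> float:
    """uniF-measure: average F-measure over a test unit's datasets when one
    uniform threshold CT is applied everywhere (simulating practical use)."""
    return sum(f_at_threshold(c, r, ct) for c, r in datasets) / len(datasets)

def com_f(max_f_value: float, uni_f_value: float) -> float:
    """Placeholder combination of maxF and uniF into a single comF score;
    the paper's exact formula is not reproduced here (harmonic-mean stand-in)."""
    if max_f_value + uni_f_value == 0.0:
        return 0.0
    return 2 * max_f_value * uni_f_value / (max_f_value + uni_f_value)
```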
  • 9. Usage of comF-measure
    • Draw histograms to find out ‘who’ holds the comF-measure back
    [Figure: Falcon-AO and RiMOM in the Benchmark 20X test (another language, synonyms)]
  • 10. Usage of comF-measure (con’t)
    • Use the comF-measure value
      • as an indicator of matching quality
      • as it reflects both theoretical and practical results
    • Use the comF-measure function
      • as the objective function of an optimization problem
      • as it conceals a multi-objective optimization problem
  • 11. STD Score
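    The STD-score formula appears on the slide only as an image. One plausible reconstruction, assuming the score is the standard deviation of the F-measures a method obtains on the N datasets of a test unit (lower means more stable), is:

```latex
% Assumption: F_i is the F-measure of the method on dataset i of a test unit
% containing N datasets, and \bar{F} is the mean of the F_i.
\mathrm{STD} \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(F_i - \bar{F}\bigr)^{2}},
\qquad
\bar{F} \;=\; \frac{1}{N}\sum_{i=1}^{N} F_i
```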
  • 12.
    • Reflects the stability of a matching method
    Usage of STD Score
    [Figure: Falcon-AO and Lily in the Benchmark test (lexical information, structural information)]
  • 13. ROC-AUC score
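    The ROC-AUC formula is shown on the slide only as an image. A minimal sketch of the credibility idea, assuming the ROC curve is built by sweeping down the confidence-ranked candidate matches and labelling each one as a true or false positive against the reference alignment (the helper below is illustrative, not the paper's implementation):

```python
from typing import List, Tuple

def roc_auc(candidates: List[Tuple[float, bool]]) -> float:
    """ROC-AUC of a list of (confidence, is_true_positive) candidate matches.

    A credible method assigns high confidences to true positives and low
    confidences to false positives, which pushes the AUC towards 1.0.
    """
    # Sort by decreasing confidence, i.e. the order in which a user
    # inspecting the ranked match list would see the candidates.
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    positives = sum(1 for _, is_tp in ranked if is_tp)
    negatives = len(ranked) - positives
    if positives == 0 or negatives == 0:
        return 0.0  # AUC is undefined without both classes

    tp = fp = 0
    prev_fpr = prev_tpr = 0.0
    auc = 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        fpr, tpr = fp / negatives, tp / positives
        # Trapezoidal rule between consecutive points of the ROC curve.
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2
        prev_fpr, prev_tpr = fpr, tpr
    return auc

# Toy example: true positives concentrated at high confidences -> high credibility.
print(roc_auc([(0.95, True), (0.90, True), (0.60, False), (0.40, True), (0.20, False)]))
```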
  • 14. Usage of ROC-AUC Score: [Figure: spider chart for the Conference test unit]
  • 15. Conclusions and Future Work
    • Conclusions
      • New evaluation measures
        • Comprehensive F-measure
        • STD score
        • ROC-AUC score
      • Deep analysis
    • Future Work
      • Extend our evaluation measures to a comprehensive strategy
      • Test more matching methods under other datasets
      • Make both stability and credibility as standard evaluation measures for ontology matching
  • 16. APEX Data & Knowledge Management Lab
