1st SEALS evaluation campaign results: a worldwide evaluation of semantic technologies

  • Speaker notes (platform architecture): The SEALS Platform is composed of seven components: four vertical components, each devoted to the management of one kind of SEALS entity, and two horizontal components, one devoted to the integration and coordination of the other components and the other to the execution of evaluations. The last component is the SEALS Portal, which is the UI of the platform. The components relate to each other as follows: users use the SEALS Portal to submit entity management requests to the SEALS Service Manager, which redirects them to the appropriate entity management component. Users also use the portal to submit evaluation requests to the SEALS Service Manager, which forwards them to the Runtime Evaluation Service.
  • Transcript

    • 1. Results of the first worldwide evaluation campaign for semantic tools Dr Lyndon Nixon, STI International On behalf of the SEALS Project http://www.seals-project.eu 06.09.11
    • 2. Overview
      • SEALS Project
      • SEALS Platform
      • SEALS Evaluation Campaign: the first worldwide campaign evaluating semantic tools
        • Results
        • Coming soon: White paper
        • The next campaign in 2011
    • 3. SEALS (Semantic Evaluation At Large Scale)
      • Partners: Universidad Politécnica de Madrid, Spain (Coordinator); University of Sheffield, UK; Forschungszentrum Informatik, Germany; University of Innsbruck, Austria; Institut National de Recherche en Informatique et en Automatique, France; University of Mannheim, Germany; University of Zurich, Switzerland; STI International, Austria; Open University, UK; Oxford University, UK
      • Contact person: Asunción Gómez Pérez <asun@fi.upm.es>
      • http://www.seals-project.eu/
      • EC contribution: 3.500.000 €
      • Duration: June 2009 - May 2012
    • 4. SEALS Objectives
      • The SEALS Platform
      • A lasting reference infrastructure for semantic technology evaluation
      • The evaluations to be executed on-demand at the SEALS Platform
      • The SEALS Evaluation Campaigns
      • Two public evaluation campaigns including the best-in-class semantic technologies:
        • Ontology engineering tools
        • Ontology storage and reasoning systems
        • Ontology matching tools
        • Semantic search tools
        • Semantic Web Service tools
      • Semantic technology roadmaps
      • The SEALS Community
      • Around the evaluation of semantic technologies
      [Diagram: the objectives above are grouped into Service Activities, Research Activities and Networking Activities]
    • 5. The SEALS Platform
      • Provides the infrastructure for evaluating semantic technologies
      • Open (everybody can use it)
      • Scalable (to users, data size)
      • Extensible (to more tests, different technology, more measures)
      • Sustainable (beyond SEALS)
      • Independent (unbiased)
      • Repeatable (evaluations can be reproduced)
      • A platform for remote evaluation of semantic technology:
      • Ontology engineering tools
      • Storage systems and reasoners
      • Ontology matching
      • Semantic search
      • Semantic web services
      • According to criteria:
      • Interoperability
      • Scalability
      • Specific measures (e.g., completeness of query answers, matching precision)
    • 6. Overall SEALS Platform Architecture
      • Components: the SEALS Portal, the SEALS Service Manager, the Runtime Evaluation Service, and the SEALS Repositories (Test Data, Tools, Results and Evaluation Descriptions Repository Services)
      • Actors: Technology Providers, Evaluation Organisers, Technology Adopters, and software agents (i.e., technology evaluators)
      • Request types: entity management requests and evaluation requests
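      The speaker note at the top of this transcript describes how requests flow through these components. The following Java sketch illustrates that routing in the abstract; every type and method name here is invented for illustration and none of it is taken from the actual SEALS code base.

        // Hypothetical sketch of the request routing described above.
        // None of these type or method names come from the actual SEALS code base.
        import java.util.Map;

        interface RepositoryService {                      // one per SEALS entity: test data, tools, results, evaluation descriptions
            void store(String entityId, byte[] payload);   // entity management request
        }

        interface RuntimeEvaluationService {
            String execute(String evaluationDescriptionId, String toolId);  // returns the id of a results entity
        }

        // The Service Manager coordinates the other components: entity management
        // requests are redirected to the matching repository service, evaluation
        // requests are forwarded to the Runtime Evaluation Service. The Portal is
        // the UI through which users issue both kinds of request.
        class SealsServiceManager {
            private final Map<String, RepositoryService> repositories;  // keyed by entity kind
            private final RuntimeEvaluationService runtime;

            SealsServiceManager(Map<String, RepositoryService> repositories,
                                RuntimeEvaluationService runtime) {
                this.repositories = repositories;
                this.runtime = runtime;
            }

            void handleEntityRequest(String entityKind, String entityId, byte[] payload) {
                repositories.get(entityKind).store(entityId, payload);  // to the appropriate repository
            }

            String handleEvaluationRequest(String evaluationId, String toolId) {
                return runtime.execute(evaluationId, toolId);           // to the Runtime Evaluation Service
            }
        }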
    • 7. The SEALS Evaluation Campaigns 2010 (next evaluation campaigns during 2011 - 2012)
      • Ontology Engineering Tools: OET Conformance 2010 (conformance); OET Interoperability 2010 (interoperability); OET Scalability 2010 (efficiency, scalability)
      • Reasoning Tools: DLBS Classification 2010; DLBS Class satisfiability 2010; DLBS Ontology satisfiability 2010; DLBS Logical entailment 2010
      • Ontology Matching Tools: MT Benchmark 2010 (conformance, efficiency, interoperability); MT Anatomy 2010 (conformance, efficiency, interoperability); MT Conference 2010 (conformance, efficiency, interoperability, alignment coherence)
      • Semantic Search Tools: SST Automated Search Performance 2010 (search quality); SST Automated Performance and Scalability 2010 (resource consumption); SST Automated Query Expressiveness 2010 (query expressiveness); SST Automated Quality of Documentation 2010 (usability); SST User Usability 2010 (usability); SST User Query Expressiveness 2010 (query expressiveness)
      • Semantic Web Service Tools: SWS Tool Discovery Evaluation 2010 (performance)
    • 8. Ontology Engineering Tools
      • Goal: To evaluate the ontology management capabilities of ontology engineering tools
      • Evaluation services for:
        • Conformance
        • Interoperability
        • Scalability
      • Test data:
        • RDF(S) Import Test Suite
        • OWL Lite Import Test Suite
        • OWL DL Import Test Suite
        • OWL Full Import Test Suite
        • Scalability test data
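      The conformance scenario above amounts to importing a test-suite ontology into the tool under test and checking what survives an export/import round trip (interoperability essentially chains such steps across two different tools). Below is a minimal sketch of that idea using the OWL API, one of the evaluated frameworks; the file names are placeholders and this is not the SEALS test harness itself.

        // Illustrative conformance check: import a test ontology with the OWL API,
        // re-export it, and report how much of it survives the round trip.
        import java.io.File;
        import org.semanticweb.owlapi.apibinding.OWLManager;
        import org.semanticweb.owlapi.model.*;

        public class ConformanceCheck {
            public static void main(String[] args) throws Exception {
                File input = new File("import-test-case.owl");   // placeholder test-suite file
                File output = new File("roundtrip.owl");

                OWLOntologyManager in = OWLManager.createOWLOntologyManager();
                OWLOntology original = in.loadOntologyFromOntologyDocument(input);
                in.saveOntology(original, IRI.create(output.toURI()));  // export again

                OWLOntologyManager back = OWLManager.createOWLOntologyManager();
                OWLOntology reimported = back.loadOntologyFromOntologyDocument(output);

                // Conformance: which axioms of the test case are preserved by the round trip?
                long kept = original.getAxioms().stream().filter(reimported::containsAxiom).count();
                System.out.printf("%d of %d axioms preserved%n", kept, original.getAxiomCount());
            }
        }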
    • 9. Ontology Engineering Results
      • Three ontology management frameworks (Jena, the OWL API and Sesame) and three ontology editors (the NeOn Toolkit, Protégé 4 and Protégé OWL) were evaluated
      Full results at http://www.seals-project.eu
    • 10. Storage and Reasoning Systems
      • Goal: Evaluating the interoperability and performance of DLBSs (description logic-based systems)
        • Standard reasoning services
          • Classification
          • Class satisfiability
          • Ontology satisfiability
          • Logical entailment
      • Evaluation Criteria
        • Interoperability
        • Performance
      • Metrics
        • The number of tests passed without I/O errors
        • Time
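      All four standard reasoning services can be exercised through the OWL API reasoner interface, which is roughly how such a benchmark drives the systems. The sketch below uses HermiT, one of the participating reasoners; the ontology and class IRIs are placeholders, and only the time metric is hinted at.

        // Minimal sketch of the four standard reasoning services via the OWL API
        // reasoner interface, here backed by HermiT. IRIs are placeholders.
        import java.io.File;
        import org.semanticweb.HermiT.ReasonerFactory;
        import org.semanticweb.owlapi.apibinding.OWLManager;
        import org.semanticweb.owlapi.model.*;
        import org.semanticweb.owlapi.reasoner.*;

        public class ReasoningCheck {
            public static void main(String[] args) throws Exception {
                OWLOntologyManager m = OWLManager.createOWLOntologyManager();
                OWLOntology ont = m.loadOntologyFromOntologyDocument(new File("test-ontology.owl"));
                OWLDataFactory df = m.getOWLDataFactory();

                long start = System.currentTimeMillis();
                OWLReasoner reasoner = new ReasonerFactory().createReasoner(ont);

                reasoner.precomputeInferences(InferenceType.CLASS_HIERARCHY);      // classification
                boolean consistent = reasoner.isConsistent();                      // ontology satisfiability
                OWLClass c = df.getOWLClass(IRI.create("http://example.org/onto#SomeClass"));
                boolean satisfiable = reasoner.isSatisfiable(c);                   // class satisfiability
                OWLAxiom query = df.getOWLSubClassOfAxiom(c, df.getOWLThing());
                boolean entailed = reasoner.isEntailed(query);                     // logical entailment

                System.out.printf("consistent=%b satisfiable=%b entailed=%b (%d ms)%n",
                        consistent, satisfiable, entailed, System.currentTimeMillis() - start);
            }
        }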
    • 11. Storage and Reasoning Results
      • Tools
        • HermiT, FaCT++ and jcel
      • The DLBSs have shown an acceptable level of performance
      • Most errors were related to features not supported in the current implementations.
        • e.g. jcel does not support entailment inference
      • Full results at http://www.seals-project.eu
    • 12. Matching Tools
      • Goals:
        • To evaluate the competence of matching systems with respect to different evaluation criteria.
        • To demonstrate the feasibility and benefits of automating matching evaluation.
      • Criteria
        • Conformance
          • standard precision and recall
          • restricted semantic precision and recall
          • alignment coherence
        • Efficiency
          • runtime
          • memory consumption
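      Standard precision and recall treat an alignment as a set of correspondences and compare it with a reference alignment. Below is a minimal sketch with correspondences reduced to plain strings and made-up example data; restricted semantic precision/recall and alignment coherence additionally need a reasoner and are not shown.

        // Standard precision and recall of a computed alignment against a reference
        // alignment, each correspondence encoded as a "source|target|relation" string.
        import java.util.HashSet;
        import java.util.Set;

        public class AlignmentScores {
            static double[] precisionRecall(Set<String> computed, Set<String> reference) {
                Set<String> correct = new HashSet<>(computed);
                correct.retainAll(reference);                 // true positives
                double precision = computed.isEmpty() ? 0.0 : (double) correct.size() / computed.size();
                double recall = reference.isEmpty() ? 0.0 : (double) correct.size() / reference.size();
                return new double[] { precision, recall };
            }

            public static void main(String[] args) {
                Set<String> reference = Set.of("A|X|=", "B|Y|=", "C|Z|=");   // made-up reference alignment
                Set<String> computed  = Set.of("A|X|=", "B|W|=");            // made-up system output
                double[] pr = precisionRecall(computed, reference);
                System.out.printf("precision=%.2f recall=%.2f%n", pr[0], pr[1]);  // 0.50 and 0.33
            }
        }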
    • 13. Matching Tools Results
        • 13 participants
        • 11 for the benchmark
        • ASMOV and RiMOM ahead, with AgrMaker as a close follower
    • 14. Semantic Search
      • Goals
        • Benchmark effectiveness of search tools
        • Emphasis on tool usability, since search is an inherently user-centered activity.
        • Still interested in automated evaluation for other aspects
        • Two phase approach:
          • Automated evaluation : runs on SEALS Platform
          • User-in-the-loop : human experiment
        • Criteria for User-centred search:
          • Query expressiveness
          • Usability (effectiveness, efficiency, satisfaction)
          • Scalability
          • Interoperability
    • 15. Semantic Search Results
      • Full results on http://www.seals-project.eu
      • Tools evaluated (UITL = user-in-the-loop phase, Auto = automated phase):
        • K-Search (UITL, Auto): allows flexible searching of semantic concepts in ontologies and documents using a form-based interface.
        • Ginseng (UITL, Auto): Guided Input Natural Language Search Engine, a natural language interface question answering system.
        • NLP-Reduce (UITL, Auto): a natural language query interface that allows its users to enter full English questions, sentence fragments, and keywords.
        • Jena ARQ (Auto): a query engine for Jena that supports the SPARQL RDF query language; used as a 'baseline' for the automated phase.
        • PowerAqua (UITL, Auto): an open multi-ontology Question Answering (QA) system for the Semantic Web (SW) using a Natural Language (NL) user interface.
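      In the automated phase the tools answer queries over test data; Jena ARQ served as the baseline, as noted in the list above. Below is a minimal sketch of issuing a SPARQL SELECT query with ARQ, with a placeholder dataset file and query; the SEALS evaluation additionally measures answer quality and execution time.

        // Minimal Jena ARQ usage: load a dataset and run a SPARQL SELECT query.
        // The file name and query are placeholders.
        import org.apache.jena.query.*;
        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;

        public class ArqBaseline {
            public static void main(String[] args) {
                Model model = ModelFactory.createDefaultModel();
                model.read("test-dataset.owl");              // placeholder test dataset

                String sparql = "SELECT ?s WHERE { ?s a <http://example.org/geo#State> } LIMIT 10";
                try (QueryExecution qexec = QueryExecutionFactory.create(sparql, model)) {
                    ResultSet results = qexec.execSelect();
                    while (results.hasNext()) {
                        System.out.println(results.next().get("s"));
                    }
                }
            }
        }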
    • 16. Semantic Web Services
      • Goal: To evaluate Semantic Web Service discovery
      • Test data:
        • OWLS Test Collection (OWLS-TC)
        • SAWSDL Test Collection (SAWSDL-TC)
        • Seekda Services
        • OPOSSum Services
    • 17. Semantic Web Services Results
      • Full results will be published online.
      • A number of queries/datasets show a bias towards OWLS-MX (recall = 1.0)
      • Inconclusive between semantic and syntactic settings
      • Variants (different matching algorithms) behave the same for the same settings!
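      Discovery evaluation of this kind boils down to scoring, for each query, the ranked list of services a tool returns against the relevant services defined in a test collection such as OWLS-TC. The sketch below shows recall and average precision for a single query with made-up service identifiers; these are standard retrieval measures, not necessarily the exact metrics used in the SEALS campaign.

        // Recall and average precision of one discovery query: a ranked list of
        // returned services is scored against the set of relevant services.
        import java.util.List;
        import java.util.Set;

        public class DiscoveryScores {
            static double recall(List<String> ranked, Set<String> relevant) {
                long found = ranked.stream().filter(relevant::contains).count();
                return relevant.isEmpty() ? 0.0 : (double) found / relevant.size();
            }

            static double averagePrecision(List<String> ranked, Set<String> relevant) {
                double sum = 0.0;
                int hits = 0;
                for (int i = 0; i < ranked.size(); i++) {
                    if (relevant.contains(ranked.get(i))) {
                        hits++;
                        sum += (double) hits / (i + 1);   // precision at this rank
                    }
                }
                return relevant.isEmpty() ? 0.0 : sum / relevant.size();
            }

            public static void main(String[] args) {
                List<String> ranked = List.of("svc1", "svc4", "svc2", "svc7");  // made-up ranking
                Set<String> relevant = Set.of("svc1", "svc2", "svc3");          // made-up relevant set
                System.out.printf("recall=%.2f avgPrecision=%.2f%n",
                        recall(ranked, relevant), averagePrecision(ranked, relevant));
            }
        }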
    • 18. Coming soon: white paper
      • SEALS will summarize the results and conclusions from the first evaluation campaign in a white paper, due early 2011
      • Don't miss being informed when it is available – join the SEALS Community
      • http://www.seals-project.eu/join-the-community
    • 19. Coming next: 2nd campaign
      • The 2nd evaluation campaign will be launched in July 2011
      • Details will be available beforehand on the website
      • A world first – the next campaign will use the SEALS platform for remote evaluation
      • Campaigns are open for all – your semantic tool can and should participate
    • 20. Thank you for your attention
      • SEALS dissemination manager
      • Dr Lyndon Nixon [email_address]
      • SEALS evaluation campaigns
      • See their webpages at www.seals-project.eu
