Ontology Evaluation
across the Life Cycle
Key findings from a four-month
collaborative exploration
Amanda Vizedom, Ph.D.
June 3, 2013
amanda.vizedom@gmail.com
@ajvizedom
Problem & Approach
• General state of the field:
– No consensus on how to evaluate ontologies, and
– not enough shared knowledge to ground such a consensus
– Pockets of knowledge & experience, but not enough sharing, and not enough understanding of their bounds of applicability
• Practical upshot:
– Very little ontology evaluation
– Even less effective ontology evaluation
• Serious consequences:
– Poor quality ontologies
– Poor fit between ontologies used and intended usages
– No effective test of semantic solutions & the ontologies they incorporate until much too far into the effort
– No way to accurately trace successes and failures to ontologies vs. other system components
– Additional consequences for distinguishing effective and ineffective ontology techniques,
personnel, management, etc.
• Approach taken over Ontology Summit 2013*
– Some people report good results from applying Software Engineering and Systems Engineering approaches to ontologies. How do they do so? How well does it work? What can we learn by looking at ontology evaluation through these lenses?
*Co-organized by Ontolog, NIST, NCOR, NCBO, IAOA & NCO_NITRD
http://ontolog.cim3.net/OntologySummit/2013/
Progress and Recommendations
• Ontology Life Cycle Model
– An ontology life cycle model that identifies recognizable, recurring phases in the lives of ontologies
– Phases = clusters of activities around recognizable inputs, outputs, and goals
– No single sequence, timing, etc., fits or should fit all ontologies,
– BUT patterns of dependencies between phases enable transformation of general requirements into phase-appropriate requirements!
– Evaluation throughout the life cycle, and against those requirements, can and should be practiced (see the sketch below).
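As a rough illustration of that last point, here is a minimal sketch, in Python, of how one general requirement can be refined into phase-appropriate, checkable criteria. The phase names and criteria are assumptions invented for the example, not the Summit's formal model.

# Illustrative sketch only: phase names and criteria are assumptions
# for the example, not the Summit's formal life cycle model.
GENERAL_REQUIREMENT = "Cover all terms used in the competency questions"

# The same general requirement, refined into a checkable criterion per phase.
PHASE_CRITERIA = {
    "requirements development": "Competency questions collected and signed off by domain experts",
    "design": "Every competency-question term mapped to a planned class or property",
    "build": "Every mapped term present in the ontology, with a label and definition",
    "deployment": "Every competency question answerable by a query against the deployed ontology",
}

for phase, criterion in PHASE_CRITERIA.items():
    print(f"{phase}: {GENERAL_REQUIREMENT} -> {criterion}")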
Progress and Recommendations (2)
• Requirements Identification is key
– Ontologies are not one-size-fits-all, in either scale or capabilities
– Ontology requirements depend on a range of usage factors
– To skip careful & informed requirements development is to dramatically reduce the odds of successful ontology development and/or deployment
• High-Level Characteristics
– The huge number of potential requirements can be approached by considering a smaller number of high-level characteristics that almost always matter, then seeing how they apply to the specific case and/or phase
– We identified & focused on these: intelligibility, fidelity, craftsmanship, fitness, and deployability (illustrated in the sketch below)
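To make those characteristics concrete, here is one possible unpacking of each into an example check, again as a hedged Python sketch: the five characteristics come from the Summit, but the checks paired with them are assumptions chosen for illustration.

# Illustrative only: the five characteristics are the Summit's; the example
# checks paired with them here are assumptions for the sketch.
CHARACTERISTIC_EXAMPLE_CHECKS = {
    "intelligibility": "every class and property has a human-readable label and definition",
    "fidelity": "sampled assertions reviewed and accepted by domain experts",
    "craftsmanship": "reasoner finds no inconsistencies; modeling patterns applied uniformly",
    "fitness": "competency questions for the intended usage are answerable",
    "deployability": "loads and performs acceptably in the target reasoner or triple store",
}

for characteristic, check in CHARACTERISTIC_EXAMPLE_CHECKS.items():
    print(f"{characteristic}: e.g., {check}")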
• Integration into Systems Quality Management discipline
– Too often, whole semantic systems are managed with disciplined requirements specification, test development, frequent and phase-appropriate testing and evaluation, issue tracking, and unit testing covering every aspect of the semantic system *except the ontology* (a test sketch follows)
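By way of illustration, an ontology check can look just like any other unit test in the system's suite. Below is a minimal sketch using rdflib under pytest conventions; the ontology file name and the "every class has a label" requirement are assumptions for the example.

# Sketch of an ontology unit test, runnable under pytest.
# Assumes rdflib is installed; "my_ontology.ttl" is a hypothetical file.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

def test_every_class_has_a_label():
    g = Graph()
    g.parse("my_ontology.ttl", format="turtle")  # hypothetical ontology file
    unlabeled = [
        cls for cls in g.subjects(RDF.type, OWL.Class)
        if g.value(cls, RDFS.label) is None
    ]
    assert not unlabeled, f"Classes missing rdfs:label: {unlabeled}"

Checks like this can live in the same continuous-integration pipeline and issue tracker as the rest of the system's tests, closing the gap the bullet above describes.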
• Tools
– There has been a recent, rapid acceleration in tools & techniques for ontology evaluation.
– We need more, and we need whole-life-cycle tools that incorporate evaluation, requirements, test development, etc., into native ontology management environments (IDEs, repositories)
Action Item and More Details
• Much more detail, and references to supporting materials, are in the
Ontology Summit 2013 Communiqué: Towards Ontology Evaluation
across the Life Cycle, available at:
http://ontolog.cim3.net/cgi-bin/wiki.pl?OntologySummit2013_Communique
• The call for endorsements is open until 15 June 2013
• The number of endorsements has already surpassed that of past years’ Communiqués, but don’t stop! Numbers, names, and varied spheres of influence strengthen the message and its impact.
• If you want to see more and better evaluation, results-focus, and
engineering discipline in ontologies and semantic technologies,
please read the full Communiqué and consider adding your voice by
endorsing.
