2010 CRC PhD Student Conference
Generating Accessible Natural Language Explanations for OWL
Tu Anh Nguyen
Supervisors: Richard Power
Department/Institute: Computing Department
Probation viva: before
Starting date: October 2009
This research aims to develop a computational approach to generating accessible natural
language explanations for entailments in OWL ontologies. Its purpose is to support
non-specialists, people who are not experts in description logic or formal ontology
languages, in understanding why an inference or an inconsistency follows from an ontology.
This would further improve the ability of users to successfully debug, diagnose and
repair their ontologies. The research is linked to the Semantic Web Authoring Tool (SWAT)
project, an ongoing project that aims to provide a natural language interface for ordinary
users to encode knowledge on the semantic web. The research questions are:
• Do justifications for entailments in OWL ontologies conform to a relatively small
number of common abstract patterns, so that the problem can be generalised to
generating explanations by pattern?
• For a given entailment and its justification, how can we produce an explanation in
natural language that is accessible to non-specialists?
An ontology is a formal, explicit specification of a shared conceptualisation [6]. An ontology
language is a formal language used to encode ontologies. The Web Ontology Language,
OWL [8], is a widely used ontology language based on description logic. Since OWL became
a W3C standard, there has been a remarkable increase in the number of people trying to
build and use OWL ontologies. Editing environments such as Protégé [15] and Swoop [13]
were developed to support users in editing and creating OWL ontologies.
As ontologies have begun to be widely used in real-world applications and more expressive
ontologies have been required, there is a significant demand for editing environments that
provide more sophisticated editing and browsing services for debugging and repairing. In
addition to performing the standard description logic reasoning services, namely sat-
isfiability checking and subsumption testing, description logic reasoners such as FaCT++
[22] and Pellet [20] can compute entailments (e.g., inferences) to improve users' com-
prehension of their ontologies. However, without some kind of explanation,
it can be very difficult for users to figure out why entailments are derived from ontologies.
The generation of justifications for entailments has proven enormously helpful for identi-
fying and correcting mistakes in ontologies. Kalyanpur and colleagues defined a
justification for an entailment of an ontology as the precise subset of logical axioms from
the ontology that are responsible for the entailment to hold [11]. They also presented
a user study showing that the availability of justifications had a remarkable positive impact
on the ability of users to debug and repair their ontologies [11]. Justifications have also
recently been used for debugging very large ontologies such as SNOMED [1], whose size
makes manual debugging and repair infeasible.
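To make the notion of a justification concrete, the sketch below finds all minimal subsets of SubClassOf axioms that entail a given subsumption. The transitive-closure "reasoner", the brute-force subset search, and the class names are all illustrative simplifications; real systems use an optimised DL reasoner and far cleverer search.

```python
from itertools import combinations

def entails(axioms, sub, sup):
    """Check whether the SubClassOf axioms entail sub ⊑ sup,
    using transitive closure as a toy stand-in for a DL reasoner."""
    reachable = {sub}
    changed = True
    while changed:
        changed = False
        for (a, b) in axioms:
            if a in reachable and b not in reachable:
                reachable.add(b)
                changed = True
    return sup in reachable

def justifications(axioms, sub, sup):
    """Return all minimal subsets of axioms entailing sub ⊑ sup.
    Iterating by increasing size guarantees minimality: any larger
    entailing subset containing an already-found justification is skipped."""
    found = []
    for size in range(1, len(axioms) + 1):
        for subset in combinations(axioms, size):
            if entails(subset, sub, sup) and \
               not any(set(j) <= set(subset) for j in found):
                found.append(subset)
    return found

# A toy ontology with two independent reasons why Cat ⊑ Animal holds.
ontology = [("Cat", "Pet"), ("Pet", "Animal"),
            ("Cat", "Mammal"), ("Mammal", "Animal")]
for j in justifications(ontology, "Cat", "Animal"):
    print(j)
```

Here the entailment Cat ⊑ Animal has two justifications, via Pet and via Mammal; presenting either one alone is sufficient to explain the entailment.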
There are several recent studies into capturing justifications for entailments in OWL ontolo-
gies [12, 21, 9]. Nevertheless, OWL is a semantic markup language based on RDF and XML,
languages that are oriented toward machine processability rather than human readability.
Moreover, while a justification gathers together the axioms, or premises, sufficient for an
entailment to hold, it is left to the reader to work out how these premises interact with
each other to give rise to the entailment in question. Many users may therefore struggle
to understand how a justification supports an entailment, since they are either unfamiliar
with OWL syntax and semantics or lack knowledge of the logic underpinning the
ontology. In other words, the ability of users to work out how an entailment arises from a
justification currently depends on their understanding of OWL and description logic.
In recent years, the development of ontologies has been moving from "the realm of artificial
intelligence laboratories to the desktops of domain experts", who have insightful knowledge
of some domain but no expertise in description logic or formal ontology languages [14].
It is for this reason that the desire to open up OWL ontologies to a wide non-specialist
audience has emerged. Wide access to OWL ontologies depends on the development of
editing environments that use some transparent medium, and natural language
(e.g., English, Italian) text is an appropriate choice, since it can be easily comprehended by
the public without training. Rector and colleagues observed common problems that users
frequently encounter in understanding the logical meaning and inferences when working
with OWL-DL ontologies, and expressed the need for a "pedantic but explicit" paraphrase
language to help users grasp the accurate meaning of logical axioms in ontologies [18].
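As an illustration of what such a paraphrase might look like, a minimal template-based verbaliser could map a few DL axiom shapes to English sentences. The axiom encoding and templates below are hypothetical and far simpler than any real verbaliser:

```python
def verbalise(axiom):
    """Render a few DL axiom shapes as English sentences.
    Illustrative templates only; a real verbaliser handles
    articles, plurals, and many more axiom types."""
    kind = axiom[0]
    if kind == "SubClassOf":            # C ⊑ D
        return f"Every {axiom[1]} is a {axiom[2]}."
    if kind == "DisjointClasses":       # C ⊓ D ⊑ ⊥
        return f"No {axiom[1]} is a {axiom[2]}."
    if kind == "SubClassOfSome":        # C ⊑ ∃r.D
        _, c, r, d = axiom
        return f"Every {c} {r} some {d}."
    return str(axiom)                   # fall back to raw form

for ax in [("SubClassOf", "cat", "pet"),
           ("DisjointClasses", "cat", "dog"),
           ("SubClassOfSome", "cat", "eats", "food")]:
    print(verbalise(ax))
```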
Several research groups have proposed interfaces for encoding knowledge in semantics-based
Controlled Natural Languages (CNLs) [19, 4, 10]. These systems allow users to input sen-
tences conforming to a CNL, then parse and transform them into statements in formal
ontology languages. The SWAT project introduces an alternative approach based on
Natural Language Generation: users specify the content of an ontology by "di-
rectly manipulating on a generated feedback text" rather than through text interpretation,
thereby "editing ontologies on the level of meaning, not text".
The interfaces mentioned above are clearly designed to let non-specialists build up
ontologies without working directly with formal languages and description logic. How-
ever, providing more advanced editing and browsing services on these interfaces
to support debugging and repair has not yet been investigated. Despite the
usefulness of justifications presented as sets of OWL axioms, understanding why
entailments or inconsistencies are drawn from ontologies remains a key problem
for non-specialists. Even for specialists, a more user-friendly view of an ontology with
accessible explanations can be very helpful. This project therefore seeks to develop a compu-
tational approach to generating accessible natural language explanations for entailments in
OWL ontologies, in order to assist users in debugging and repairing their ontologies.
The research approach is to identify common abstract patterns of justifications for entail-
ments in OWL ontologies. Having identified such patterns, we will focus on generating
accessible explanations in natural language for the most frequently used ones. A prelim-
inary study to work out the most common justification patterns has been carried out. A
corpus of eighteen real, published OWL ontologies of different expressivity has been
collected from the Manchester TONES repository. In addition, the practical module devel-
oped by Matthew Horridge, based on research into finding all justifications for OWL-DL
ontologies [12, 7], has been used. Justifications are computed and then analysed to work out
the most common patterns. Results from the study show that, of the 6,772 justifications
collected, more than 70 per cent belong to the top 20 patterns. A study on
a larger and more general ontology corpus will be carried out next. Moreover, a
user study is planned to investigate whether non-specialists perform better on a task when
reading accessible explanations rather than justifications in the form of OWL axioms.
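The abstraction step behind such pattern counting can be sketched as follows: replacing concrete class names with variables, in order of first appearance, makes structurally identical justifications fall into the same pattern, which can then be tallied. This is an illustrative simplification (a full treatment would also canonicalise axiom order and handle all axiom types):

```python
from collections import Counter
from itertools import count

def abstract_pattern(justification, entailment):
    """Map class names to variables C0, C1, ... so that structurally
    identical (justification, entailment) pairs yield the same key."""
    names = {}
    fresh = count()
    def var(name):
        if name not in names:
            names[name] = f"C{next(fresh)}"
        return names[name]
    ent = tuple(var(n) for n in entailment)
    just = tuple(tuple(var(n) for n in ax) for ax in justification)
    return (just, ent)

# Three toy justifications; the first two instantiate the same
# two-step subsumption-chain pattern with different class names.
justs = [
    ([("Cat", "Pet"), ("Pet", "Animal")], ("Cat", "Animal")),
    ([("Car", "Vehicle"), ("Vehicle", "Thing")], ("Car", "Thing")),
    ([("Dog", "Pet")], ("Dog", "Pet")),
]
freq = Counter(abstract_pattern(j, e) for j, e in justs)
print(freq.most_common(1)[0][1])   # the chain pattern occurs twice
```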
The research on how to create explanations accessible to non-logicians is informed by studies
of proof presentation. In Natural Deduction [5], how a conclusion is derived from a set of
premises is represented as a series of intermediate statements linking the premises to
the conclusion. While this approach makes it easy for users to understand how to move
from one step to the next, it can make it difficult to understand how those steps link
together to form the overall picture of the proof. Structured derivations [2], a top-down
calculational proof format that allows inferences to be presented at different levels of detail,
are an alternative approach to presenting proofs, proposed by researchers
as a method for teaching rigorous mathematical reasoning [3]. Whether using
structured derivations would improve the accessibility of explanations, and
where and how intermediate inferences should be added, is currently being investigated.
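For instance, a chain of subsumption premises can be verbalised with the intermediate conclusions made explicit, in the spirit of a natural-deduction-style presentation. The wording templates and example classes here are purely illustrative:

```python
def explain_chain(chain):
    """Verbalise a chain of SubClassOf premises, inserting the
    intermediate conclusions a reader would otherwise have to
    work out for themselves (illustrative wording only)."""
    sub = chain[0][0]
    # First state every premise of the justification...
    steps = [f"Premise {i + 1}: every {a} is a {b}."
             for i, (a, b) in enumerate(chain)]
    # ...then walk the chain, stating each intermediate conclusion.
    current = chain[0][1]
    for _, nxt in chain[1:]:
        steps.append(f"Since every {sub} is a {current} and every "
                     f"{current} is a {nxt}, every {sub} is a {nxt}.")
        current = nxt
    return steps

for line in explain_chain([("cat", "pet"), ("pet", "mammal"),
                           ("mammal", "vertebrate")]):
    print(line)
```

A structured-derivations-style presentation could additionally fold the intermediate steps away and reveal them on demand, which is one of the design questions under investigation.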
Since the desire to open up OWL ontologies to a wide non-specialist audience emerged,
several research groups have proposed interfaces for encoding knowledge in semantics-based
CNLs. However, providing debugging and repair services on these inter-
faces has not yet been investigated. This research therefore seeks to develop a computational
approach to generating accessible explanations that help users understand why an entail-
ment follows from a justification. The work includes identifying common abstract jus-
tification patterns and investigating how to generate explanations accessible to non-specialists.
[1] F. Baader and B. Suntisrivaraporn. Debugging SNOMED CT Using Axiom Pinpointing
in the Description Logic EL+. In KR-MED, 2008.
[2] R. Back, J. Grundy, and J. von Wright. Structured Calculational Proof. Technical
report, The Australian National University, 1996.
[3] R.-J. Back and J. von Wright. A Method for Teaching Rigorous Mathematical Rea-
soning. In ICTMT4, 1999.
[4] A. Bernstein and E. Kaufmann. GINO - A Guided Input Natural Language Ontology
Editor. In ISWC, 2006.
[5] G. Gentzen. Untersuchungen über das logische Schließen. II. Mathematische Zeitschrift,
[6] T. R. Gruber. A translation approach to portable ontology specifications. Knowledge
Acquisition, 5:199–220, 1993.
[7] M. Horridge, B. Parsia, and U. Sattler. Laconic and Precise Justifications in OWL. In
ISWC, pages 323–338, 2008.
[8] I. Horrocks, P. F. Patel-Schneider, and F. van Harmelen. From SHIQ and RDF to
OWL: The Making of a Web Ontology Language. J. Web Semantics, 1:7–26, 2003.
[9] Q. Ji, G. Qi, and P. Haase. A Relevance-Directed Algorithm for Finding Justifications
of DL Entailments. In ASWC, pages 306–320, 2009.
[10] K. Kaljurand and N. E. Fuchs. Verbalizing OWL in Attempto Controlled English. In
[11] A. Kalyanpur. Debugging and repair of OWL ontologies. PhD thesis, University of
[12] A. Kalyanpur, B. Parsia, M. Horridge, and E. Sirin. Finding All Justifications of OWL
DL Entailments. In ISWC, 2007.
[13] A. Kalyanpur, B. Parsia, E. Sirin, B. Cuenca-Grau, and J. A. Hendler. Swoop: A Web
Ontology Editing Browser. Journal of Web Semantics, 4:144–153, 2006.
[14] N. F. Noy and D. L. McGuinness. Ontology Development 101: A Guide to Creating
Your First Ontology. Technical report, Stanford University, 2001.
[15] N. F. Noy, M. Sintek, S. Decker, M. Crubézy, R. W. Fergerson, and M. A. Musen.
Creating Semantic Web Contents with Protégé-2000. IEEE Intell. Syst., 16:60–71,
[16] R. Power. Towards a generation-based semantic web authoring tool. In ENLG, pages
[17] R. Power, R. Stevens, D. Scott, and A. Rector. Editing OWL through generated CNL.
In CNL, 2009.
[18] A. Rector, N. Drummond, M. Horridge, J. Rogers, H. Knublauch, R. Stevens, H. Wang,
and C. Wroe. OWL Pizzas: Practical Experience of Teaching OWL-DL: Common
Errors & Common Patterns. In EKAW, 2004.
[19] R. Schwitter and M. Tilbrook. Controlled Natural Language meets the Semantic Web.
In ALTW, pages 55–62, 2004.
[20] E. Sirin, B. Parsia, B. C. Grau, A. Kalyanpur, and Y. Katz. Pellet: A practical
OWL-DL reasoner. Journal of Web Semantics, 5:51–53, 2007.
[21] B. Suntisrivaraporn, G. Qi, Q. Ji, and P. Haase. A Modularization-based Approach to
Finding All Justifications for OWL DL Entailments. In ASWC, pages 1–15, 2008.
[22] D. Tsarkov and I. Horrocks. FaCT++ Description Logic Reasoner: System Description.
In IJCAR, volume 4130, pages 292–297, 2006.