Formal ontologies have proved to be a very useful tool for managing interoperability among data, systems and knowledge. In this paper we show how formal ontologies can evolve from a crisp, deterministic framework (ontologies of hard knowledge) to new probabilistic, fuzzy or possibilistic frameworks (ontologies of soft knowledge). This can considerably enlarge the application potential of formal ontologies in geographic analysis and planning, where soft knowledge is intrinsically linked to the complexity of the phenomena under study.
The paper briefly presents these new uncertainty-based formal ontologies. It then highlights how ontologies are formal tools to define both concepts and relations among concepts. An example from the domain of urban geography finally shows how the cause-to-effect relation between household preferences and urban sprawl can be encoded within a crisp, a probabilistic and a possibilistic ontology, respectively. The ontology formalism will also determine the kind of reasoning that can be developed from available knowledge.
Uncertain ontologies can be seen as the preliminary phase of more complex uncertainty-based models. The advantages of moving to uncertainty-based models are evident: whether in the analysis of geographic space or in decision support for planning, reasoning on geographic space is almost always reasoning with uncertain knowledge of geographic phenomena.
Formal Ontologies and Uncertainty - INPUT2014
1. FORMAL ONTOLOGIES AND UNCERTAINTY
Matteo CAGLIONI, Giovanni FUSCO
Université de Nice Sophia Antipolis / CNRS, ESPACE UMR 7300
Eighth International Conference INPUT
Smart City - Planning for Energy, Transportation and Sustainability of the Urban System
Naples, June 4th-6th, 2014
3. Research on Uncertainty at UMR ESPACE
• PEPS HuMaIn 2014 (Geography – Computer Science)
• Interdisciplinary WG, University of Nice Sophia Antipolis
• WG within the ESPACE laboratory (Nice, Avignon, Aix/Marseille)
• COST TD1202 (VGI and mapping uncertainty)
Growing awareness of the importance of uncertainty in geographic knowledge.
4. Why Uncertainty?
Uncertainty is not a problem but a solution, helping to overcome several problems:
• The assumption that the real value is knowable (theory of measurement)
• Artefacts of a deterministic (or binary) logic
• The need for numbers to execute a model
• The need for numbers from the experts
• Model overestimation (calibration of model parameters)
• Precise values often used just for rough classifications
• Apparent precision fooling decision makers
5. Uncertainty in Geographic Information
But geographic information is not everything: from information to knowledge …
7. Ontology: definition
A term borrowed by Artificial Intelligence, in particular within the Theory of Knowledge.
In Computer Science, an ontology is an explicit and formal specification of a shared conceptualisation [Studer, 1998].
• EXPLICIT (concepts, their extent and their meanings are explicitly defined)
• FORMAL (machine-understandable)
• SHARED (knowledge based on a shared agreement within a group)
8. Ontology: content
• The domain objects (classes/instances): House, Street, Activity, Theatre, …
• Object properties: Surface, Length, …
• Relations among objects: IS_A, Near_To, Contain, …
Example: Opéra Garnier IS_A Theatre, Theatre IS_A Building, Av. de l'Opéra IS_A Street, and Av. de l'Opéra Lead_To Opéra Garnier.
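The three layers above (objects, properties, relations) can be sketched as plain data structures. This is only an illustrative Python sketch of the slide's example, not the OWL encoding used in the talk; the property values (surface, length) are invented.

```python
# A toy encoding of the slide's example (property values are invented).

# Domain objects (instances) with their class and properties.
objects = {
    "OperaGarnier": {"class": "Theatre", "surface_m2": 11000},  # invented value
    "AvDeLOpera":   {"class": "Street",  "length_m": 1100},     # invented value
}

# IS_A relations among classes (the taxonomy).
is_a = {"Theatre": "Building"}

# Other relations among instances.
relations = [("AvDeLOpera", "Lead_To", "OperaGarnier")]

def classes_of(name):
    """All classes of an object, following IS_A links upward."""
    c = objects[name]["class"]
    chain = [c]
    while c in is_a:
        c = is_a[c]
        chain.append(c)
    return chain
```

With this encoding, `classes_of("OperaGarnier")` walks the taxonomy and yields both Theatre and Building, mirroring the IS_A chain drawn on the slide.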
9. Ontology: formalisation
• Non-formal ontology: a protocol in natural language. Ex. City = agglomeration of population and non-agricultural activities.
• Semi-formal ontology: a graphic protocol (Concept – Instance – Value, with categorised relations). Ex. Thesauri, WordNet.
• Formal ontology: OWL (Web Ontology Language), based on the DL (Description Logic) formalism, an evolution of XML. Ex. (PR-OWL fragment):
  <owl:Ontology rdf:about="">
    <owl:imports rdf:resource="http://www.pr-owl.org/pr-owl.owl"/>
  </owl:Ontology>
  <owl:Class rdf:ID="TimeStep">
    <owl:disjointWith>
      <owl:Class rdf:ID="SensorReport"/>
    </owl:disjointWith>
    <owl:disjointWith>
      <owl:Class rdf:ID="Zone"/>
    </owl:disjointWith>
    <owl:disjointWith>
      <owl:Class rdf:ID="Starship"/>
    </owl:disjointWith>
    <rdfs:subClassOf rdf:resource="http://www.pr-owl.org/pr-owl.owl#ObjectEntity"/>
  </owl:Class>
10. Reasoning automation: the Semantic Reasoner
A reasoner is a tool that performs automatic reasoning with description logic (≈ first-order). It allows one to:
• classify objects in functional hierarchies,
• verify ontology coherence and consistency,
• infer new knowledge.
It uses an ontology language (OWL) for the specification of the inference rules.
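The three reasoner services listed above can be illustrated with a minimal sketch. Real DL reasoners are far more sophisticated; all names here are hypothetical, and the disjointness axiom is chosen only to show the consistency check.

```python
# Minimal sketch of a reasoner's three services: classify, check, infer.
subclass_of = {"Theatre": "Building"}            # taxonomy (IS_A)
disjoint = {("Building", "Street")}              # declared disjoint classes
types = {
    "OperaGarnier": {"Theatre"},
    "AvDeLOpera":   {"Street"},
    "Oddity":       {"Theatre", "Street"},       # deliberately inconsistent
}

def superclasses(c):
    """Classification: transitive closure of the subclass hierarchy."""
    out = set()
    while c in subclass_of:
        c = subclass_of[c]
        out.add(c)
    return out

def infer_types(ind):
    """Inference of new knowledge: membership propagates to superclasses."""
    inferred = set(types[ind])
    for c in types[ind]:
        inferred |= superclasses(c)
    return inferred

def consistent(ind):
    """Consistency check: no individual may belong to two disjoint classes."""
    t = infer_types(ind)
    return all(not (a in t and b in t) for a, b in disjoint)
```

Here `infer_types("OperaGarnier")` adds Building to the asserted Theatre, and `consistent("Oddity")` fails because the inferred Building conflicts with the disjoint Street.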
11. Why Ontologies?
• To solve the problem of the semantic difference of information coming from different sources.
• To allow automatic reasoning and human/machine interaction.
• To reduce/integrate the uncertainty of knowledge in a field of study.
12. Ontologies and Uncertainty
Traditional goal: reduce uncertainty through ontologies of hard knowledge (taxonomies with crisp concepts, knowledge bases with if-then rules, etc.)
New goal: integrate uncertainty through ontologies of soft knowledge (probabilistic, fuzzy, possibilistic, ...)
Soft knowledge is widespread in geography and planning. In this context, even automatic reasoning can benefit from uncertainty-based approaches.
13. Ontology and PROBABILISTIC LOGIC
Subjective Bayesian probability theory was the first attempt to overcome the assumption of frequentist probabilities.
• Probabilities = degrees of belief of rational experts
• Conditional probabilities represent non-deterministic relations.
• Bayesian Networks (BNs) serve as complex probabilistic models.
• Probability axioms must be respected.
Ontologies formalizing probabilistic knowledge for the development of BNs: OntoBayes (Yi Yang), PR-OWL (Paulo Costa).
14. Ontology and FUZZY LOGIC
A theory of gradual belonging to concepts, well suited to geographic knowledge (Ch. Rolland-May, B. Plewe).
Fuzzy OWL and Fuzzy DL (Bobillo and Straccia): introducing fuzziness into taxonomies, relations among concepts, and reasoning.
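Gradual belonging can be sketched with a simple membership function. The concept and the thresholds below are invented purely for illustration; fuzzy DLs encode such degrees directly in the ontology.

```python
def mu_dense(pop_per_km2):
    """Membership degree in the fuzzy concept 'dense urban area'.
    The thresholds (1000 and 5000 inh./km2) are invented for illustration."""
    if pop_per_km2 <= 1000:
        return 0.0
    if pop_per_km2 >= 5000:
        return 1.0
    return (pop_per_km2 - 1000) / 4000  # linear ramp between the thresholds

# Standard fuzzy connectives: AND = min, NOT = 1 - mu.
def f_and(a, b):
    return min(a, b)

def f_not(a):
    return 1 - a
```

An area with 3000 inh./km2 thus belongs to "dense urban area" to degree 0.5: neither clearly dense nor clearly not, which is exactly the kind of soft knowledge a crisp taxonomy cannot express.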
15. Ontology and POSSIBILISTIC LOGIC
Possibility theory: integrating the uncertainty of knowledge from the point of view of the expert.
• Possibility (Π) = degree of surprise of the expert at an outcome
• Necessity (N) = certainty of the outcome
• N(C) = 1 − Π(¬C)
Possibilistic ontologies: reasoning with epistemic uncertainty (e.g. max-min composition of Π and N measures).
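The duality N(C) = 1 − Π(¬C) can be checked directly on a small possibility distribution. The numbers below are illustrative, not taken from the talk.

```python
# A possibility distribution over the outcomes of one variable
# (illustrative values; at least one outcome must have Pi = 1).
pi = {"sprawl": 1.0, "no_sprawl": 0.3}

def possibility(event):
    """Pi(E): the maximum possibility over the outcomes in E."""
    return max(pi[o] for o in event)

def necessity(event):
    """N(E) = 1 - Pi(not E): E is certain to the extent its contrary is impossible."""
    complement = [o for o in pi if o not in event]
    return 1 - possibility(complement) if complement else 1.0
```

With these values the expert finds sprawl fully possible (Π = 1) yet only moderately necessary (N = 0.7), because its contrary remains somewhat possible (Π(¬C) = 0.3).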
16. Ontology of Uncertain Relations: an example
Does household preference for individual housing cause sprawl?
A crisp ontology: Household Preference --causes--> Urban Sprawl, where Household Preference has value {Individual Housing, Collective Housing} and Urban Sprawl has value {True, False}.

Truth table:
                 Pref. = Ind. Housing   Pref. = Coll. Housing
Sprawl = True    True                   True
Sprawl = False   False                  True

The reasoner can infer the truth value of Urban Sprawl.
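The truth table above amounts to a deterministic rule, and a crisp reasoner's inference can be mimicked in a few lines. This is a toy sketch, not the DL encoding; the labels are hypothetical.

```python
# Each cell of the truth table: True means the (preference, sprawl)
# combination is admitted by the crisp ontology.
admitted = {
    ("individual", True):  True,
    ("individual", False): False,
    ("collective", True):  True,
    ("collective", False): True,
}

def infer_sprawl(preference):
    """Sprawl outcomes compatible with the stated household preference."""
    return {s for (p, s), ok in admitted.items() if p == preference and ok}
```

A preference for individual housing forces Sprawl = True, while a preference for collective housing leaves both outcomes open: the crisp relation is only informative in one direction.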
17. Ontology of Uncertain Relations: an example
A probabilistic ontology: Household Preference --probably causes, with parameters ...--> Urban Sprawl; both variables take their values (Individual/Collective Housing, True/False) with probability parameters.

Conditional probabilities:
                 Pref. = Ind. Housing   Pref. = Coll. Housing
Sprawl = True    0.8                    0.5
Sprawl = False   0.2                    0.5

The reasoner can infer the probability of Urban Sprawl.

A possibilistic ontology: Household Preference --possibly causes, with parameters ...--> Urban Sprawl; both variables take their values with possibility parameters.

Conditional possibilities:
                 Pref. = Ind. Housing   Pref. = Coll. Housing
Sprawl = True    1                      1
Sprawl = False   0.3                    1

The reasoner can infer the possibility and necessity of Urban Sprawl.
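Given the conditional tables above, each kind of reasoner propagates an input distribution over the preference to the sprawl variable. The sketch below uses the slides' conditional values, but the prior probability and the possibility distribution over the preference are invented for illustration.

```python
# Conditional tables from the slides (sprawl given the preference).
p_sprawl_given    = {"individual": 0.8, "collective": 0.5}
pi_sprawl_given   = {"individual": 1.0, "collective": 1.0}
pi_nosprawl_given = {"individual": 0.3, "collective": 1.0}

# Invented input distributions over the household preference.
p_pref  = {"individual": 0.6, "collective": 0.4}   # prior probabilities
pi_pref = {"individual": 1.0, "collective": 0.7}   # possibility distribution

# Probabilistic reasoner: law of total probability.
p_sprawl = sum(p_sprawl_given[h] * p_pref[h] for h in p_pref)

# Possibilistic reasoner: max-min composition, then N = 1 - Pi(contrary).
pi_sprawl = max(min(pi_sprawl_given[h], pi_pref[h]) for h in pi_pref)
pi_nosprawl = max(min(pi_nosprawl_given[h], pi_pref[h]) for h in pi_pref)
n_sprawl = 1 - pi_nosprawl
```

With these inputs the probabilistic reasoner yields P(Sprawl) = 0.68, while the possibilistic one yields Π(Sprawl) = 1 and N(Sprawl) = 0.3: the same tables support quantitatively different, formalism-dependent conclusions, as the slide's comparison suggests.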
18. Ontology of Certain/Uncertain Relations
The crisp approach: semantic certainty on the antecedent, semantic certainty on the consequent, and syntactic certainty on the relation.
The uncertain approach: semantic (un)certainty on the antecedent, semantic uncertainty on the consequent, and syntactic uncertainty on the relation.
19. CONCLUSIONS
• Crisp ontologies traditionally reduce uncertainty in the conceptualisation of phenomena.
• Uncertain ontologies can integrate uncertainty in knowledge sharing and automated reasoning.
• Uncertain ontologies (probabilistic, fuzzy, possibilistic, ...) can become building blocks for developing models.
• Open question: how to combine uncertain ontologies using different formalisms.
Reasoning on geographic space is almost always reasoning with uncertain knowledge of geographic phenomena.
20. Thanks for your attention!
matteo.caglioni@unice.fr
giovanni.fusco@unice.fr