This document discusses commonsense reasoning and its importance for computational creativity and knowledge invention. It provides an overview of past AI and cognitive science approaches to commonsense reasoning such as semantic networks, frames, and default logic. It then presents the TCL (Typicality Description Logic) framework, which extends description logics with typicality, probabilities, and cognitive heuristics to model commonsense conceptual combination. The framework is applied to generate novel concepts to achieve goals and to dynamically classify multimedia content. Evaluations show it effectively reclassifies content and generates recommendations that users and experts find high quality.
Commonsense reasoning as a key feature for dynamic knowledge invention and computational creativity
1. Commonsense reasoning as a key feature for dynamic
knowledge invention and computational creativity
Antonio Lieto
Università di Torino, Dipartimento di Informatica, IT
ICAR-CNR, Palermo, IT
ICAR Meet 2020
https://speakers.acm.org/speakers/lieto_12489
2. Commonsense Reasoning
Concerns all the non-standard (or non-monotonic)
forms of reasoning:
- induction
- abduction
- default reasoning
- …
3. Commonsense Reasoning
Concerns all the non-standard (or non-monotonic)
forms of reasoning:
- induction
- abduction
- default reasoning
- …
TYPICALITY
9. Goal-Directed Systems
Goal-directed problem solving is a crucial activity for natural and
artificial intelligent systems (Aha 2018).
Classical strategies for achieving a goal (if the goal cannot be
directly reached) concern knowledge expansion/augmentation
activities:
- e.g. via communication with other agents
- via a learning process
- via an external injection of novel knowledge in the declarative
memory of an artificial system
10. Outline
• Main idea: some goal-reasoning tasks can be solved by a
process of creative commonsense knowledge
invention based on a recombination of the knowledge
already available
– Knowledge invention as a process of creative
commonsense combinatorics
– This generative phenomenon represents an open problem
in current research in AI and automated reasoning: there is a lack of
heuristic frameworks able to handle this faculty automatically.
11. Guppy Effect (PET FISH problem)
Prototypes are not compositional (Osherson and Smith, 1981).
The resulting PET FISH concept is not simply the additive inclusion
of the typical features of the two composing concepts (i.e. PET and FISH).
Fish = {Greyish, Lives-in Water, not Warm.. }
PET = {hasFur, Warm, not Lives-in Water… }
12. Guppy Effect (PET FISH problem)
Prototypes are not compositional (Osherson and Smith, 1981).
The resulting PET FISH concept is not simply the additive inclusion
of the typical features of the two composing concepts (i.e. PET and FISH).
PET = {hasFur, Warm, not Lives-in Water… }
PET FISH =
{Lives-in Water, not Warm,
Red.. }
Fish = {Greyish, Lives-in Water, not Warm.. }
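As a minimal illustration of why additive accounts fail here (a plain Python sketch with hypothetical feature names, not part of the TCL machinery), naively merging the two prototypes produces directly contradictory features:

```python
# Minimal sketch (hypothetical feature names): naively merging two prototypes
# by uniting their typical features yields contradictions (the PET FISH problem).

PET = {"hasFur": True, "Warm": True, "LivesInWater": False}
FISH = {"Greyish": True, "LivesInWater": True, "Warm": False}

def additive_combination(a, b):
    """Plain union of typical features; also reports the conflicting ones."""
    combined, conflicts = dict(a), {}
    for feature, value in b.items():
        if feature in combined and combined[feature] != value:
            conflicts[feature] = (combined[feature], value)
        combined[feature] = value
    return combined, conflicts

combined, conflicts = additive_combination(PET, FISH)
print(conflicts)  # LivesInWater and Warm clash: the union is inconsistent
# A typical PET FISH lives in water, is not warm and has no fur: the compound
# prototype cannot be obtained by simply adding the PET and FISH prototypes.
```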
13. TCL
A non-monotonic Description Logic of typicality (TCL) for typicality-
based concept combination, built on three ingredients:
• Description Logics with Typicality (ALC + T)
• Probabilities and Distributed Semantics (DISPONTE)
• Heuristics from Cognitive Semantics (HEAD-MODIFIER)
Lieto & Pozzato, "A Description Logic Framework for Commonsense Conceptual
Combination Integrating Typicality, Probabilities and Cognitive Heuristics", in Journal of
Experimental & Theoretical Artificial Intelligence, 2020. https://arxiv.org/pdf/1811.02366.pdf
14. DL with Typicality (ALC+T)
Non-monotonic extensions of Description Logics for reasoning about
prototypical properties and inheritance with exceptions.
Basic idea: to extend DLs with a typicality operator T (Giordano et al.
2015)
- A KB comprises assertions T(C) ⊑ D
- T(Student) ⊑ FacebookUsers means “normally, students use Facebook”
The typicality operator T is non-monotonic
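A small sketch of the intuition (plain Python, not the ALC + T calculus itself; the PhDStudent exception is a made-up example): rigid inclusions always apply, while typicality inclusions are defaults that a more specific typicality inclusion can override.

```python
# Minimal sketch (not the ALC+T calculus): rigid vs. typicality inclusions.
# Typicality inclusions are defeasible: a more specific one overrides an
# inherited default, which is where non-monotonicity shows up.

rigid = {"PhDStudent": ["Student"]}             # PhDStudent ⊑ Student
typical = {
    "Student": {"FacebookUser": True},          # T(Student) ⊑ FacebookUser
    "PhDStudent": {"FacebookUser": False},      # hypothetical exception
}

def superclasses(c):
    """c plus all rigidly entailed superclasses (most specific first)."""
    out, frontier = [c], [c]
    while frontier:
        frontier = [s for f in frontier for s in rigid.get(f, [])]
        out += frontier
    return out

def typical_value(c, prop):
    """Default value of prop for a typical c; more specific classes win."""
    for cls in superclasses(c):
        if prop in typical.get(cls, {}):
            return typical[cls][prop]
    return None

print(typical_value("Student", "FacebookUser"))     # True
print(typical_value("PhDStudent", "FacebookUser"))  # False: default overridden
```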
18. Distributed Semantics
We extended the ALC+T logic with typicality inclusions equipped with real
numbers representing probabilities/degrees of belief.
We adopted the DISPONTE semantics (Riguzzi et al. 2015) restricted to typicality
inclusions:
extension of ALC with inclusions p :: T(C) ⊑ D
epistemic interpretation: “we believe with degree p that typical Cs are Ds”
The result of this integration allows us to reason over typical, probabilistic scenarios.
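A sketch of the scenario construction under this reading (plain Python, hypothetical probabilities and inclusions): every probabilistic typicality inclusion is independently either kept or discarded, each choice gives a scenario, and the scenario probability is the product of p for the kept inclusions and 1 − p for the discarded ones.

```python
from itertools import product

# Hypothetical probabilistic typicality inclusions: (p, "T(C) ⊑ D")
inclusions = [
    (0.8, "T(Pet) ⊑ Warm"),
    (0.9, "T(Fish) ⊑ LivesInWater"),
    (0.6, "T(Pet) ⊑ hasFur"),
]

def scenarios(incls):
    """Enumerate all keep/discard choices with their probabilities."""
    for choice in product([True, False], repeat=len(incls)):
        kept = [ax for (_, ax), keep in zip(incls, choice) if keep]
        prob = 1.0
        for (p, _), keep in zip(incls, choice):
            prob *= p if keep else (1.0 - p)
        yield prob, kept

for prob, kept in sorted(scenarios(inclusions), reverse=True):
    print(round(prob, 3), kept)
# The most probable scenario keeps all three inclusions (0.8 * 0.9 * 0.6 = 0.432).
```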
20. Cognitive Heuristics
Heuristics from cognitive semantics for the identification of plausible
mechanisms for blocking inheritance.
HEAD-MODIFIER heuristics (Hampton, 2011):
- HEAD: stronger element of the combination
- MODIFIER: weaker element of the combination
The compound concept C is the combination of the HEAD (CH) and the
MODIFIER (CM), with C ⊑ CH ⊓ CM.
21. TCL at work - Pipeline
1. KB with real data
2. Probabilistic scenarios
3. Selection of the most appropriate scenarios
In TCL we assume a hybrid KB (Rigid and Typical Roles).
22. Selection Criteria
The typical properties of the form T(C) ⊑ D to ascribe to the
concept C are selected from the set of scenarios obtained by
applying the DISPONTE semantics, keeping only those scenarios that are:
• consistent with respect to the KB;
• not trivial, e.g. those ascribing all properties of the HEAD
are discarded;
• giving preference to CH over CM, with the highest
probability.
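A sketch of how these criteria could be applied to the enumerated scenarios (plain Python; the consistency check is stubbed out, and the "prefer the HEAD" criterion is given one plausible reading: among consistent, non-trivial scenarios keep the one sharing most properties with the HEAD, breaking ties by probability):

```python
# Minimal sketch of the scenario-selection step (not the actual TCL prover):
# each candidate scenario is a set of typical properties ascribed to the compound.

def is_consistent(props, kb):
    """Placeholder: a DL reasoner would check the KB plus the scenario here."""
    return not {"Warm", "notWarm"} <= props        # toy contradiction check

def select(candidates, kb, head_props):
    best = None
    for prob, props in candidates:
        if not is_consistent(props, kb):
            continue                               # criterion 1: consistency
        if head_props <= props:
            continue                               # criterion 2: not trivial
        key = (len(props & head_props), prob)      # criterion 3: prefer HEAD, then prob
        if best is None or key > best[0]:
            best = (key, props)
    return best[1] if best else None

head_props = {"LivesInWater", "notWarm", "Greyish"}    # hypothetical FISH head
candidates = [
    (0.43, {"LivesInWater", "notWarm", "Red"}),
    (0.29, {"LivesInWater", "notWarm", "Greyish"}),    # trivial: all HEAD properties
    (0.05, {"Warm", "notWarm", "hasFur"}),             # inconsistent
]
print(select(candidates, kb=None, head_props=head_props))
# keeps the first scenario: {'LivesInWater', 'notWarm', 'Red'} (set order may vary)
```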
29. Applications
- Computational Creativity
- Character Generation
- Novel Genre Generation
- Recommender Systems (Chiodino et al., ECAI 2020), with Centro Ricerche RAI
- Cognitive modelling (Linda problem; Lieto & Pozzato, JETAI 2020)
30. Goal-oriented Knowledge Generation
Definition 1. Given a knowledge base K in the logic TCL, let G be a set of
concepts {D1, D2, . . . , Dn} called a goal, e.g.
G = {Property1, Property2, Property3, …}.
We say that a concept C is a solution to the goal G if either:
– for all Di ∈ G, either K |= C ⊑ Di or K |= T(C) ⊑ Di in the logic TCL,
or:
– C corresponds to the combination of at least two concepts C1 and C2
occurring in K, i.e. C ≡ C1 ⊓ C2, and the C-revised knowledge base KC
provided by the logic TCL is such that, for all Di ∈ G, either KC |= C ⊑ Di
or KC |= T(C) ⊑ Di in TCL.
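A sketch of the search this definition suggests (plain Python, with entailment stubbed out and a plain property union standing in for the C-revised knowledge base, which in TCL is computed by the combination procedure described above): first check whether a concept already in K satisfies every goal property, otherwise try combinations of two concepts.

```python
from itertools import combinations

# Hypothetical toy KB: concept -> set of properties ascribed to it
# (rigidly or typically); entailment is reduced to set inclusion here.
kb = {"Pet": {"Warm", "Domestic"}, "Fish": {"LivesInWater", "Greyish"}}

def entails_goal(knowledge, concept, goal):
    """Placeholder for: for all Di in G, K |= C ⊑ Di or K |= T(C) ⊑ Di."""
    return goal <= knowledge.get(concept, set())

def revise(knowledge, c1, c2):
    """Placeholder for the C-revised KB of TCL; here just a property union."""
    name = f"{c1}+{c2}"
    revised = dict(knowledge)
    revised[name] = knowledge.get(c1, set()) | knowledge.get(c2, set())
    return revised, name

def solve(knowledge, goal):
    for c in knowledge:                              # case 1: a known concept suffices
        if entails_goal(knowledge, c, goal):
            return c
    for c1, c2 in combinations(knowledge, 2):        # case 2: invent C ≡ C1 ⊓ C2
        revised, c = revise(knowledge, c1, c2)
        if entails_goal(revised, c, goal):
            return c
    return None

print(solve(kb, goal={"Domestic", "LivesInWater"}))  # -> Pet+Fish
```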
34. Cognitive Architectures
Allen Newell (1990), Unified Theories of Cognition
A cognitive architecture (Newell, 1990) implements the
invariant structure of the cognitive system.
It captures the underlying commonality between different
intelligent agents and provides a framework from which
intelligent behavior arises.
35. SOAR Integration
Lieto et al. (2019), Cognitive Systems Research: “Beyond subgoaling: A dynamic knowledge generation framework
for creative problem solving in cognitive architectures”.
38. Threefold Evaluation
1) AUTOMATIC: % of reclassified content
2) USER STUDY (20 participants evaluated a total of 122
recommendations generated by the system).
39. Threefold Evaluation
1) AUTOMATIC: % of reclassified content
2) USER STUDY (20 participants evaluated a total of 122
recommendations generated by the system).
3) EXPERTS INTERVIEW
40. Experts Interview (in RAI)
Some problematic elements emerged that affected the overall
quality (and therefore the assigned ratings) of the recommended
items:
- Recommendation of very old episodes (e.g. episodes or
movies produced before the ’60s).
- The experts pointed out the importance of also being able to
express negative preferences explicitly (e.g. ¬Kids).
41. Upshots
I have shown how the cognitively inspired TCL
commonsense reasoning framework has been applied in the
context of two different applications.
This commonsense reasoning framework can be used in further
applications, since it models a ubiquitous phenomenon.
We plan to extend the applications to other domains
(museum recommendations, music, etc.).
42. References
• Lieto, A., & Pozzato, G. L. (2020). A description logic framework for commonsense conceptual combination
integrating typicality, probabilities and cognitive heuristics. Journal of Experimental & Theoretical Artificial
Intelligence, 32(5), 769-804. https://doi.org/10.1080/0952813X.2019.1672799
• Chiodino, E., Di Luccio, D., Lieto, A., Messina, A., Pozzato, G. L., & Rubinetti, D. (2020). A Knowledge-
based System for the Dynamic Generation and Classification of Novel Contents in Multimedia Broadcasting.
In Proc. of ECAI (Vol. 2020), 680 - 687. http://ebooks.iospress.nl/volumearticle/54949
• Lieto, A., Perrone, F., Pozzato, G. L., & Chiodino, E. (2019). Beyond subgoaling: A dynamic knowledge
generation framework for creative problem solving in cognitive architectures. Cognitive Systems Research,
58, 305-316. https://doi.org/10.1016/j.cogsys.2019.08.005
• Chiodino, E., Lieto, A., Perrone, F., and Pozzato G.L. "A goal-oriented framework for knowledge invention
and creative problem solving in cognitive architectures", in ECAI 2020, 24th European Conference on
Artificial Intelligence, 2020, 2893 - 2894, http://ebooks.iospress.nl/volumearticle/55235
• Lieto, A. and Pozzato, G. L. (2019). Applying a description logic of typicality as a generative tool for concept
combination in computational creativity. Intelligenza Artificiale, 13(1), 93-106.
https://content.iospress.com/articles/intelligenza-artificiale/ia180016