A Computational Framework for Concept Representation in Cognitive Systems and...Antonio Lieto
This document proposes a framework for representing concepts in cognitive systems called "concepts as heterogeneous proxytypes". It suggests concepts have multiple representations, including classical, prototypical, exemplar-based and theory-based. These representations are stored separately but can be combined. The framework represents concepts computationally using different frameworks like symbols, conceptual spaces and neural networks. It aims to test if this heterogeneous proxytype hypothesis can explain human concept identification and retrieval by implementing it in cognitive architectures.
Towards which Intelligence? Cognition as Design Key for building Artificial I...Antonio Lieto
The document discusses approaches to building artificial intelligence systems based on human cognition. It argues that AI should focus on the high-level cognitive functions through which humans exhibit full intelligence. A cognitive AI approach models the heuristics and bounded rationality that humans use. The document presents a case study of a commonsense reasoning system that integrates heterogeneous conceptual representations, such as prototypes and exemplars, and uses a dual-process approach to reasoning. The system is evaluated against human responses in categorization tasks, reaching 84% accuracy and providing insights for refining the underlying cognitive theory.
Cognitive Paradigm in AI - Invited Lecture - Kyiv/Kyev - LietoAntonio Lieto
1) The document discusses the cognitive paradigm in artificial intelligence research and cognitively inspired AI systems.
2) Cognitively inspired AI systems are designed based on insights from human and animal cognition, using structural constraints from cognitive science.
3) Examples of cognitively inspired AI systems discussed include GPS, semantic networks, the RM model of past-tense acquisition, and cognitive architectures like Soar and ACT-R.
Symbols and Search : What makes a machine intelligentAshwin P N
An undergraduate student's analysis of the 1975 ACM Turing Award Lecture, "Computer Science as Empirical Inquiry: Symbols and Search", by Allen Newell and Herbert Simon.
Universal Artificial Intelligence for Intelligent Agents: An Approach to Supe...IOSR Journals
This document proposes a methodology to develop intelligent agents with universal artificial intelligence (UAI) that can operate effectively in new environments. The methodology uses a neuro-fuzzy system combined with a hidden Markov model (HMM) to provide agents with learning capabilities and the ability to make decisions in unknown environments. The neuro-fuzzy system would extract fuzzy rules and membership functions from data to guide an agent. The HMM would generate sequences of sensed states to model dynamic environments. This approach aims to create "super intelligent agents" that can perform human-level tasks in any computable environment without reprogramming. A literature review found that neuro-fuzzy and HMM methods have been successfully used for mobile robot obstacle avoidance and human motion recognition.
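The HMM component described above models a dynamic environment by generating sequences of sensed states. A minimal sketch of that idea (the two states, initial distribution, and transition probabilities below are invented stand-ins, not values from the paper):

```python
import random

# Toy HMM-style generator: sample a sequence of hidden environment
# states from an initial distribution and a transition matrix.
states = ["clear", "obstacle"]
initial = [0.8, 0.2]                       # P(state at t=0)
trans = {"clear": [0.9, 0.1],              # P(next state | clear)
         "obstacle": [0.4, 0.6]}           # P(next state | obstacle)

def sample_sequence(length, rng=random.Random(0)):
    seq = [rng.choices(states, weights=initial)[0]]
    while len(seq) < length:
        seq.append(rng.choices(states, weights=trans[seq[-1]])[0])
    return seq

print(sample_sequence(10))
```

In the paper's setting such sampled sequences would stand in for the stream of sensed states an agent must act on; the learned fuzzy rules then map those states to decisions.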
The document discusses the differences between machine learning (ML), statistical learning, data mining (DM), and automated learning (AL). It argues that while ML and statistical learning developed similar techniques starting in the 1960s, DM emerged in the 1990s from a merging of database research and automated learning. However, industry was much more enthusiastic about adopting DM techniques compared to AL techniques, even though many DM systems are just friendly interfaces of AL systems. The document aims to explain the key differences between DM and AL that led to DM's greater commercial success.
Understanding Movement and Interaction: An Ontology for Kinect-Based 3D Depth Sensors
Natalia Díaz Rodríguez, Robin Wikström, Johan Lilius, Manuel Pegalajar Cuéllar, Miguel Delgado Calvo-Flores
Soft computing is a set of computational techniques that aim to mimic human-like reasoning and decision making. The main techniques are fuzzy logic, neural networks, evolutionary computing, machine learning, and probabilistic reasoning. Each technique has strengths and weaknesses, but they complement each other. When used together, soft computing techniques can solve complex problems that are difficult for traditional mathematical methods. The paper reviews these soft computing techniques and explores how they could be applied to problems in various domains.
SEMANTIC STUDIES OF A SYNCHRONOUS APPROACH TO ACTIVITY RECOGNITIONcscpconf
Many important and critical applications such as surveillance or healthcare require some form of (human) activity recognition. Activities are usually represented by a series of actions driven and triggered by events. Recognition systems have to be real time, reactive, correct, complete, and dependable. These stringent requirements justify the use of formal methods to describe, analyze, verify, and generate effective recognition systems. Due to the large number of possible application domains, the researchers aim at building a generic recognition system. They choose the synchronous approach because it has a well-founded semantics and it ensures determinism and safe parallel composition. They propose a new language to represent activities as synchronous automata and they supply it with two complementary formal semantics. First, a behavioral semantics gives a reference definition of program behavior using rewriting rules. Second, an equational semantics describes the behavior in a constructive way and can be directly implemented. This paper focuses on the description of these two semantics and their relation.
Analysis of intelligent system design by neuro adaptive control no restrictioniaemedu
This document discusses using neuro-adaptive control to analyze the design of intelligent systems. It begins by introducing the topic and noting that conventional adaptive control techniques assume explicit system models or dynamic structures based on linear models, which may not be valid for complex nonlinear systems. Neural networks and other intelligent control approaches that do not require explicit mathematical modeling are presented as alternatives. The paper then focuses on using time-delay neural networks for system identification and control of nonlinear dynamic systems. Various neural network architectures and learning algorithms for system modeling and control are described.
Analysis of intelligent system design by neuro adaptive controliaemedu
This document summarizes the analysis of intelligent system design using neuro-adaptive control methods. It discusses using neural networks for system identification through series-parallel and parallel models. It also discusses supervised control using a neural network trained by an expert operator, inverse control using a neural network trained on the inverse system model, and neuro-adaptive control using two neural networks - one for system identification and one for control. Neuro-adaptive control allows handling nonlinear system behavior without linear approximations.
This document discusses how combining probabilistic logical inference (PLN) with a nonlinear dynamical attention allocation system (ECAN) can help address the problem of combinatorial explosion in inference. It presents a simple example using a noisy version of the "smokes" problem where ECAN guides PLN's inference by focusing attention on surprising conclusions, allowing meaningful conclusions to be drawn with fewer inference steps. This demonstrates a cognitive synergy between logical reasoning and attention allocation that is hypothesized to be broadly valuable for artificial general intelligence.
EMOTIONAL LEARNING IN A SIMULATED MODEL OF THE MENTAL APPARATUScsandit
How a human being learns is a broad field and not yet fully understood. This paper offers an alternative attempt to get closer to answering how human beings learn and how learning relates to emotions. To that end, the cognitive architecture of the project "Simulation of Mental Apparatus and Applications (SiMA)" is used for two tasks: to address the question above and to extend the functional model of the mental apparatus with learning. The functions of the model are therefore analyzed in detail for their suitability to be extended with a learning ability. The analysis focuses on emotions and their impact on the ability to change memories in the model so as to produce different behavior than without learning.
Emotional Learning in a Simulated Model of the Mental Apparatus cscpconf
How a human being learns is a broad field and not yet fully understood. This paper offers an alternative attempt to get closer to answering how human beings learn and how learning relates to emotions. To that end, the cognitive architecture of the project "Simulation of Mental Apparatus and Applications (SiMA)" is used for two tasks: to address the question above and to extend the functional model of the mental apparatus with learning. The functions of the model are therefore analyzed in detail for their suitability to be extended with a learning ability. The analysis focuses on emotions and their impact on the ability to change memories in the model so as to produce different behavior than without learning.
Introduction to Artificial IntelligenceLuca Bianchi
Artificial intelligence has been defined in many ways as our understanding has evolved. Currently, AI is divided into narrow, general and super intelligence based on capabilities. Machine learning is a key approach in AI and involves algorithms that can learn from data to improve performance. Deep learning uses neural networks with many layers to learn representations of data and has achieved success in areas like computer vision and natural language processing.
On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...Vincenzo De Florio
This document discusses perception and apperception in ubiquitous and pervasive computing environments. It defines perception as a system's ability to sense and be aware of changes in figures originating within and without its boundaries. Perception is broken down into three layers: sensors, qualia reflection, and qualia persistence. A perception model and partial order are introduced to characterize and compare systems' perception capabilities. Perception failures can occur due to shortcomings in these layers, such as sensor faults or qualia mapping faults. Apperception is defined as a system's ability to make effective use of perceived context to drive adaptations through constructing theories of past and current situations.
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in RManish Saraswat
Simple guide which explains deep learning and neural network with hands on experience in R using MXnet and H2o package. It also explains gradient descent and backpropagation algorithm.
Complete tutorial: http://blog.hackerearth.com/understanding-deep-learning-parameter-tuning-with-mxnet-h2o-package-r
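The tutorial's core ingredients, gradient descent and backpropagation, can be illustrated with a minimal sketch. This is plain Python rather than R/MXNet/H2O, and the data and learning rate are illustrative assumptions; the hand-derived gradients here are the one-dimensional analogue of what backpropagation computes layer by layer:

```python
# Minimal gradient descent for 1-D linear regression y = w*x + b,
# minimizing mean squared error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]          # generated by y = 2x + 1

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    n = len(xs)
    # Partial derivatives of the MSE loss with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * dw                    # step against the gradient
    b -= lr * db

print(round(w, 2), round(b, 2))    # converges towards w=2, b=1
```

A deep network repeats exactly this update for every weight, with the chain rule (backpropagation) supplying the per-weight gradients.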
On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...Vincenzo De Florio
This document discusses perception and apperception in ubiquitous and pervasive computing environments. It defines perception as a system's ability to sense and be aware of changes in figures originating within and without its boundaries. The document presents a model of perception with three layers: a sensor layer, a qualia reflection layer to associate sensed data with internal representations, and a qualia persistence layer to retain perceptions over time. It also discusses how limitations in these layers can lead to perception failures. Apperception is defined as a system's ability to make effective use of perceived context to drive adaptations through constructing theories of situations. The models provide a way to compare systems' perception and resilience capabilities as environments change.
Invited Tutorial - Cognitive Design for Artificial Minds AI*IA 2022Antonio Lieto
This document provides an overview of cognitive design for artificial minds. It discusses how cognitive artificial systems are inspired by human and natural cognition. The key points made are:
- Cognitive artificial systems are inspired by human and natural cognition to be more general and versatile than standard AI systems.
- Examples of cognitively inspired AI systems include ACT-R, Soar, and systems developed using the subsumption architecture.
- Cognitively inspired systems differ from standard AI in that they aim to have explanatory power for human cognition through structural models of cognitive processes and representations.
- Such systems can be used to test cognitive theories, provide human-like capabilities, and potentially lead to more general artificial intelligence.
Resilience Engineering & Human Error... in ITJoão Miranda
A system is resilient if it can adjust its functioning prior to, during, or following events (changes, disturbances, and opportunities), and thereby sustain required operations under both expected and unexpected conditions.
Also, in a world of complex systems, human error as an explanation for failure is something of a fallacy: an obstacle to learning and, therefore, to creating resilient systems.
This document provides lecture notes on soft computing techniques. It covers four modules:
1) Introduction to neurofuzzy and soft computing, including fuzzy sets, fuzzy rules, fuzzy inference systems
2) Neural networks, including single layer networks, multilayer perceptrons, unsupervised learning networks
3) Genetic algorithms and derivative-free optimization
4) Evolutionary computing techniques like simulated annealing and swarm optimization.
The document discusses key concepts in soft computing like fuzzy logic, neural networks, evolutionary algorithms and their applications in areas like control systems and pattern recognition. It also provides references for further reading.
This document summarizes and evaluates various rule extraction algorithms from trained artificial neural networks. It begins with an introduction explaining the importance of explanation capabilities for neural networks. It then provides a taxonomy for classifying rule extraction approaches based on the expressiveness of the extracted rules, whether the approach takes an open-box or black-box view of the neural network, any specialized training regimes used, the quality of explanations generated, and computational complexity. The document discusses sensitivity analysis as a basic method for understanding neural network relationships before focusing on decompositional and pedagogical rule extraction approaches.
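The sensitivity analysis mentioned above can be sketched very simply: perturb one input at a time and measure how much the model's output moves. The stand-in "network" below is a hypothetical function, not a trained model from the survey:

```python
# Sensitivity analysis on a black-box model f: bump each input by a
# small eps and record the normalized change in the output. Larger
# scores suggest inputs the model relies on more heavily.
def sensitivities(f, x, eps=1e-3):
    base = f(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        scores.append(abs(f(bumped) - base) / eps)
    return scores

# Stand-in "network": depends strongly on x[0], weakly on x[1].
f = lambda x: 5.0 * x[0] + 0.1 * x[1]
print(sensitivities(f, [1.0, 1.0]))   # roughly [5.0, 0.1]
```

Unlike decompositional approaches, this treats the network purely as a black box, which is why the survey classifies it as only a basic method for understanding input-output relationships.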
A multilabel classification approach for complex human activities using a com...IJECEIAES
In our daily lives, humans perform different Activities of Daily Living (ADL), such as cooking and studying. By nature, humans perform these activities in either a sequential/simple or an overlapping/complex scenario. Many research attempts have addressed simple activity recognition, but complex activity recognition is still a challenging issue. Recognition of complex activities is a multilabel classification problem, in which a test instance is assigned multiple overlapping activity labels. Existing data-driven techniques for complex activity recognition can recognize at most two overlapping activities and require a training dataset of complex (i.e. multilabel) activities. In this paper, we propose a multilabel classification approach for complex activity recognition using a combination of Emerging Patterns and Fuzzy Sets. Our approach requires a training dataset of only simple (i.e. single-label) activities. First, we use a pattern mining technique to extract discriminative features called Strong Jumping Emerging Patterns (SJEPs) that exclusively represent each activity. Then, our scoring function takes SJEPs and fuzzy membership values of incoming sensor data and outputs the activity label(s). We validate our approach using two different datasets. Experimental results demonstrate the efficiency and superiority of our approach against other approaches.
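The scoring idea in the abstract above can be sketched as follows. The pattern sets, sensor names, membership functions, and the 0.5 threshold are all invented for illustration, not taken from the paper:

```python
# Each activity keeps discriminative sensor patterns (the SJEP role);
# an incoming reading scores an activity by the fuzzy degree to which
# its patterns are present, and every activity above a threshold is
# emitted -- hence a multilabel output.
def tri(x, a, b, c):
    """Triangular fuzzy membership peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical patterns: activity -> {sensor: (a, b, c) triangle}.
patterns = {
    "cooking":  {"stove_temp": (50, 150, 250)},
    "studying": {"desk_light": (200, 400, 600)},
}

def score(reading):
    out = {}
    for activity, pats in patterns.items():
        degrees = [tri(reading.get(s, 0.0), *abc) for s, abc in pats.items()]
        out[activity] = min(degrees)        # weakest-link (fuzzy AND)
    return {a: d for a, d in out.items() if d > 0.5}

print(score({"stove_temp": 140, "desk_light": 390}))
```

With both sensors active, both "cooking" and "studying" clear the threshold, illustrating how a single reading can yield overlapping activity labels without any multilabel training data.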
This document discusses the development of new principles for modeling control systems based on bionic models inspired by human intelligence. It argues that traditional artificial intelligence and cognitive science approaches have not achieved human-level intelligence. The document proposes developing a formal model of the psyche using layered abstraction principles inspired by models used in computer engineering. This new bionic model would aim to exceed feasibility limits of current machine intelligence approaches by more closely modeling principles of the human mental apparatus.
Various techniques for activity recognition are discussed including activity recognition through logic and reasoning, probabilistic reasoning, Wi-Fi-based activity recognition, and data mining approaches. Specific algorithms described are the Smart Home Inhabitant Prediction algorithm, Active LeZi algorithm, and Episode Discovery algorithm. Related works that build on these algorithms aim to improve accuracy, efficiency, and ability to discover different types of patterns. The techniques discussed have potential applications in smart home, healthcare, and other domains by recognizing physical activities to provide personalized assistance.
Ex nihilo nihil fit: A COMMONSENSE REASONING FRAMEWORK FOR DYNAMIC KNOWLEDGE...Antonio Lieto
The document presents a commonsense reasoning framework called TCL that can be used for dynamic knowledge invention through conceptual combination and blending. TCL integrates typicality, probabilities and cognitive heuristics in a description logic framework. It allows modeling of non-monotonic inferences like induction, abduction and default reasoning. The framework has been applied to tasks like goal-oriented knowledge generation, affective computing and its use in robotics is discussed.
Analyzing the Explanatory Power of Bionic Systems With the Minimal Cognitive ...Antonio Lieto
The document discusses bionic systems that connect biological tissues with artificial devices. Two case studies are described:
1) A lamprey experiment where the reticulospinal pathway was replaced with an electromechanical device, allowing investigation of the relationship between input and output.
2) A monkey experiment where neural activity was used to control a cursor, then an artificial actuator. Performance declined initially but improved with feedback, showing plasticity in representing actuator dynamics.
While the artificial components don't directly explain biological mechanisms, they can provide local functional accounts and global insights by allowing investigation of hybrid biological-artificial system functioning.
Similar to Towards A Dual Process Approach to Computational Explanation in Human-Robot Social Interaction
This document discusses the development of new principles for modeling control systems based on bionic models inspired by human intelligence. It argues that traditional artificial intelligence and cognitive science approaches have not achieved human-level intelligence. The document proposes developing a formal model of the psyche using layered abstraction principles inspired by models used in computer engineering. This new bionic model would aim to exceed feasibility limits of current machine intelligence approaches by more closely modeling principles of the human mental apparatus.
Various techniques for activity recognition are discussed including activity recognition through logic and reasoning, probabilistic reasoning, Wi-Fi-based activity recognition, and data mining approaches. Specific algorithms described are the Smart Home Inhabitant Prediction algorithm, Active LeZi algorithm, and Episode Discovery algorithm. Related works that build on these algorithms aim to improve accuracy, efficiency, and ability to discover different types of patterns. The techniques discussed have potential applications in smart home, healthcare, and other domains by recognizing physical activities to provide personalized assistance.
Similar to Towards A Dual Process Approach to Computational Explanation in Human-Robot Social Interaction (20)
Ex nihilo nihil fit: A COMMONSENSE REASONING FRAMEWORK FOR DYNAMIC KNOWLEDGE...Antonio Lieto
The document presents a commonsense reasoning framework called TCL that can be used for dynamic knowledge invention through conceptual combination and blending. TCL integrates typicality, probabilities and cognitive heuristics in a description logic framework. It allows modeling of non-monotonic inferences like induction, abduction and default reasoning. The framework has been applied to tasks like goal-oriented knowledge generation, affective computing and its use in robotics is discussed.
Analyzing the Explanatory Power of Bionic Systems With the Minimal Cognitive ...Antonio Lieto
The document discusses bionic systems that connect biological tissues with artificial devices. Two case studies are described:
1) A lamprey experiment where the reticulospinal pathway was replaced with an electromechanical device, allowing investigation of the relationship between input and output.
2) A monkey experiment where neural activity was used to control a cursor, then an artificial actuator. Performance declined initially but improved with feedback, showing plasticity in representing actuator dynamics.
While the artificial components don't directly explain biological mechanisms, they can provide local functional accounts and global insights by allowing investigation of hybrid biological-artificial system functioning.
The document discusses a commonsense reasoning framework called TCL that integrates typicality, probabilities, and cognitive heuristics. TCL extends description logics with a typicality operator and probabilistic semantics to model prototypical properties. It also uses cognitive heuristics like head-modifier to identify plausible mechanisms for concept combination. The framework has been applied to generate novel content and classify emotions, with encouraging results explaining item-emotion associations for the deaf community.
Heterogeneous Proxytypes as a Unifying Cognitive Framework for Conceptual Rep...Antonio Lieto
This document summarizes Antonio Lieto's work on developing a cognitive framework called heterogeneous proxytypes for conceptual representation and reasoning in artificial systems. The framework incorporates multiple knowledge representations, including prototypes, exemplars, and theories. It allows different representations and reasoning mechanisms to be activated based on context. Lieto describes cognitive models that integrate heterogeneous proxytypes, like the DUAL-PECCS system, and evaluates them on commonsense reasoning tasks.
Functional and Structural Models of Commonsense Reasoning in Cognitive Archit...Antonio Lieto
The document provides an overview of functional and structural models of commonsense reasoning in cognitive architectures. It discusses several approaches to commonsense reasoning including semantic networks, frames, scripts, and default logic. It also discusses different levels of representation including conceptual spaces, typicality, and compositionality. The document proposes dual process models that integrate heterogeneous representations like prototypes and exemplars. It presents computational models like Dual PECCS and TCL that implement aspects of commonsense reasoning through integrated and connected representations.
Lieto - Book Presentation Cognitive Design for Artificial Minds (AGI Northwes...Antonio Lieto
The document discusses a book titled "Cognitive Design for Artificial Minds" by Antonio Lieto. It includes quotes from several professors praising the book for proposing a re-unification of artificial intelligence and cognitive science. The book explores connections between AI modeling techniques and cognitive science methods. It also provides an overview of cognitive architectures and argues that a biologically/cognitively inspired approach can help develop next generation AI systems beyond deep learning. The document discusses challenges in developing a standard model of cognition and the need for collaboration across the AI and cognitive science communities.
Cognitive Agents with Commonsense - Invited Talk at Istituto Italiano di Tecn...Antonio Lieto
Cognitive Agents with Commonsense - Invited Talk at Istituto Italiano di Tecnologia (IIT), I-Cog Initiative. https://www.facebook.com/icog.initiative/posts/129265685733532
Commonsense reasoning as a key feature for dynamic knowledge invention and co...Antonio Lieto
This document discusses commonsense reasoning and its importance for computational creativity and knowledge invention. It provides an overview of past AI and cognitive science approaches to commonsense reasoning such as semantic networks, frames, and default logic. It then presents the TCL (Typicality Description Logic) framework, which extends description logics with typicality, probabilities, and cognitive heuristics to model commonsense conceptual combination. The framework is applied to generate novel concepts to achieve goals and to dynamically classify multimedia content. Evaluations show it effectively reclassifies content and generates recommendations that users and experts find high quality.
Knowledge Capturing via Conceptual Reframing: A Goal-oriented Framework for K...Antonio Lieto
The document presents a goal-oriented framework called GOCCIOLA that can generate novel knowledge by recombining concepts in a dynamic way to solve problems. GOCCIOLA uses a logic called TCL that can reason about typical properties of concepts and their combinations. It evaluates plausible scenarios for combining concepts using probabilities and heuristics from cognitive semantics. GOCCIOLA was tested on a concept composition task and able to provide solutions to goals by suggesting new concept combinations. The system has applications in computational creativity and cognitive architectures.
Extending the knowledge level of cognitive architectures with Conceptual Spac...Antonio Lieto
Extending the knowledge level of cognitive architectures with Conceptual Spaces (+ a case study with Dual-PECCS: a hybrid knowledge representation system for common sense reasoning). Talk given at Stockholm, September 2016.
Conceptual Spaces for Cognitive Architectures: A Lingua Franca for Different ...Antonio Lieto
We claim that Conceptual Spaces offer a lingua franca that allows to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gardenfors [23] for defending the need of a conceptual, intermediate, representation level between
the symbolic and the sub-symbolic one. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatical representation and
reasoning in Cognitive Architectures
Computational Explanation in Biologically Inspired Cognitive Architectures/Sy...Antonio Lieto
Computational models of cognition can have explanatory power when they are structurally valid models of the natural systems that inspired them. The document discusses different approaches to modeling knowledge in cognitive architectures and humans. It analyzes how ACT-R, CLARION, and LIDA represent concepts, and suggests that humans likely use heterogeneous representations including prototypes, exemplars, and other conceptual structures. Models should account for this heterogeneity to better explain human cognition.
This document describes a case study using ontological representations and narrative to explore cultural heritage archives. The Labyrinth project uses an ontology modeling narrative elements like stories, actions, characters to allow users to navigate a digital archive. The ontology relates these narrative aspects to archive items. Reasoning over the ontology transfers narrative properties to items, allowing exploration by story or action. A user study will evaluate if this approach supports serendipitous discovery and new learning experiences within cultural heritage archives.
Riga2013 Symposium on Concepts and PerceptionAntonio Lieto
This document discusses the relationship between concepts and perception through the lens of dual process theory. It analyzes different perspectives on the nature of concepts, such as prototype theory and exemplar theory, and proposes that concepts involve both System 1 implicit processes and System 2 explicit processes. System 1 processes are related to perception and involve fast, automatic categorization based on prototypes. In contrast, System 2 processes are slower, more controlled processes like monotonic categorization. The document concludes that understanding the heterogeneous nature of concepts, including both System 1 and System 2 aspects, can provide insight into the complex relationship between concepts and perception.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Building RAG with self-deployed Milvus vector database and Snowpark Container...Zilliz
This talk will give hands-on advice on building RAG applications with an open-source Milvus database deployed as a docker container. We will also introduce the integration of Milvus with Snowpark Container Services.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Towards A Dual Process Approach to Computational Explanation in Human-Robot Social Interaction
1. Towards A Dual Process Approach to Computational
Explanation in Human-Robot Social Interaction
Agnese Augello, Ignazio Infantino, Antonio Lieto, Umberto Maniscalco,
Giovanni Pilato, Filippo Vella
ICAR-CNR, National Research Council, Palermo, Italy
Dipartimento di Informatica, University of Turin, Italy
IJCAI 2017 Workshop on Cognition and Artificial Intelligence for Human-Centred Design, 19 Aug. 2017, Melbourne,
Australia
3. “Explanatory Needs” are not New in AI
Cybernetics
Computational Cognitive Science
From Human to Artificial Cognition (and back)
4. Explainable AI - Nowadays
- The current request for Explainable AI (XAI) differs from the earlier notion of “explanation” in AI.
- AI is looking for systems able to provide a transparent account of the reasons determining their behaviour (in cases of both successful and unsuccessful output).
Problem: the adoption of current Machine Learning and Deep Learning techniques faces the classical problem of opacity in artificial neural networks (a problem that explodes in Deep Nets).
6. Opacity and Explanation
Clarification: “opacity” does not mean, in principle, “impossible to explain”.
Inputs can be removed or modified until the output changes in a way that is important to the user. This is a trial-and-error process: time consuming and very complicated in practice.
E.g. model-based neural networks (mid ’80s): their connections are parametrised to satisfy specific constraints implied by a putative (e.g. approximated) causal model.
There are also recent attempts to provide an interpretation of deep nets (e.g. Zhou et al. 2015), but the general problem remains largely unsolved.
7. Our Proposal
Since the adoption of deep ANNs is important for improving the performance of artificial systems but is problematic for solving the explanatory problem, we delegate the latter task to a second component:
- inspiration from the dual process theory of reasoning (Stanovich and West, 2001; Evans and Frankish 2009; Kahneman 2011);
- the two software components perform different types of reasoning.
8. Dual Process Reasoning
(Stanovich and West, 2000; Kahneman 2011)
In human cognition, Type 1 processes are executed fast and are not based on logical rules. Their outputs are then checked against slower, more logic-based deliberative processes (Type 2 processes).

Type 1 Processes            Type 2 Processes
Automatic                   Controllable
Parallel, Fast              Sequential, Slow
Pragmatic/contextualized    Logical/Abstract
Deep Nets as S1 systems     Ontologies as S2 systems
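The S1/S2 division of labour in the table above can be sketched as a simple dispatcher: a fast sub-symbolic classifier proposes a label, and a slower symbolic component checks it against explicit rules and produces an explanation. This is a minimal illustrative sketch, not the paper's implementation; all function names, features, and thresholds are invented for the example.

```python
# Minimal sketch of a dual-process pipeline: a fast Type 1 classifier
# proposes a label; a slower Type 2 symbolic checker validates it and
# produces a human-readable explanation. All names are illustrative.

def s1_classify(gesture_features):
    # Stand-in for the deep network: a fast, opaque guess.
    # Here we just threshold on velocity for illustration.
    return "aggressive" if gesture_features["velocity"] > 0.7 else "normal"

# Type 2 knowledge: explicit, inspectable rules (a toy "ontology").
S2_RULES = {
    "aggressive": lambda f: f["velocity"] > 0.7 and f["distance"] < 0.3,
    "normal":     lambda f: f["velocity"] <= 0.7,
}

def dual_process(gesture_features):
    label = s1_classify(gesture_features)            # fast, automatic
    consistent = S2_RULES[label](gesture_features)   # slow, controllable check
    explanation = (
        f"S1 proposed '{label}'; S2 rules "
        f"{'confirm' if consistent else 'reject'} it "
        f"(velocity={gesture_features['velocity']}, "
        f"distance={gesture_features['distance']})."
    )
    return label, consistent, explanation

label, ok, why = dual_process({"velocity": 0.9, "distance": 0.1})
```

The key design point mirrored here is that the Type 2 rules are inspectable, so the explanation can be read off them directly, while the Type 1 component is free to stay opaque.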
10. The Scenario
• Robotic Reception in a public office
welcoming visitors in the waiting room and
directing them to proper office rooms
• The robot must be able to discriminate the inappropriate behaviors of the visitors and act accordingly.
11. The Scenario
• The robot learns how to detect inappropriate and, in particular, aggressive behaviors by examining the postures and the gestures of people during a training phase.
• During the interaction, considering its expectations and its experience, it must be able to quickly recognize the exhibited social signs (S1 component).
• If required, the robot must be able to provide an explanatory account of some sort for this process of interpretation (S2 component).
12. The S1 System
• Deep networks can effectively be used for the processing and classification of sequences of data
• Long Short-Term Memory (LSTM)
– avoids the long-term dependency problem
– has a more complex cell structure
• Cell structure
14. The S1 System
• We have chosen to gradually stack LSTM layers, measuring the trend of the F1-score to determine the appropriate number of layers.
• Each LSTM layer is separated from the next one by
a Rectified Linear Unit function.
• Given a sequence length, we attempted to determine
how many neurons are needed for the
representation to be of good quality.
15. The S1 System
• Number of neurons in the LSTM layers
– set to 64, 128 or 256;
• Considered stacked LSTM levels
– one, two or three
• sliding window
– from 2 to 20
• The training has been performed for 10 epochs.
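The search described above (units in {64, 128, 256}, one to three stacked LSTM layers, sliding windows from 2 to 20, each run trained for 10 epochs) amounts to a small grid search. A stdlib-only sketch of enumerating those configurations, assuming an exhaustive grid (the slides do not show the actual training code):

```python
from itertools import product

# Hyperparameter grid from the slides: number of LSTM units, number of
# stacked LSTM layers, and sliding-window length over the input sequence.
UNITS = (64, 128, 256)
LAYERS = (1, 2, 3)
WINDOWS = range(2, 21)  # sliding window from 2 to 20

def configurations():
    """Enumerate every (units, layers, window) combination; each would be
    trained for 10 epochs and compared by F1-score."""
    return [
        {"units": u, "layers": l, "window": w, "epochs": 10}
        for u, l, w in product(UNITS, LAYERS, WINDOWS)
    ]

configs = configurations()
# 3 unit choices x 3 depth choices x 19 window lengths = 171 runs
```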
16. The S1 System
• A dataset of 20 different actions has been used to train the network (a subset of the Vicon Physical Action dataset)
• The actions of the dataset have been divided into
– “normal” behavior (Bowing, Clapping and Handshaking)
– “not friendly” behavior (Punching, Slapping and Frontkicking)
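Segmenting a recorded action sequence with a sliding window, and mapping each action to the binary normal/not-friendly label above, can be sketched as follows. The scalar "frames" are invented for illustration; the real dataset provides Vicon motion-capture channels.

```python
# Sliding-window segmentation of a frame sequence into fixed-length
# training samples, with the binary labeling used in the slides.
NORMAL = {"Bowing", "Clapping", "Handshaking"}
NOT_FRIENDLY = {"Punching", "Slapping", "Frontkicking"}

def label_of(action):
    if action in NORMAL:
        return 0          # "normal" behavior
    if action in NOT_FRIENDLY:
        return 1          # "not friendly" behavior
    raise ValueError(f"unknown action: {action}")

def windows(frames, size):
    """Slide a window of `size` frames one step at a time."""
    return [frames[i:i + size] for i in range(len(frames) - size + 1)]

# Toy sequence: 6 scalar 'frames' from a Punching recording.
frames = [0.1, 0.4, 0.9, 0.8, 0.3, 0.2]
samples = [(w, label_of("Punching")) for w in windows(frames, 4)]
```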
17. The S2 System
• The main perceptual differences between different classes of gestures (e.g. aggressive vs non-aggressive ones) are represented through an explicit ontological model (available at: http://www.di.unito.it/~lieto/ExpActOnto.html)
18. The S2 System
• Examples of ontological features considered to distinguish between these two classes of gestures are: the velocity of the gesture execution, the distance of the final gesture position from the body, etc.
• In other words: we tried to provide an explanatory account of the output of the opaque S1 component by using an a priori ontological model of a given situation
• The S2 component also allows modeling the differences between gestures. These models can be used to describe why a particular sign, e.g. one categorized as “aggressive”, has been additionally recognized, for example, as a “Punching” action.
19. Ex. Provided Explanation for the Detected
“Punching” Action
A “Punching” action is characterized by the fact of being executed at a certain velocity (X), categorized as “High Velocity”, and at a certain distance (Y) from the body, categorized as “Close Distance”, according to the ontology.
In addition to these traits, common to all the “Aggressive Actions”, the “Punching” action is also characterized by the fact of being executed with “Close Hands”.
20. Ex. Provided Explanation: “why” punching and
not slapping
The S2 component provides an additional model-based explanation of why the previous “Punching” cannot be classified, for example, as a “Slapping” (both are “Aggressive Actions”).
Also in this case, the fact that the detected body part executing the gesture is a “Close Hand” and not an “Open Hand” (as in the case of “Slapping”) represents a crucial element for explaining that categorization decision.
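The two explanation slides can be approximated by a small rule layer over ontological features: shared “Aggressive Action” traits plus a per-gesture discriminating feature yield both the positive explanation and the contrastive one. The feature names follow the slides; the encoding and values are an invented sketch, not the published ontology.

```python
# Toy encoding of the ontological model: each gesture class lists the
# feature values that characterize it. Shared aggressive traits plus a
# discriminating 'hand' feature separate Punching from Slapping.
AGGRESSIVE_TRAITS = {"velocity": "High Velocity", "distance": "Close Distance"}
GESTURES = {
    "Punching": {**AGGRESSIVE_TRAITS, "hand": "Close Hands"},
    "Slapping": {**AGGRESSIVE_TRAITS, "hand": "Open Hands"},
}

def explain(gesture):
    """Positive explanation: shared aggressive traits + specific ones."""
    traits = GESTURES[gesture]
    return (f"'{gesture}' is an Aggressive Action "
            f"(velocity: {traits['velocity']}, distance: {traits['distance']}) "
            f"executed with {traits['hand']}.")

def contrast(gesture, other):
    """Contrastive explanation: report the discriminating features only."""
    return {k: (GESTURES[gesture][k], GESTURES[other][k])
            for k in GESTURES[gesture]
            if GESTURES[gesture][k] != GESTURES[other][k]}

positive = explain("Punching")
difference = contrast("Punching", "Slapping")
```

Because the shared traits cancel out, the contrastive explanation reduces to exactly the discriminating feature, mirroring the “Close Hand” vs “Open Hand” argument on the slide.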
21. Upshot and Future Work
We sketched a preliminary account of a dual-process-based framework able to provide a partial explanation of the reasons driving a robotic system to some decisions in a task of gesture recognition in a social scenario.
As a future work we plan to evaluate in detail the feasibility of the
proposed framework with a Pepper robot interacting in a real environment.
We want to extend the level of detail of the possible explanations provided by such a framework by considering more complex scenarios and a multimodal interaction involving both visual and linguistic elements.
Finally, we plan to provide a tighter integration of the two software
components that, currently, operate in a relatively independent way.