This document discusses how combining probabilistic logical inference (PLN) with a nonlinear dynamical attention allocation system (ECAN) can help address the problem of combinatorial explosion in inference. It presents a simple example using a noisy version of the "smokes" problem where ECAN guides PLN's inference by focusing attention on surprising conclusions, allowing meaningful conclusions to be drawn with fewer inference steps. This demonstrates a cognitive synergy between logical reasoning and attention allocation that is hypothesized to be broadly valuable for artificial general intelligence.
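The interplay this abstract describes, attention deciding which inference steps are worth taking, can be sketched in miniature. The snippet below is a toy illustration only: the surprise measure, the truth-value arithmetic, and all names are assumptions made for illustration, not the actual OpenCog PLN or ECAN algorithms.

```python
import heapq

def attention_guided_inference(premises, rules, budget=10):
    """Toy forward chainer steered by an attention queue.

    premises: dict mapping ground facts to truth values in [0, 1]
    rules: list of (antecedent, consequent, strength) triples
    budget: maximum number of inference steps (attention is scarce)
    """
    tv = dict(premises)
    # Facts furthest from the "ignorant" prior of 0.5 count as most
    # surprising and get attention first (heapq pops smallest, so negate).
    queue = [(-abs(v - 0.5), f) for f, v in premises.items()]
    heapq.heapify(queue)
    steps = 0
    while queue and steps < budget:
        _, fact = heapq.heappop(queue)
        for ante, cons, strength in rules:
            if ante == fact:
                new = tv[fact] * strength      # crude deduction rule
                if new > tv.get(cons, 0.0):    # keep the strongest support
                    tv[cons] = new
                    heapq.heappush(queue, (-abs(new - 0.5), cons))
                steps += 1
    return tv

# A noisy "smokes" toy: smoking probabilistically implies cancer.
result = attention_guided_inference(
    premises={"smokes(anna)": 0.9},
    rules=[("smokes(anna)", "cancer(anna)", 0.6)],
)
print(result["cancer(anna)"])  # ≈ 0.54
```

Because the budget caps the number of steps, the queue order decides which conclusions ever get drawn, which is the synergy the abstract points at.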
Soft computing is a set of computational techniques that aim to mimic human-like reasoning and decision making. The main techniques are fuzzy logic, neural networks, evolutionary computing, machine learning, and probabilistic reasoning. Each technique has strengths and weaknesses, but they complement each other. When used together, soft computing techniques can solve complex problems that are difficult for traditional mathematical methods. The paper reviews these soft computing techniques and explores how they could be applied to problems in various domains.
A General Principle of Learning and its Application for Reconciling Einstein’... (Jeffrey Huang)
This document proposes a general principle of learning based on discovering intrinsic constraint models. It defines intelligence as the ability to understand the world by discovering intrinsic variables and constraints that have minimum entropy. The key goals of learning are to map observations to intrinsic variables and detect constraints to minimize a model entropy objective function. Discovering intrinsic variables is critical for maximizing prediction accuracy, learning efficiency, and generalization power. The principle provides a theoretical foundation for explaining and developing artificial general intelligence.
Ben Goertzel is the CEO of Novamente LLC and Biomind LLC, and CTO of Genescient Corp. He is also the co-founder of the OpenCog Project and Vice Chairman of Humanity+. OpenCog is an open source software framework and design for advanced artificial general intelligence (AGI). It uses a cognitive architecture based on multiple interacting cognitive processes that act on a shared knowledge representation. Key algorithms in OpenCog include MOSES for probabilistic evolutionary learning, probabilistic logic networks for declarative knowledge, and economic attention allocation for resource management. The goal is to develop AGI with a high degree of efficient pragmatic general intelligence relative to relevant goals and environments.
This document summarizes a study that compares fuzzy logic and neuro-fuzzy models for predicting direct current in motors. Fuzzy logic and neuro-fuzzy systems were used to model the relationship between motor torque, power, speed (inputs) and current (output). Both techniques were tested on a dataset of 507 samples. The neuro-fuzzy inference system (ANFIS) performed slightly better than the fuzzy logic system at predicting motor current, demonstrating the benefits of combining fuzzy logic with neural networks.
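The kind of fuzzy model such a study compares can be illustrated with a zero-order Sugeno system, the structure that ANFIS also tunes. The membership functions, torque ranges, and consequent constants below are invented for illustration and are not taken from the paper's 507-sample motor dataset.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_current(torque):
    """Zero-order Sugeno inference: each rule's firing strength weights a
    constant consequent, and the crisp output is their weighted average."""
    rules = [
        (tri(torque, 0, 0, 10), 2.0),    # low torque  -> about 2 A
        (tri(torque, 0, 10, 20), 5.0),   # mid torque  -> about 5 A
        (tri(torque, 10, 20, 20), 9.0),  # high torque -> about 9 A
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(sugeno_current(5.0))  # 3.5 (halfway between the low and mid rules)
```

ANFIS would learn the membership parameters and consequents from data instead of hand-picking them, which is where its slight edge over a hand-built fuzzy system comes from.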
ON SOFT COMPUTING TECHNIQUES IN VARIOUS AREAS (cscpconf)
Soft Computing refers to the science of reasoning, thinking and deduction that recognizes and uses the real-world phenomena of grouping, membership, and classification of various quantities under study. As such, it is an extension of natural heuristics and is capable of dealing with complex systems because it does not require strict mathematical definitions and distinctions for the system components. It differs from hard computing in that it is tolerant of imprecision, uncertainty and partial truth; in effect, the role model for soft computing is the human mind. The guiding principle of soft computing is: exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low solution cost. The main techniques in soft computing are evolutionary computing, artificial neural networks, fuzzy logic and Bayesian statistics. Each technique can be used separately, but a powerful advantage of soft computing is the complementary nature of the techniques: used together, they can produce solutions to problems that are too complex or inherently noisy to tackle with conventional mathematical methods. The applications of soft computing have shown two main advantages. First, it makes it possible to solve nonlinear problems for which mathematical models are not available. Second, it introduces human knowledge such as cognition, recognition, understanding and learning into the fields of computing, making it possible to construct intelligent systems such as autonomous self-tuning systems and automated design systems. This paper highlights various areas of soft computing techniques.
This document outlines the syllabus for an MTCSCS302 course on Soft Computing taught by Dr. Sandeep Kumar Poonia. The course covers topics including neural networks, fuzzy logic, probabilistic reasoning, and genetic algorithms. It is divided into five units: (1) neural networks, (2) fuzzy logic, (3) fuzzy arithmetic and logic, (4) neuro-fuzzy systems and applications of fuzzy logic, and (5) genetic algorithms and their applications. The goal of the course is to provide students with knowledge of soft computing fundamentals and approaches for solving complex real-world problems.
John Anderson was born in 1947 in Vancouver, British Columbia. He earned his PhD from Stanford University in 1972 and has been a professor at several universities, including Carnegie Mellon University since 1978. ACT* is a cognitive theory developed by Anderson that describes a spreading activation model of semantic memory combined with a production system for executing higher-level operations. It distinguishes three types of memory - declarative, procedural, and working memory - and three types of learning.
This document introduces soft computing and provides an agenda for the lecture. Soft computing is defined as a fusion of fuzzy logic, neural networks, evolutionary computing, and probabilistic computing to deal with uncertainty and imprecision. Hybrid systems combine different soft computing techniques for improved performance. The lecture will cover an introduction to soft computing, fuzzy computing, neural networks, evolutionary computing, and hybrid systems. References are also provided.
This document discusses practical emotional neural networks. It summarizes two existing approaches - brain emotional learning (BEL) based networks and emotional backpropagation (EmBP) based networks. BEL networks are inspired by the limbic system and amygdala in the brain, modeling fast emotional processing via short neural pathways. EmBP networks apply emotional states and appraisals to neural network learning. The document proposes a new limbic-based artificial emotional neural network (LiAENN) that models emotional situations like anxiety and confidence during learning. It applies anxious confident decayed brain emotional learning rules and achieves higher accuracy than BEL and EmBP networks on facial detection and emotion recognition tasks.
Symbolic-Connectionist Representational Model for Optimizing Decision Making ... (IJECEIAES)
Modeling higher-order cognitive processes like human decision making comes in three representational approaches: symbolic, connectionist, and symbolic-connectionist. Many connectionist neural network models have evolved over the decades for optimizing decision-making behaviors, and their agents are also in place. There have been attempts to implement symbolic structures within connectionist architectures with distributed representations. Our work aimed to propose an enhanced connectionist approach to optimizing decisions within the framework of a symbolic cognitive model. The action selection module of this framework is at the forefront of evolving intelligent agents through a variety of soft computing models. As a continuous effort, a Connectionist Cognitive Model (CCN) has been evolved by taking a traditional symbolic cognitive process model, LIDA, as inspiration for a feed-forward neural network model for optimizing decision-making behaviors in intelligent agents. Significant progress was observed when comparing its performance with other variants.
EMOTIONAL LEARNING IN A SIMULATED MODEL OF THE MENTAL APPARATUS (csandit)
How a human being learns is a broad field and not fully understood to date. This paper presents an alternative attempt to get closer to answering how human beings learn and how learning relates to emotions. To that end, the cognitive architecture of the project “Simulation of Mental Apparatus and Applications (SiMA)” is used to fulfill two tasks: one is to answer the question above, and the other is to extend the functional model of the mental apparatus with learning. For that purpose, the functions of the model are analyzed in detail for their potential to be extended with a learning ability. The focus of the analysis lies on emotions and their impact on the ability to change memories in the model so as to produce a different behavior than without learning.
1. The document discusses several approaches to cognitive science including connectionism, neural networks, supervised and unsupervised learning, Hebbian learning, the delta rule, backpropagation, and responses to Descartes from Gelernter, Penrose, and Pinker.
2. Connectionism models mental phenomena using interconnected networks of simple units like neural networks. Learning involves adjusting connection weights between neurons.
3. Supervised learning uses input-output pairs to adjust weights to minimize error, while unsupervised learning only uses inputs to find patterns in the data.
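The delta rule named in point 1 is the simplest concrete case of adjusting connection weights to minimize error. Below is a minimal sketch for a single linear unit; the toy dataset and learning-rate choice are assumptions for illustration.

```python
def delta_rule_train(samples, lr=0.1, epochs=200):
    """Train one linear unit with the delta rule:
    w <- w + lr * (target - output) * input."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sum(wi * xi for wi, xi in zip(w, x)) + b  # unit output
            err = target - y                              # supervised error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn y = 2x from four noiseless input-output pairs.
samples = [([x], 2.0 * x) for x in (0.0, 1.0, 2.0, 3.0)]
w, b = delta_rule_train(samples)
print(w[0])  # close to 2.0
```

Backpropagation generalizes this same error-driven update to multi-layer networks by propagating the error term backwards through the weights.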
Army Study: Ontology-based Adaptive Systems of Cyber Defense (RDECOM)
The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.
A bayesian abduction model for extracting most probable evidence to support s... (ijaia)
In this paper, we discuss the development of a Bayesian Abduction Model of Sensemaking Support (BAMSS) as a tool for information fusion to support prospective sensemaking. Currently, BAMSS can identify the Most Probable Explanation from a Bayesian Belief Network (BBN) and extract the prevalent conditional probability values to help sensemaking analysts understand the cause and effect of adversary information. Actual vignettes from databases of modern insurgencies and asymmetric warfare are used to validate the performance of BAMSS. BAMSS computes the posterior probability of the network edges and performs information fusion using a clustering algorithm. In the model, the friendly force commander uses the adversary information to prospectively make sense of the enemy’s intent. Sensitivity analyses were used to confirm the robustness of BAMSS in generating the Most Probable Explanations from a BBN through abductive inference. The simulation results demonstrate the utility of BAMSS as a computational tool to support sensemaking.
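The abductive Most Probable Explanation query at the core of such a model can be illustrated on a tiny belief network. The variables and probability numbers below are invented placeholders, not BAMSS's actual networks or data.

```python
# A tiny belief network with made-up numbers: one hidden cause ("attack")
# and two observed effects ("signal" and "chatter").
p_attack = 0.2
p_signal_given = {True: 0.8, False: 0.1}   # P(signal  | attack)
p_chatter_given = {True: 0.7, False: 0.3}  # P(chatter | attack)

def joint(attack, signal, chatter):
    """Joint probability of one full assignment of the network."""
    p = p_attack if attack else 1.0 - p_attack
    p *= p_signal_given[attack] if signal else 1.0 - p_signal_given[attack]
    p *= p_chatter_given[attack] if chatter else 1.0 - p_chatter_given[attack]
    return p

def mpe(signal, chatter):
    """Most Probable Explanation of the hidden variable, by enumeration:
    the hypothesis maximizing the joint probability with the evidence."""
    return max([True, False], key=lambda attack: joint(attack, signal, chatter))

print(mpe(signal=True, chatter=True))  # True: an attack best explains both cues
```

Real systems replace this brute-force enumeration with exact or approximate BBN inference, since the number of assignments grows exponentially with the hidden variables.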
This document provides an introduction to artificial intelligence and cognitive science. It defines intelligence and AI, discusses different approaches to AI like thinking like humans, thinking rationally, acting like humans and acting rationally. It also summarizes the history of AI from early neural networks to modern applications. Key concepts covered include the Turing test, knowledge representation, rational agents, intelligent environments and knowledge-based systems.
This document summarizes a two-stage method for 3D object recognition using an associative memory. In the first stage, key features are used to access hypotheses for an object's identity and configuration from an associative memory. These hypotheses are then fed into a second-stage associative memory that accumulates evidence to estimate the likelihood of each hypothesis based on feature statistics in a database. The method is robust to occlusion and clutter since it relies on local features rather than global properties, and allows objects to be added automatically through visual exploration from different views.
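The retrieve-then-accumulate scheme this summary describes can be sketched as an indexed lookup followed by evidence voting. The feature index and object names below are a made-up stand-in for the paper's associative memories, not its actual representation.

```python
from collections import Counter

# Stage 1: a toy associative index from local features to hypotheses.
feature_index = {
    "corner_A": ["mug", "box"],
    "curve_B":  ["mug"],
    "edge_C":   ["box", "mug"],
}

def recognize(features):
    """Two-stage recognition: features retrieve candidate hypotheses,
    then a second accumulator stage tallies supporting evidence."""
    votes = Counter()
    for f in features:
        for hyp in feature_index.get(f, []):
            votes[hyp] += 1  # evidence accumulation
    return votes.most_common(1)[0][0] if votes else None

print(recognize(["corner_A", "curve_B"]))  # mug
```

Because each local feature votes independently, occluding some features (dropping "edge_C" here) degrades the tally gracefully rather than breaking recognition outright.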
The document discusses neural computing and artificial neural networks. It provides an overview of several key topics, including the aims of investigating biological nervous systems and designing artificial systems that emulate biological principles. The document outlines several planned lectures on topics like the differences between natural and artificial intelligence, neurobiology, neural processing and signaling, stochasticity in neural codes, neural operators for vision, cognition and evolution, and artificial neural networks. It also lists some reference books and provides examples for exercises on neural computing concepts.
DiscoversNet is an adaptive simulation-based learning environment for designing neural networks. It contains a knowledge-based neural network consultant module to provide educational guidance during the neural network design process. The consultant module represents domain knowledge using a knowledge-based neural network and provides advice to users based on their current understanding level. The system allows users to build neural network simulators through interactive manipulation of neural network components and receives feedback to correct misconceptions.
A Proposition on Memes and Meta-Memes in Computing for Higher ... (butest)
This document proposes a framework for memetic computing with higher-order learning capabilities. It discusses how current computational intelligence techniques have limitations and how problem complexity is outpacing algorithm development. The framework is inspired by how the brain learns and generalizes solutions across different problems through a hierarchical architecture of processing units (memes and meta-memes) and memory. This brain-inspired approach is proposed as a new class of memetic computing that can autonomously learn patterns across multiple temporal and spatial scales to better solve complex problems.
Vertical integration of computational architectures - the mediator problem (Yehor Churilov)
1. The document discusses the problem of integrating computational architectures for artificial intelligence. There is a major gap between low-level sensory representations and higher-level cognitive functions that cannot be bridged by existing two-tier architectures alone.
2. It proposes that a conceptually independent architectural layer is needed to act as a mediator between the different representation levels. This would help address issues around increasing cognitive abilities that demand greater integration across architectures.
3. A second problem is the height of the integration platform - a fully integrated platform is needed at a higher level than currently exists for modular hybrid systems. The document outlines approaches to solving the mediator problem and facilitating greater platform integration through more unified computing methods.
The document discusses the concepts of soft computing and artificial neural networks. It defines soft computing as an emerging approach to computing that parallels the human mind in dealing with uncertainty and imprecision. Soft computing consists of fuzzy logic, neural networks, and genetic algorithms. Neural networks are simplified models of biological neurons that can learn from examples to solve problems. They are composed of interconnected processing units, learn via training, and can perform tasks like pattern recognition. The document outlines the basic components and learning methods of artificial neural networks.
Soft computing is an emerging approach to computing that aims to solve computationally hard problems using inexact solutions that are tolerant of imprecision, uncertainty, partial truth, and approximation. It uses techniques like fuzzy logic, neural networks, evolutionary computation, and probabilistic reasoning to model human-like decision making. Unlike hard computing which requires precise modeling and solutions, soft computing is well-suited for real-world problems where ideal models are not available. The key constituents of soft computing are fuzzy logic, evolutionary computation, neural networks, and machine learning.
Application of soft computing techniques in electrical engineering (Souvik Dutta)
This document discusses the application of soft computing techniques in electrical engineering. It begins with an introduction to soft computing and its key elements including fuzzy logic, neural networks, evolutionary computation, machine learning and probabilistic reasoning. It then discusses hard computing versus soft computing, defining hard computing as requiring precise analytical models and definitions, while soft computing can handle imprecision. The document outlines several soft computing techniques - neural networks, fuzzy logic, and their applications in power system economic load dispatch and generation level determination to solve complex, non-linear optimization problems in electrical engineering. In conclusion, soft computing provides alternatives to traditional techniques for electrical engineering problems involving uncertainty.
An Approach of Human Emotional States Classification and Modeling from EEG (CSCJournals)
In this paper, a new approach is proposed to model emotional states from EEG signals with mathematical expressions based on wavelet analysis and the trust-region algorithm. EEG signals are collected in different emotional states, and salient features are extracted through temporal and spectral analysis to indicate the dispersion that will unify different states. The maximum classification accuracy of emotion is obtained with DWT analysis rather than FFT or statistical analysis, so DWT analysis is considered best suited for mathematical modeling of human emotions. The emotional states are modeled with different mathematical expressions using the coefficients obtained from the trust-region algorithm, which can be compared with the sub-band wavelet coefficients of the different states. The proposed approach is verified with the adjusted R-square percentage and the sum of squared errors. The adjusted R-square percentages of the mathematically modeled states are 78.4% for relaxation and 77.18% for motor action, while for memory, pleasantness, enjoying music and fear they are 93%, 95.6%, 97.7% and 91.5% respectively. The proposed system is reliable and can be applied for practical real-time implementation of human-emotion-based systems.
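The DWT features this abstract favors over FFT can be illustrated with one level of the Haar transform, the simplest discrete wavelet. The signal values and the sub-band energy feature below are illustrative assumptions, not the paper's EEG pipeline.

```python
import math

def haar_dwt(signal):
    """One level of the Haar DWT: scaled averages (approximation) and
    scaled differences (detail) of consecutive sample pairs."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def band_energy(coeffs):
    """Sub-band energy, a common scalar feature fed to classifiers."""
    return sum(c * c for c in coeffs)

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_dwt(sig)
print(band_energy(detail))  # ≈ 6.0
```

Applying the transform recursively to the approximation band yields the multi-level sub-band coefficients that wavelet-based EEG work typically compares across emotional states.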
The document discusses analogical reasoning and case-based reasoning. It provides an overview of research in these areas including structure mapping theory, models of analogical processing like SME and MAC/FAC, and case-based reasoning systems. It proposes an analogy ontology to integrate analogical processing and first-principles reasoning by providing a formal representation of analogy concepts and results.
Ivah explains to her family how she knows so much about Tristan. She reveals that since she was 15, she has been visiting a magical place called Peerless Park in her dreams. There, she met an elf named Naenae and learned about Tristan's past and abilities. Both Adam and Ivah have visited this place in dreams, though Adam met different beings there. Ivah's visits explain how she gained secret knowledge about Tristan and his dangerous mother, who has a green glow and uses her powers for evil.
1) Sustainable development requires an integrated systems approach to planning that considers interdependencies between sectors like water, energy, infrastructure, and the environment.
2) Tools like the CLEW model and iSDG model can help analyze cross-sectoral linkages and support sustainable development planning.
3) A systems of systems approach is needed to understand risks from critical infrastructure failures and interdependencies between sectors like energy, water, and transportation.
4) An integrated governance system with high-level inter-ministerial coordination and an integrated information system using national accounts data can support an effective integrated systems approach.
This document introduces soft computing and provides an agenda for the lecture. Soft computing is defined as a fusion of fuzzy logic, neural networks, evolutionary computing, and probabilistic computing to deal with uncertainty and imprecision. Hybrid systems combine different soft computing techniques for improved performance. The lecture will cover an introduction to soft computing, fuzzy computing, neural networks, evolutionary computing, and hybrid systems. References are also provided.
This document discusses practical emotional neural networks. It summarizes two existing approaches - brain emotional learning (BEL) based networks and emotional backpropagation (EmBP) based networks. BEL networks are inspired by the limbic system and amygdala in the brain, modeling fast emotional processing via short neural pathways. EmBP networks apply emotional states and appraisals to neural network learning. The document proposes a new limbic-based artificial emotional neural network (LiAENN) that models emotional situations like anxiety and confidence during learning. It applies anxious confident decayed brain emotional learning rules and achieves higher accuracy than BEL and EmBP networks on facial detection and emotion recognition tasks.
Symbolic-Connectionist Representational Model for Optimizing Decision Making ...IJECEIAES
Modeling higher order cognitive processes like human decision making come in three representational approaches namely symbolic, connectionist and symbolic-connectionist. Many connectionist neural network models are evolved over the decades for optimizing decision making behaviors and their agents are also in place. There had been attempts to implement symbolic structures within connectionist architectures with distributed representations. Our work was aimed at proposing an enhanced connectionist approach of optimizing the decisions within the framework of a symbolic cognitive model. The action selection module of this framework is forefront in evolving intelligent agents through a variety of soft computing models. As a continous effort, a Connectionist Cognitive Model (CCN) had been evolved by bringing a traditional symbolic cognitive process model proposed by LIDA as an inspiration to a feed forward neural network model for optimizing decion making behaviours in intelligent agents. Significanct progress was observed while comparing its performance with other varients.
EMOTIONAL LEARNING IN A SIMULATED MODEL OF THE MENTAL APPARATUScsandit
How a human being learns is a wide field and not fully understood until now. This paper should give an alternative attempt to get closer to the answer how human beings learn something and what the relation to emotions is. Therefore, the cognitive architecture of the project “Simulation of Mental Apparatus and Applications (SiMA)” is used to fulfill two tasks. One is to give an answer to the question above and the other one is to enhance the functional model of the mental apparatus with learning. For that reason, the functions of the model are analyzed in detail for their ability to enhance them with a learning ability. The focus of the analysis lay on emotions and their impact on the ability to change memories in the model to determine a different behavior than without learning.
1. The document discusses several approaches to cognitive science including connectionism, neural networks, supervised and unsupervised learning, Hebbian learning, the delta rule, backpropagation, and responses to Descartes from Gelernter, Penrose, and Pinker.
2. Connectionism models mental phenomena using interconnected networks of simple units like neural networks. Learning involves adjusting connection weights between neurons.
3. Supervised learning uses input-output pairs to adjust weights to minimize error, while unsupervised learning only uses inputs to find patterns in the data.
Army Study: Ontology-based Adaptive Systems of Cyber DefenseRDECOM
The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.
A bayesian abduction model for extracting most probable evidence to support s...ijaia
In this paper, we discuss the development of a Bayesian Abduction Model of Sensemaking Support (BAMSS) as a tool for information fusion to support prospective sensemaking. Currently, BAMSS can identify the Most Probable Explanation from a Bayesian Belief Network (BBN) and extract the prevalent conditional probability values to help the sensemaking analysts to understand the cause-effect of the adversary information. Actual vignettes from databases of modern insurgencies and asymmetry warfare are used to validate the performance of BAMSS. BAMSS computes the posterior probability of the network
edges and performs information fusion using a clustering algorithm. In the model, the friendly force commander uses the adversary information to prospectively make sense of the enemy’s intent. Sensitivity
analyses were used to confirm the robustness of BAMSS in generating the Most Probable Explanations from a BBN through abductive inference. The simulation results demonstrate the utility of BAMSS as a computational tool to support sense making
This document provides an introduction to artificial intelligence and cognitive science. It defines intelligence and AI, discusses different approaches to AI like thinking like humans, thinking rationally, acting like humans and acting rationally. It also summarizes the history of AI from early neural networks to modern applications. Key concepts covered include the Turing test, knowledge representation, rational agents, intelligent environments and knowledge-based systems.
This document summarizes a two-stage method for 3D object recognition using an associative memory. In the first stage, key features are used to access hypotheses for an object's identity and configuration from an associative memory. These hypotheses are then fed into a second-stage associative memory that accumulates evidence to estimate the likelihood of each hypothesis based on feature statistics in a database. The method is robust to occlusion and clutter since it relies on local features rather than global properties, and allows objects to be added automatically through visual exploration from different views.
The document discusses neural computing and artificial neural networks. It provides an overview of several key topics, including the aims of investigating biological nervous systems and designing artificial systems that emulate biological principles. The document outlines several planned lectures on topics like the differences between natural and artificial intelligence, neurobiology, neural processing and signaling, stochasticity in neural codes, neural operators for vision, cognition and evolution, and artificial neural networks. It also lists some reference books and provides examples for exercises on neural computing concepts.
DiscoverNet is an adaptive simulation-based learning environment for designing neural networks. It contains a knowledge-based neural network consultant module that provides educational guidance during the neural network design process. The consultant module represents domain knowledge using a knowledge-based neural network and provides advice to users based on their current understanding level. The system allows users to build neural network simulators through interactive manipulation of neural network components and provides feedback to correct misconceptions.
A Proposition on Memes and Meta-Memes in Computing for Higher ...butest
This document proposes a framework for memetic computing with higher-order learning capabilities. It discusses how current computational intelligence techniques have limitations and how problem complexity is outpacing algorithm development. The framework is inspired by how the brain learns and generalizes solutions across different problems through a hierarchical architecture of processing units (memes and meta-memes) and memory. This brain-inspired approach is proposed as a new class of memetic computing that can autonomously learn patterns across multiple temporal and spatial scales to better solve complex problems.
Vertical integration of computational architectures - the mediator problemYehor Churilov
1. The document discusses the problem of integrating computational architectures for artificial intelligence. There is a major gap between low-level sensory representations and higher-level cognitive functions that cannot be bridged by existing two-tier architectures alone.
2. It proposes that a conceptually independent architectural layer is needed to act as a mediator between the different representation levels. This would help address issues around increasing cognitive abilities that demand greater integration across architectures.
3. A second problem is the height of the integration platform - a fully integrated platform is needed at a higher level than currently exists for modular hybrid systems. The document outlines approaches to solving the mediator problem and facilitating greater platform integration through more unified computing methods.
The document discusses the concepts of soft computing and artificial neural networks. It defines soft computing as an emerging approach to computing that parallels the human mind in dealing with uncertainty and imprecision. Soft computing consists of fuzzy logic, neural networks, and genetic algorithms. Neural networks are simplified models of biological neurons that can learn from examples to solve problems. They are composed of interconnected processing units, learn via training, and can perform tasks like pattern recognition. The document outlines the basic components and learning methods of artificial neural networks.
Soft computing is an emerging approach to computing that aims to solve computationally hard problems using inexact solutions that are tolerant of imprecision, uncertainty, partial truth, and approximation. It uses techniques like fuzzy logic, neural networks, evolutionary computation, and probabilistic reasoning to model human-like decision making. Unlike hard computing which requires precise modeling and solutions, soft computing is well-suited for real-world problems where ideal models are not available. The key constituents of soft computing are fuzzy logic, evolutionary computation, neural networks, and machine learning.
Application of soft computing techniques in electrical engineeringSouvik Dutta
This document discusses the application of soft computing techniques in electrical engineering. It begins with an introduction to soft computing and its key elements including fuzzy logic, neural networks, evolutionary computation, machine learning and probabilistic reasoning. It then discusses hard computing versus soft computing, defining hard computing as requiring precise analytical models and definitions, while soft computing can handle imprecision. The document outlines several soft computing techniques - neural networks, fuzzy logic, and their applications in power system economic load dispatch and generation level determination to solve complex, non-linear optimization problems in electrical engineering. In conclusion, soft computing provides alternatives to traditional techniques for electrical engineering problems involving uncertainty.
An Approach of Human Emotional States Classification and Modeling from EEGCSCJournals
In this paper, a new approach is proposed to model emotional states from EEG signals with mathematical expressions based on wavelet analysis and the trust-region algorithm. EEG signals are collected in different emotional states, and salient features are extracted through temporal and spectral analysis to indicate the dispersion that will unify different states. The maximum classification accuracy of emotion is obtained for DWT analysis rather than FFT and statistical analysis, so DWT analysis is considered best suited for mathematical modeling of human emotions. The emotional states are modeled with different mathematical expressions using the coefficients obtained from the trust-region algorithm, which can be compared with the sub-band wavelet coefficients of the different states. The proposed approach is verified with the adjusted R-square percentage and the sum of squared errors. The adjusted R-square percentages of the mathematically modeled states are 78.4% for relaxation and 77.18% for motor action, while for memory, pleasant, enjoying music and fear they are 93%, 95.6%, 97.7% and 91.5% respectively. The proposed system is reliable and can be applied for practical real-time implementation of human-emotion-based systems.
The document discusses analogical reasoning and case-based reasoning. It provides an overview of research in these areas including structure mapping theory, models of analogical processing like SME and MAC/FAC, and case-based reasoning systems. It proposes an analogy ontology to integrate analogical processing and first-principles reasoning by providing a formal representation of analogy concepts and results.
Ivah explains to her family how she knows so much about Tristan. She reveals that since she was 15, she has been visiting a magical place called Peerless Park in her dreams. There, she met an elf named Naenae and learned about Tristan's past and abilities. Both Adam and Ivah have visited this place in dreams, though Adam met different beings there. Ivah's visits explain how she gained secret knowledge about Tristan and his dangerous mother, who has a green glow and uses her powers for evil.
1) Sustainable development requires an integrated systems approach to planning that considers interdependencies between sectors like water, energy, infrastructure, and the environment.
2) Tools like the CLEW model and iSDG model can help analyze cross-sectoral linkages and support sustainable development planning.
3) A systems of systems approach is needed to understand risks from critical infrastructure failures and interdependencies between sectors like energy, water, and transportation.
4) An integrated governance system with high-level inter-ministerial coordination and an integrated information system using national accounts data can support an effective integrated systems approach.
This document presents the 2010 activity calendar of the Mendiko Lagunak association. It includes mountaineering and ski courses in January, February and March; a trip to Mallorca over Easter Week in March and April; hikes in May and June; activities in the Pyrenees in July; festivals in August; and the final stage of the Transcantábrica in September and October. It also includes sessions on nature and the mountains in November.
This document discusses phenomenological study as a qualitative research methodology. Phenomenological study attempts to understand people's perceptions and perspectives of a particular situation through unstructured interviews with individuals who have direct experience with the phenomenon being studied. The interviews are analyzed to identify common themes in experiences despite diversity. The final result is a general description of the phenomenon as seen through the eyes of the participants.
The document outlines a roadmap for an online learning platform focused on improving individualized learning, collaboration, data aggregation, and school autonomy. Key areas of focus include allowing schools more customization over courses, users, and policies, enhancing safety and communication controls, improving parent engagement tools, updating the user interface, expanding sharing capabilities, and releasing new content and dashboard features. An application programming interface will also be opened up to enable third-party extensions and applications on the platform.
The document discusses simple and multiple linear regression analysis for determining the relationship between dependent and independent variables. The SPSS output shows that physics scores can explain 86.4% of the variation in chemistry scores.
The document discusses swelling soils in Colorado, including their geology, effects on construction and landscaping, and how to inspect for issues related to swelling soils. Some key points are:
- Clay-rich soils and bedrock in Colorado can absorb water and swell, potentially damaging structures.
- Special construction methods like deep foundations or subsurface drainage systems are needed to mitigate the effects of swelling soils.
- Landscaping should aim to reduce excess water near foundations through choices like plant selection and irrigation practices.
- Inspections of homes in swelling soil areas should evaluate the soil report, design/construction methods used, and any existing damage.
The document summarizes HCTM Technical Campus, a leading engineering and management institution in India. It provides an overview of the institution's programs, accreditations, rankings, and focus on developing well-rounded graduates prepared for global careers through its rigorous academic programs and unique Professional Development Program. The campus prioritizes corporate-academic linkages, international faculty, and extracurricular activities to develop students' critical thinking, leadership, and work-life balance skills.
The document provides release dates and brief descriptions for 12 Xbox 360 games coming out between January and February 2010, including Darksiders, Bayonetta, Divinity II: Ego Draconis, Army of Two: The 40th Day, Dark Void, Mass Effect 2, BioShock 2, Dante's Inferno, Aliens vs. Predator, Dynasty Warriors: Strikeforce, and Tom Clancy's Splinter Cell Conviction.
Aspect oriented a candidate for neural networks and evolvable softwareLinchuan Wang
This paper was written in 2004 and was accepted by the WAOSD 2004 workshop:
https://people.cs.kuleuven.be/~dirk.craeynest/ada-belgium/events/04/040927-sefm-waosd.html
The metadata can be found in A Bibliography of Aspect-Oriented Software Development, Version 2.0.
New Generation Routing Protocol over Mobile Ad Hoc Wireless Networks based on...ijasuc
There is a vast amount of research literature available on route finding and link establishment in MANET protocols based on various concepts such as “pro-active”, “reactive”, “power awareness”, “cross-layering” etc. Most of these techniques are rather restrictive, taking into account only a few of the several aspects that go into effective route establishment. When we look at practical implementations of MANETs, we have to take the various factors into account in totality, not in isolation. The several factors that decide and influence routing have to be considered as a whole in the difficult task of finding the best solution in route finding and optimization. The inputs to the system are manifold and apparently unrelated. Most of the parameters are imprecise or non-crisp in nature. This uncertainty and imprecision lead one to think that intelligent routing techniques are essential in evolving robust and dependable solutions to route finding. The obvious method by which this can be achieved is the deployment of soft computing techniques such as neural nets, fuzzy logic and genetic algorithms. Neural networks help us solve the complex problem of transforming inputs to outputs without a priori knowledge of the relationship between inputs and outputs. Fuzzy logic helps us deal with imprecise and ill-conditioned data. Genetic algorithms help us select the best possible solution from the solution space in an optimal sense. Our paper seeks to explore new horizons in this direction. The results of our experimentation have been very satisfactory, and we have achieved the goal of optimal route finding to a large extent. There is, of course, considerable room for further refinements.
Capital market applications of neural networks etc23tino
The document provides an overview of capital market applications of neural networks, fuzzy logic, and genetic algorithms that have been studied in academic literature. It reviews studies that use these techniques for market forecasting, trading rules, option pricing, bond ratings, and portfolio construction. For market forecasting specifically, several studies are described that use neural networks and neuro-fuzzy systems to predict stock market indexes and interest rates, finding they often outperform traditional econometric models.
Means-ends analysis (MEA) is a problem-solving technique used in AI to limit search. It works by choosing an action at each step to reduce the difference between the current and goal states, applying the action recursively until the goal is reached. Early systems like GPS and Prodigy used MEA by providing knowledge of how differences map to actions. MEA improves search by focusing on actual differences rather than brute force methods.
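The difference-reduction loop described above can be sketched as follows; the toy domain, the operator encoding, and the `means_ends` helper are illustrative assumptions, not taken from GPS or Prodigy:

```python
# Minimal means-ends analysis sketch. A state and a goal are frozensets of
# facts; each operator is (name, preconditions, added facts). At each step
# we pick an outstanding difference (a goal fact missing from the state),
# choose an operator that reduces it, recursively achieve that operator's
# preconditions, then handle any remaining differences.

def means_ends(state, goal, operators, depth=10):
    if depth == 0:
        return None          # give up on overly deep recursions
    if goal <= state:
        return []            # no differences left: empty plan
    for diff in sorted(goal - state):
        for name, pre, add in operators:
            if diff in add:
                sub = means_ends(state, pre, operators, depth - 1)
                if sub is None:
                    continue
                # Simplification: assume the sub-plan achieved exactly `pre`.
                new_state = state | pre | add
                rest = means_ends(new_state, goal, operators, depth - 1)
                if rest is not None:
                    return sub + [name] + rest
    return None

# Toy domain (hypothetical): make tea.
ops = [
    ("boil_water", frozenset({"have_water"}), frozenset({"hot_water"})),
    ("fetch_water", frozenset(), frozenset({"have_water"})),
    ("steep_tea", frozenset({"hot_water"}), frozenset({"tea"})),
]
print(means_ends(frozenset(), frozenset({"tea"}), ops))
# → ['fetch_water', 'boil_water', 'steep_tea']
```

Note how the search is driven entirely by the differences actually present, rather than by enumerating all action sequences.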
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ...ijaia
Objects or structures that are regular take uniform dimensions. Based on the concepts of regular models, our previous research work developed a system of regular ontology that models learning structures in a multiagent system for uniform pre-assessments in a learning environment. This regular ontology has led to the modelling of a classified-rules learning algorithm that predicts the actual number of rules needed for inductive learning processes and decision making in a multiagent system. But not all processes or models are regular. Thus this paper presents a system of polynomial equations that can estimate and predict the required number of rules of a non-regular ontology model given some defined parameters.
What is meant by deep learning?
Deep learning is a subset of machine learning, essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain (albeit far from matching its ability), allowing them to "learn" from large amounts of data.
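As a concrete illustration (a minimal sketch, not any particular framework's API), here is a forward pass through a network with an input layer, one hidden layer and an output layer; the layer sizes and fixed random weights are arbitrary assumptions:

```python
import numpy as np

# Input layer (4 features) -> hidden layer (8 ReLU units) -> output layer
# (1 sigmoid unit). The weights are fixed random values for illustration;
# in a real system they would be learned from large amounts of data.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward(x):
    h = np.maximum(0.0, x @ W1)             # ReLU hidden activations
    return 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output in (0, 1)

y = forward(np.ones(4)).item()
print(y)  # a single probability-like score between 0 and 1
```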
This document discusses an integrated approach to ontology development methodology and provides a case study using a shopping mall domain. It begins by reviewing existing ontology development methodologies and identifying their pitfalls. An integrated methodology is then proposed which aims to reduce these pitfalls. The key steps in the proposed methodology are: 1) capturing motivating user scenarios or keywords, 2) generating formal/informal questions and answers from the scenarios, 3) extracting terms and constraints, and 4) building the ontology using a top-down approach. The methodology is applied to developing an ontology for a shopping mall domain to provide multilingual information to visitors.
This document provides a critical review of recurrent neural networks for sequence learning. It begins with an abstract summarizing the paper. It then discusses why recurrent neural networks are well-suited for modeling sequential data compared to other models like feedforward neural networks and Markov models. Specifically, it notes that RNNs can capture long-range temporal dependencies, unlike models with a finite context window. It also explains that RNNs can represent a vast number of states using real-valued activations, unlike discrete state Markov models.
Intelligent agents in ontology-based applicationsinfopapers
The document describes the development of an intelligent agent using JADE that is linked to a knowledge base system implemented with Protege and Algernon. The agent delivers useful information to users from the web or other agents based on their preferences. The knowledge base contains ontologies defined in Protege and facts that can be queried using if-then rules in Algernon. The example application was developed in Java Studio Creator to demonstrate an intelligent information agent.
A DOMAIN INDEPENDENT APPROACH FOR ONTOLOGY SEMANTIC ENRICHMENTcscpconf
Automatic ontology enrichment consists of automatically adding new concepts and/or new relations to an initial ontology built manually using basic domain knowledge. Concretely, enrichment means first extracting concepts and relations from textual sources and then putting them in their right places in the initial ontology. However, the main issue in that process is how to preserve the coherence of the ontology after this operation. For this purpose, we consider the semantic aspect in the enrichment process by using similarity techniques between terms. Contrary to other approaches, our approach is domain independent and the enrichment process is based on a semantic analysis. Another advantage of our approach is that it takes into account both types of relations, taxonomic and non-taxonomic ones.
Ontologies are being used to organize information in many domains such as artificial intelligence, information science, the semantic web, and library science. Ontologies of an entity holding different information can be merged to create more knowledge of that particular entity. Ontologies today are powering more accurate search and retrieval on websites such as Wikipedia. As we move towards Web 3.0, also termed the semantic web, ontologies will play an even more important role.
Ontologies are represented in various forms such as RDF, RDFS, XML, and OWL. Querying ontologies can yield basic information about an entity. This paper proposes an automated method for ontology creation, using concepts from NLP (Natural Language Processing), Information Retrieval and Machine Learning. Concepts drawn from these domains help in designing more accurate ontologies, represented here in the XML format. The paper uses document classification algorithms to assign labels to documents, document similarity to cluster documents similar to the input document together, and summarization to shorten the text while keeping the important terms essential to making the ontology. The module is constructed using the Python programming language and NLTK (Natural Language Toolkit). The ontologies created in XML will convey to a lay person the definitions of the important terms and their lexical relationships.
The document describes DiscoverNet, an adaptive simulation-based learning environment for designing neural networks. DiscoverNet allows users to build neural network simulators by manipulating neural objects. It includes a consultant module to provide educational guidance during the design process. The consultant reproduces the user's initial neural network design and identifies any issues to help the user correct misconceptions. Experimental results show that DiscoverNet supports learning and helps users develop an understanding of neural network mechanisms.
This dissertation examines using neural networks to predict financial time series, specifically the S&P Mib Index of the Milan stock exchange. The document provides background on neural networks, including their history and development from simple linear models to modern multi-layer models. It describes supervised neural networks and their components like activation functions and weights. The dissertation then details training neural network weights using methods like backpropagation and techniques to prevent overfitting. Finally, it applies these concepts in a case study using neural networks to forecast changes in the S&P Mib Index.
"Testing the “end of privacy” hypothesis in computer-mediated communication An agent-based modelling approach", Paola Tubaro & Antonio A. Casilli, presentation at the Fondation CIGREF, Paris, Nov 14th, 2011
NEURAL MODEL-APPLYING NETWORK (NEUMAN): A NEW BASIS FOR COMPUTATIONAL COGNITIONaciijournal
NeuMAN represents a new model for computational cognition synthesizing important results across AI,
psychology, and neuroscience. NeuMAN is based on three important ideas: (1) neural mechanisms perform
all requirements for intelligence without symbolic reasoning on finite sets, thus avoiding exponential
matching algorithms; (2) the network reinforces hierarchical abstraction and composition for sensing and
acting; and (3) the network uses learned sequences within contextual frames to make predictions, minimize
reactions to expected events, and increase responsiveness to high-value information. These systems exhibit
both automatic and deliberate processes. NeuMAN accords with a wide variety of findings in neural and
cognitive science and will supersede symbolic reasoning as a foundation for AI and as a model of human
intelligence. It will likely become the principal mechanism for engineering intelligent systems.
This document discusses using agent-based modeling and neural networks for web analytics in e-learning. It proposes a hybrid agent-based model with embedded artificial neural networks to simulate the production and dissemination of knowledge among three agent types (authors, tutors, and students) in online courses. The model aims to assess knowledge trends and is implemented using the AnyLogic software package for a case study at a Ukrainian higher education institution. Agent-based modeling and neural networks are described as effective tools for developing learning analytics to understand knowledge flow between e-learning participants.
AN EXTENSION OF PROTÉGÉ FOR AN AUTOMATIC FUZZY-ONTOLOGY BUILDING U...ijcsit
The process of building an ontology is a very complex and time-consuming process, especially when dealing with a huge amount of data. Unfortunately, currently marketed tools are very limited and don’t meet all user needs. Indeed, these software tools build the core of the ontology from initial data, which generates a big amount of information. In this paper, we aim to resolve these problems by adding an extension to the well-known ontology editor Protégé in order to work towards a complete FCA-based framework which resolves the limitations of other tools in building fuzzy ontologies. We will give, in this paper, some details on our semi-automatic collaborative tool called FOD Tab Plug-in, which takes into consideration another degree of granularity in the generation process. In fact, it follows a bottom-up strategy based on conceptual clustering, fuzzy logic and Formal Concept Analysis (FCA), and it defines the ontology between classes resulting from a preliminary classification of the data rather than from the initial large amount of data.
pln_ecan_agi_16
Controlling Combinatorial Explosion in Inference via Synergy with Nonlinear-Dynamical Attention Allocation

Ben Goertzel [1,2], Misgana Bayetta Belachew [1,2,4], Matthew Ikle' [3] and Gino Yu [4]

[1] OpenCog Foundation, [2] Hanson Robotics, [3] Adams State University, [4] School of Design, Hong Kong Poly U
Abstract. One of the core principles of the OpenCog AGI design, “cognitive synergy”, is exemplified by the synergy between logical reasoning and attention allocation. This synergy centers on a feedback loop in which nonlinear-dynamical attention-spreading guides logical inference control, and inference directs attention to surprising new conclusions it has created. In this paper we report computational experiments in which this synergy is demonstrated in practice, in the context of a very simple logical inference problem.
More specifically: first-order probabilistic inference generates conclusions, and its inference steps are pruned via “Short Term Importance” (STI) attention values associated with the logical Atoms it manipulates. As inference generates conclusions, information theory is used to assess the surprisingness of these conclusions, and the STI attention values of the Atoms representing the conclusions are updated accordingly. The result of this feedback is that meaningful conclusions are drawn after many fewer inference steps than would be the case without the introduction of attention allocation dynamics and the feedback therewith.
This simple example demonstrates a cognitive dynamic that is hypothesized to be very broadly valuable for general intelligence.
1 Introduction
One approach to creating AGI systems is the “integrative” strategy: combining multiple components embodying different structures or algorithms, and relying on synergistic dynamics between components. One kind of integrative system involves various highly independent software components, each solving a specialized set of problems in a mostly standalone manner, with occasional communication between each other in order to exchange problems and solutions. At the other end of the scale are systems designed as tightly interconnected components that give rise to complex nonlinear dynamical phenomena. Here, we are specifically focused on the latter approach. We will discuss the particulars of one form of cognitive synergy – between probabilistic inference and nonlinear-dynamical attention allocation – within the context of one particular integrative AGI architecture, PrimeAGI (formerly named OpenCogPrime) [9] [10], implemented on the OpenCog platform [7].
The specific nature of the synergy explored and demonstrated here is as follows:
– Probabilistic logical reasoning, proceeding via forward chaining and using the PLN (Probabilistic Logic Networks) rule-base, chooses premises for its inferences based on weights called STI (ShortTermImportance) values
– When the inference process discovers something sufficiently surprising (via an information-theoretic measure), it boosts the STI value associated with this discovery
– STI values spread according to a particular set of equations modeled on economics (ECAN, Economic Attention Allocation), so that when an item or fragment of knowledge has a high STI, related entities will also get their STI boosted
Thus, broadly speaking, we have a feedback in which
– importance values prune the combinatorial explosion of inference chaining
– inferentially determined surprisingness guides importance assessments
– importance spreads among related entities
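The feedback loop sketched in the bullets above can be illustrated in a few lines of Python. This is a toy sketch for exposition only, not OpenCog's actual API; the class, the function names, and the boost/spread fractions are all invented here.

```python
import random

class Atom:
    """Toy stand-in for an OpenCog Atom carrying an STI value."""
    def __init__(self, name, sti=1.0):
        self.name = name
        self.sti = sti          # ShortTermImportance
        self.neighbors = []     # Atoms related via HebbianLinks

def tournament_select(atoms, k=3, rng=random):
    """Premise choice: pick the highest-STI atom among k random candidates."""
    return max(rng.sample(atoms, k), key=lambda a: a.sti)

def boost_and_spread(atom, boost, spread_frac=0.5):
    """Reward a surprising conclusion: keep part of the STI boost on the
    atom itself and spread the rest evenly to its Hebbian neighbors,
    so related knowledge items also rise in importance."""
    kept = boost * (1.0 - spread_frac) if atom.neighbors else boost
    atom.sti += kept
    if atom.neighbors:
        share = boost * spread_frac / len(atom.neighbors)
        for n in atom.neighbors:
            n.sti += share
```

Iterating these two steps yields the pruning effect described above: atoms whose neighborhoods keep producing surprising conclusions keep winning the premise-selection tournaments.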
According to this dynamic, entities related to other entities that have been useful
for an inference process will also tend to be brought to the attention of that
inference process. This will cause the inference process to focus, much more
often than would otherwise be the case, on sets of interrelated knowledge items
regarding which there are surprising (interesting) conclusions to be drawn. It is a
form of deliberative thinking in which conclusion-drawing and attention-focusing
interact synergetically.
This sort of dynamic is very general in nature and, according to the con-
ceptual theory underlying PrimeAGI, is critical to the operation of probabilistic
inference-based general intelligence. Here we illustrate the synergy via a simple
"toy" example, which highlights the nature of the feedback involved very clearly.
Our current work involves leveraging the same synergy in more realistic cases,
e.g. to help a system maintain focus in the course of inference-guided natural
language dialogue.
2 Background: PrimeAGI
Our work here is based upon specific details of the AGI architecture called
PrimeAGI (formerly known as OpenCogPrime), based on the open-source
OpenCog project at http://opencog.org. PrimeAGI is a large and complex
system whose detailed description occupies two volumes [9] [10].
The concept of cognitive synergy is at the core of the PrimeAGI design, with
highly interdependent subsystems responsible for inference regarding patterns
obtained from visual, auditory and abstract domains, uncertain reasoning, language
comprehension and generation, concept formation, and action planning.
The goal of the PrimeAGI project is to engineer systems that exhibit general
intelligence equivalent to a human adult, and ultimately beyond.
The dynamics of interaction between processes in PrimeAGI is designed in
such a way that knowledge can be converted between different types of memory;
and when a learning process that is largely concerned with a particular type
of memory encounters a situation where the rate of learning is very slow, it
can proceed to convert some of the relevant knowledge into a representation
for a different type of memory to overcome the issue, demonstrating cognitive
synergy. The simple case of synergy between ECAN and PLN explored here is
an instance of this broad concept; PLN being concerned mainly with declarative
memory and ECAN mainly with attentional memory.
2.1 Memory Types and Cognitive Processes in PrimeAGI
PrimeAGI’s memory types are the declarative, procedural, sensory, and episodic
memory types that are widely discussed in cognitive neuroscience [14], plus at-
tentional memory for allocating system resources generically, and intentional
memory for allocating system resources in a goal-directed way. Table 1 overviews
these memory types, giving key references and indicating the corresponding cog-
nitive processes, and which of the generic patternist cognitive dynamics each
cognitive process corresponds to (pattern creation, association, etc.).
The essence of the PrimeAGI design lies in the way the structures and pro-
cesses associated with each type of memory are designed to work together in a
closely coupled way, the operative hypothesis being that this will yield coopera-
tive intelligence going beyond what could be achieved by an architecture merely
containing the same structures and processes in separate “black boxes.”
The inter-cognitive-process interactions in OpenCog are designed so that
– conversion between different types of memory is possible, though sometimes
computationally costly (e.g. an item of declarative knowledge may with some
effort be interpreted procedurally or episodically, etc.)
– when a learning process concerned centrally with one type of memory en-
counters a situation where it learns very slowly, it can often resolve the
issue by converting some of the relevant knowledge into a different type of
memory: i.e. cognitive synergy
The simple case of ECAN/PLN synergy described here is an instance of this
broad concept.
2.2 Probabilistic Logic Networks
PLN serves as the probabilistic reasoning system within OpenCog’s more general
artificial general intelligence framework. PLN logical inferences take the form of
syllogistic rules, which give patterns for combining statements with matching
Memory Type  Specific Cognitive Processes                         General Cognitive Functions
-----------  ---------------------------------------------------  -----------------------------
Declarative  Probabilistic Logic Networks (PLN) [6];              pattern creation
             concept blending [5]
Procedural   MOSES (a novel probabilistic evolutionary            pattern creation
             program learning algorithm) [13]
Episodic     internal simulation engine [8]                       association, pattern creation
Attentional  Economic Attention Networks (ECAN) [11]              association, credit assignment
Intentional  probabilistic goal hierarchy refined by PLN and      credit assignment, pattern
             ECAN, structured according to MicroPsi [2]           creation
Sensory      In OpenCogBot, this will be supplied by the          association, attention
             DeSTIN component                                     allocation, pattern creation,
                                                                  credit assignment

Table 1. Memory Types and Cognitive Processes in OpenCog Prime. The third column
indicates the general cognitive function that each specific cognitive process carries out,
according to the patternist theory of cognition.
terms. Related to each rule is a truth-value formula which calculates the truth
value resulting from application of the rule. PLN uses forward-chaining and
backward-chaining processes to combine the various rules and create inferences.
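To make the pairing of syllogistic rule and truth-value formula concrete, here is a minimal Python sketch of one PLN-style rule, deduction (A→B, B→C ⊢ A→C), with a strength formula derived under simple independence assumptions. This is a simplified reading of the formulas in [6]; actual PLN truth values also carry confidence components, which this sketch ignores.

```python
def deduction_strength(sAB, sBC, sB, sC):
    """Strength of A->C given the strengths of A->B and B->C and the
    term probabilities of B and C, under an independence assumption
    (a simplified form of the PLN deduction formula, cf. [6])."""
    if sB >= 1.0:  # degenerate case: B is certain
        return sC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)
```

Forward chaining repeatedly matches rules of this kind against statements with matching terms, writing the computed truth values onto newly created links.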
2.3 Economic Attention Networks
The attention allocation system within OpenCog is handled by the Economic
Attention Network (ECAN). ECAN is a graph of untyped nodes, together with
links that may be typed either HebbianLink or InverseHebbianLink. Each Atom
in an ECAN is weighted with two numbers, called STI (short-term importance)
and LTI (long-term importance), while each Hebbian or InverseHebbian link
is weighted with a probability value. A system of equations, based upon an
economic metaphor of STI and LTI values as artificial currencies, governs im-
portance value updating. These equations serve to spread importance to and
from various atoms within the system, based upon the importance of their roles
in performing actions related to the system’s goals.
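A minimal sketch of one conservative, economics-style importance-spreading step follows. The data structures and signature are invented for illustration; the actual ECAN equations, which also involve LTI and rent/wage dynamics, are given in [11] and [10].

```python
def spread_sti(sti, links, max_spread_pct=0.5):
    """One spreading step: each atom sends at most max_spread_pct of its
    STI along its outgoing Hebbian links, proportionally to link
    strength. Total STI is conserved, mirroring ECAN's metaphor of STI
    as an artificial currency.
    sti:   {atom_name: sti_value}
    links: {src: [(tgt, strength), ...]}"""
    delta = {a: 0.0 for a in sti}
    for src, outs in links.items():
        total_w = sum(w for _, w in outs)
        if total_w <= 0:
            continue
        budget = sti[src] * max_spread_pct
        for tgt, w in outs:
            amount = budget * w / total_w
            delta[src] -= amount
            delta[tgt] += amount
    return {a: v + delta[a] for a, v in sti.items()}
```

The conservation property is what makes the economic metaphor apt: spreading redistributes importance rather than creating it.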
An important concept within ECAN is the attentional focus, consisting of those
atoms deemed most important for the system to achieve its goals at a particular
instant. Through the attentional focus, one key role of ECAN is to guide the
forward and backward chaining processes of PLN inference. Quite simply, when
PLN's chaining processes need to choose logical terms or relations to include
in their inferences, they can give priority to those occurring in the system's
attentional focus (having been placed there by ECAN). Conversely, when
terms or relations have proved useful to PLN, they can have their importance
boosted, which will affect ECAN’s dynamics. This is a specific example of the
cognitive synergy principle at the heart of the PrimeAGI design.
3 Evaluating PLN on a Standard MLN Test Problem
In order to more fully understand the nature of PLN/ECAN synergy, in 2014 we
chose to explore it (see [12]) in the context of two test problems standardly used in
the context of MLNs (Markov Logic Networks) [4]. These problems are relatively
easy for both PLN and MLN, and do not stress either system’s capabilities.
The first test case considered there – and the one we will consider here – is a
very small-scale logical inference called the smokes problem, discussed in its MLN
form at [1]. The PLN format of the smokes problem used for our experiments
is given at https://github.com/opencog/test-datasets/blob/master/pln/
tuffy/smokes/smokes.scm.
The conclusions obtained from PLN backward chaining on the smokes test
case are
cancer(Edward) <.62, 1>
cancer(Anna) <.50, 1>
cancer(Bob) <.45, 1>
cancer(Frank) <.45, 1>
which is reasonably similar to the output of MLN as reported in [1],
0.75 Cancer(Edward)
0.65 Cancer(Anna)
0.50 Cancer(Bob)
0.45 Cancer(Frank)
In [12] we explored the possibility of utilizing ECAN to assist PLN on these
test problems; however, our key conclusion from this work was that ECAN's
guidance is not of much use to PLN on the problems as formulated. However,
that work did lead us to conceptually interesting conclusions regarding the sorts
of circumstances in which ECAN is most likely to help PLN. Specifically, after
applying PLN and ECAN to that example, we hypothesized that if one modified
the example via adding a substantial amount of irrelevant evidence about other
aspects of the people involved, then one would have a case where ECAN could
help PLN, because it could help focus attention on the relevant relationships.
This year we have finally followed up on this concept, and have demonstrated
that, indeed, if one pollutes the smokes problem by adding a collection of "noise"
relationships with relatively insignificant truth values, then one obtains a case
in which ECAN is of considerable use for guiding PLN toward the meaningful
information and away from the meaningless, thus helping PLN to find the
meaningful conclusions (the "needles in the haystack") more rapidly. While this
problem is a simple "toy," the phenomenon it illustrates is, we believe, quite
general and quite important.
4 PLN + ECAN on the Noisy Smokes Problem
To create a "noisy" version of the smokes problem, we created a number of "ran-
dom" smokers and friends and added them to the OpenCog Atomspace along
with the Atoms corresponding to the people in the original "smokes" problem
and their relationships. We created M random smokers, and N random people
associated with each smoker, for M ∗ N additional random people in total.
For the experiments reported here, we set N = 5. The "smokes" and "friend" re-
lationships linking the random people to others were given truth value strengths
of 0.2 for the smoking relationship and 0.85 for the friendship relationship. Of
course, these numbers are somewhat arbitrary, but our goal here was to pro-
duce a simple illustrative example, not to seriously explore the anthropology of
secondhand smoking.
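The construction can be sketched as follows. The function and entity names are hypothetical, and the experiment itself builds real Atomspace Atoms rather than tuples; this sketch only mirrors the counts and truth-value strengths described above.

```python
def make_noisy_smokes(M, N=5):
    """Generate the noise relationships described above: M random
    smokers, each linked to N random associates, with truth-value
    strengths 0.2 for "smokes" and 0.85 for "friend"."""
    triples = []
    for i in range(M):
        smoker = f"RandomSmoker{i}"
        triples.append(("smokes", (smoker,), 0.2))
        for j in range(N):
            other = f"RandomPerson{i}_{j}"
            triples.append(("friend", (smoker, other), 0.85))
    return triples
```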
We ran the PLN forward chainer, in a version developed in 2015 that utilizes
the OpenCog Unified Rule Engine (URE)1. To guide the forward chaining process,
we used a heuristic in which the Atom selected as the source of inference
is chosen by tournament selection keyed to Atoms' STI values. To measure the
surprisingness of a conclusion derived by PLN, we used a heuristic formula based
on the Jensen-Shannon Divergence (JSD) between the truth value of an Atom
A and the truth value of A*, the most natural supercategory of A:

    JSD(A, A*) * 10^JSD(A, A*)

This formula was chosen as a simple rescaling of the JSD, intended to exaggerate
the differences between Atoms with low JSD and high JSD. A pending research
issue is to choose a heuristic rescaling of the JSD in a more theoretically
motivated way; however, this rather extreme scaling function seems to work
effectively in practice.
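Treating each truth-value strength as a Bernoulli distribution, the surprisingness heuristic can be sketched as follows. This is a simplified reading: actual PLN truth values also carry confidence components, which this sketch ignores.

```python
from math import log2

def _kl(p, q):
    """KL divergence (base 2) between Bernoulli(p) and Bernoulli(q)."""
    out = 0.0
    for pi, qi in ((p, q), (1.0 - p, 1.0 - q)):
        if pi > 0.0:
            out += pi * log2(pi / qi)
    return out

def jsd(p, q):
    """Jensen-Shannon Divergence between two Bernoulli strengths."""
    m = 0.5 * (p + q)
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

def surprisingness(s_atom, s_super):
    """The rescaling used above: JSD(A, A*) * 10^JSD(A, A*), which
    exaggerates the gap between low-JSD and high-JSD Atoms."""
    d = jsd(s_atom, s_super)
    return d * 10.0 ** d
```

Since the base-2 JSD is bounded by 1, the rescaled score lies in [0, 10], with most unsurprising conclusions scoring near zero.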
Determining the most natural supercategory is in general a challenging issue,
which may be done using a notion of "coherence" as described in [3]. However,
for the simple examples pursued here, there is no big issue. For instance, the
supercategory of
EvaluationLink
  PredicateNode "smokes"
  ConceptNode "Bob"
is simply the SatisfyingSet of the PredicateNode "smokes"; i.e. the degree to
which an average Atom that satisfies the argument-type constraints of the "smokes"
predicate actually smokes. So "Bob smokes" is surprising if Bob smokes
significantly more often than an average entity.
In the case of this simple toy knowledge base, the only entities involved are
people, so this means "Bob smokes" is surprising if Bob smokes significantly more
often than the average person the system knows about. In a non-toy Atomspace,
we would have other entities besides people represented, and then a coherence
criterion would need to be invoked to identify that the relevant supercategory is
"People in the SatisfyingSet of the PredicateNode 'smokes'" (including people
with a membership degree of zero in this SatisfyingSet) rather than "entities in
general in the SatisfyingSet of the PredicateNode 'smokes'".
1 https://github.com/opencog/atomspace/tree/master/opencog/rule-engine
Similarly, in the context of this problem, the surprisingness of "Bob and Jim
are friends" is calculated relative to the generic probability that two random
people known to the system are friends.
4.1 Tweaks to ECAN
We ended up making two significant changes to OpenCog's default ECAN imple-
mentation in the course of doing these experiments.
The first change pertained to the size of the system's AttentionalFocus (work-
ing memory). Previously the AttentionalFocus, comprising the Atoms with the
highest STI, was defined as the set of Atoms in the Atomspace with STI above
a certain fixed threshold, the AttentionalFocusBoundary. In these experiments,
we found that this sometimes led
to an overly large AttentionalFocus, which resulted in a slow forward chaining
process heavily polluted by noise. We decided to cap the size of the Attention-
alFocus at a certain maximum value K, currently K = 30. This is a somewhat
crude measure and better heuristics may be introduced in future. But given that
the size of the human working memory seems to be fairly strictly limited, this
assumption seems unlikely to be extremely harmful in an AGI context.
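The capped focus just described can be sketched in a few lines (illustrative signature; the real implementation operates on Atomspace Atoms):

```python
import heapq

def attentional_focus(sti, boundary, cap=30):
    """AttentionalFocus with the cap described above: atoms whose STI
    exceeds the AttentionalFocusBoundary, truncated to the cap
    highest-STI atoms.  sti: {atom_name: sti_value}."""
    above = [(v, a) for a, v in sti.items() if v > boundary]
    return {a for _, a in heapq.nlargest(cap, above)}
```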
The second change pertained to the balance between rent and wages. Previ-
ously these values were allowed to remain imbalanced until the central bank’s
reserve amount deviated quite extremely from the initial default value. However,
this appeared to lead to overly erratic behavior. So we modified the code so that
rent is updated each cycle, so as to retain balance with wages (i.e. so that, given
the particular size of the Atomspace and Attentional Focus at that point in time,
rent received will roughly equal wages paid).
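The balancing rule amounts to a single per-cycle update, sketched here with an invented signature; the real implementation works in terms of the ECAN rent and wage parameters.

```python
def balanced_rent(wage, num_wage_earners, num_rent_payers):
    """Per-atom STI rent chosen so that, for the current Atomspace and
    AttentionalFocus sizes, total rent collected roughly equals total
    wages paid out, keeping the central STI reserve stable."""
    if num_rent_payers == 0:
        return 0.0
    return wage * num_wage_earners / num_rent_payers
```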
4.2 Parameter Setting
The ECAN subsystem’s parameters, whose meanings are described in [10], were
set as follows:
ECAN_MAX_ATOM_STI_RENT = 3
ECAN_STARTING_ATOM_STI_RENT = 2
ECAN_STARTING_ATOM_STI_WAGE = 30
HEBBIAN_MAX_ALLOCATION_PERCENTAGE = 0.1
SPREAD_DECIDER_TYPE = Step
ECAN_MAX_SPREAD_PERCENTAGE = 0.5
STARTING_STI_FUNDS = 500000
The PLN forward chainer was set to carry out 5 inference steps in each
”cognitive cycle”, and one step of basic ECAN operations (importance spreading,
importance updating, and HebbianLink formation) was carried out in each cycle
as well.
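The interleaving of the two processes can be sketched as a simple driver loop (a hypothetical function; in the actual system these processes are scheduled by OpenCog rather than called directly):

```python
def run_cycles(num_cycles, pln_step, ecan_step, pln_steps_per_cycle=5):
    """One "cognitive cycle" = 5 PLN forward-chaining steps followed by
    one ECAN pass (importance spreading, importance updating, and
    HebbianLink formation), repeated num_cycles times."""
    for _ in range(num_cycles):
        for _ in range(pln_steps_per_cycle):
            pln_step()
        ecan_step()
```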
4.3 Experimental Results
Figure 1 shows the number of cycles required to correctly infer the truth value
of several relationships in the original smokes problem, depending on the noise
parameter M. The results are a bit erratic, but clearly demonstrate that the
feedback between ECAN and PLN is working. As the number of noise Atoms
in the Atomspace increases, the time required to draw the correct conclusions
does not increase commensurately. Instead, for noise amounts above a very low
level, it remains within roughly the same range as the amount of noise increases.
On the other hand, without ECAN, the PLN forward chainer fails to find the
correct conclusions at all with any appreciable amount of noise. In principle it
would find the answers eventually, but only after a huge amount of time. The
number of possible inferences is simply too large; without some kind of
moderately intelligent pruning, PLN spends a long time exploring numerous
random possibilities.
5 Conclusion
We began this experimental investigation with the hypothesis that cognitive
synergy between PLN and ECAN would be most useful in cases where there is
a considerable amount of detailed information in the Atomspace regarding the
problem at hand, and part of the problem involves heuristically sifting through
this information to find the useful bits. The noisy smokes example was chosen as
an initial focus of investigation, and we found that, in this example, ECAN does
indeed help PLN to draw reasonable conclusions in a reasonable amount of time,
in spite of the presence of a fairly large amount of distracting but uninteresting
information.
The theory underlying PrimeAGI contends that this sort of synergy is critical
to general intelligence, and will occur in large and complex problems as well as in
toy problems like the one considered here. Validating this hypothesis will require
additional effort beyond the work reported here, and might conceivably reveal
the need for further small tweaks to the ECAN framework.
References
1. Project Tuffy, http://hazy.cs.wisc.edu/hazy/tuffy/doc/
2. Bach, J.: Principles of Synthetic Intelligence. Oxford University Press (2009)
3. Goertzel, B., et al.: Measuring surprisingness (2014), http://wiki.opencog.org/
wikihome/index.php/Measuring_Surprisingness/
4. Niu, F., Ré, C., Doan, A., Shavlik, J.: Tuffy: Scaling up statistical inference
in Markov logic networks using an RDBMS. In: Jagadish, H.V. (ed.) Proceedings
of the 37th International Conference on Very Large Data Bases (VLDB 2011).
vol. 4, pp. 373–384. Seattle, Washington (2011)
5. Fauconnier, G., Turner, M.: The Way We Think: Conceptual Blending and the
Mind’s Hidden Complexities. Basic (2002)
6. Goertzel, B., Ikle, M., Goertzel, I., Heljakka, A.: Probabilistic Logic Networks.
Springer (2008)
7. Goertzel, B.: Cognitive synergy: A universal principle of feasible general intelli-
gence? In: Proceedings of ICCI 2009, Hong Kong (2009)
8. Goertzel, B., Pennachin, C., et al.: An integrative methodology for teaching
embodied non-linguistic agents, applied to virtual animals in Second Life. In:
Proc. of the First Conf. on AGI. IOS Press (2008)
9. Goertzel, B., Pennachin, C., Geisweiller, N.: Engineering General Intelligence,
Part 1: A Path to Advanced AGI via Embodied Learning and Cognitive Synergy.
Springer: Atlantis Thinking Machines (2013)
10. Goertzel, B., Pennachin, C., Geisweiller, N.: Engineering General Intelligence, Part
2: The CogPrime Architecture for Integrative, Embodied AGI. Springer: Atlantis
Thinking Machines (2013)
11. Goertzel, B., Pitt, J., Ikle, M., Pennachin, C., Liu, R.: Glocal memory: a design
principle for artificial brains and minds. Neurocomputing (Apr 2010)
12. Harrigan, C., Goertzel, B., Ikle’, M., Belayneh, A., Yu, G.: Guiding probabilistic
logical inference with nonlinear dynamical attention allocation. In: Proceedings of
AGI-14 (2014)
13. Looks, M.: Competent Program Evolution. PhD Thesis, Computer Science
Department, Washington University (2006)
14. Tulving, E., Craik, R.: The Oxford Handbook of Memory. Oxford U. Press (2005)
Fig. 1. Number of cognitive cycles required to derive the desired obvious conclusions
from the noise-polluted Atomspace. The x-axis measures the number of noise Atoms
added. The key point is that the number of cycles does not increase extremely rapidly
with the addition of more and more distractions. Red = conclusions regarding Anna;
green=Bob, blue=Edward, yellow=Frank.