John Anderson was born in 1947 in Vancouver, British Columbia. He earned his PhD from Stanford University in 1972 and has been a professor at several universities, including Carnegie Mellon University since 1978. ACT* is a cognitive theory developed by Anderson that describes a spreading activation model of semantic memory combined with a production system for executing higher-level operations. It distinguishes three types of memory - declarative, procedural, and working memory - and three types of learning.
This document summarizes a two-stage method for 3D object recognition using an associative memory. In the first stage, key features are used to access hypotheses for an object's identity and configuration from an associative memory. These hypotheses are then fed into a second-stage associative memory that accumulates evidence to estimate the likelihood of each hypothesis based on feature statistics in a database. The method is robust to occlusion and clutter since it relies on local features rather than global properties, and allows objects to be added automatically through visual exploration from different views.
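For intuition, here is a minimal Python sketch of the two-stage idea, using a plain dictionary as the associative memory and invented feature names; the paper's actual memory structure and feature statistics are not reproduced.

```python
# Hypothetical sketch: stage 1 uses key features to index candidate
# (object, pose) hypotheses; stage 2 accumulates evidence for each one.
from collections import defaultdict

# Toy associative memory: feature -> hypotheses it supports (invented).
associative_memory = {
    "corner_A": [("mug", "upright"), ("box", "tilted")],
    "curve_B":  [("mug", "upright")],
    "edge_C":   [("box", "tilted"), ("box", "upright")],
}

def recognize(observed_features):
    votes = defaultdict(int)
    for f in observed_features:
        # Stage 1: retrieve hypotheses indexed by this key feature.
        for hypothesis in associative_memory.get(f, []):
            # Stage 2 (simplified): one unit of evidence per supporting
            # feature; a real system would weight by feature statistics
            # from the database (distinctiveness, noise model).
            votes[hypothesis] += 1
    # Rank hypotheses by accumulated evidence.
    return sorted(votes.items(), key=lambda kv: -kv[1])

print(recognize(["corner_A", "curve_B"]))
# [(('mug', 'upright'), 2), (('box', 'tilted'), 1)]
```

Because evidence comes from local features, an occluded feature only removes one vote rather than breaking a global match, which is the source of the robustness claimed above.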
A General Principle of Learning and its Application for Reconciling Einstein’... (Jeffrey Huang)
This document proposes a general principle of learning based on discovering intrinsic constraint models. It defines intelligence as the ability to understand the world by discovering intrinsic variables and constraints that have minimum entropy. The key goals of learning are to map observations to intrinsic variables and detect constraints to minimize a model entropy objective function. Discovering intrinsic variables is critical for maximizing prediction accuracy, learning efficiency, and generalization power. The principle provides a theoretical foundation for explaining and developing artificial general intelligence.
7. knowledge acquisition, representation and organization 8. semantic network... (AhL'Dn Daliva)
This document discusses knowledge acquisition, representation, and organization. It describes the two types of knowledge - declarative and procedural - and five guidelines for knowledge acquisition. It also discusses theories of knowledge representation including rule-based production models, distributed networks, and propositional models. A key point is that semantic networks can be used to represent knowledge as a system of interconnected concepts. The document also discusses long-term memory and its two types - episodic and semantic memory. It describes cognitive semantic networks and models by Collins and Quillian as well as schema theory. Concept maps are discussed as a way to visualize relationships between concepts.
This document discusses how combining probabilistic logical inference (PLN) with a nonlinear dynamical attention allocation system (ECAN) can help address the problem of combinatorial explosion in inference. It presents a simple example using a noisy version of the "smokes" problem where ECAN guides PLN's inference by focusing attention on surprising conclusions, allowing meaningful conclusions to be drawn with fewer inference steps. This demonstrates a cognitive synergy between logical reasoning and attention allocation that is hypothesized to be broadly valuable for artificial general intelligence.
Means-ends analysis (MEA) is a problem-solving technique used in AI to limit search. It works by choosing an action at each step to reduce the difference between the current and goal states, applying the action recursively until the goal is reached. Early systems like GPS and Prodigy used MEA by providing knowledge of how differences map to actions. MEA improves search by focusing on actual differences rather than brute force methods.
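As a rough illustration, a recursive MEA planner can be sketched in a few lines, with invented operators and a simple "effects reduce the difference" test standing in for GPS-style difference tables:

```python
# Minimal means-ends analysis sketch: states and goals are sets of facts,
# operators are (preconditions, added facts). Assumes no delete effects.
def means_ends(state, goal, operators, depth=10):
    if depth == 0:
        return None              # give up on overly deep recursions
    if goal <= state:
        return []                # no difference left: goal reached
    for op_name, (preconds, adds) in operators.items():
        # Choose an operator whose effects reduce the current difference.
        if adds & (goal - state):
            # Subgoal: first achieve the operator's preconditions...
            pre_plan = means_ends(state, preconds, operators, depth - 1)
            if pre_plan is None:
                continue
            # ...then apply it and recur on whatever difference remains.
            new_state = state | preconds | adds
            rest = means_ends(new_state, goal, operators, depth - 1)
            if rest is not None:
                return pre_plan + [op_name] + rest
    return None

operators = {  # hypothetical toy domain
    "pick_up_key": ({"at_door"}, {"has_key"}),
    "open_door":   ({"has_key", "at_door"}, {"door_open"}),
}
print(means_ends({"at_door"}, {"door_open"}, operators))
# ['pick_up_key', 'open_door']
```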
This document provides an abstract and agenda for a workshop on machine learning meeting human learning. The workshop aims to bring together researchers studying machine learning, cognitive science, neuroscience, and educational science. The goal is to investigate how advanced machine learning theories and algorithms can serve as computational models of human learning behaviors. The workshop also seeks to explore insights from the cognitive study of human learning that could inspire new machine learning theories and algorithms. The agenda includes talks on topics like training deep neural networks, probabilistic models of cognition, reinforcement learning, and human semi-supervised learning.
Army Study: Ontology-based Adaptive Systems of Cyber Defense (RDECOM)
The U.S. Army Research Laboratory is part of the U.S. Army Research, Development and Engineering Command, which has the mission to ensure decisive overmatch for unified land operations to empower the Army, the joint warfighter and our nation. RDECOM is a major subordinate command of the U.S. Army Materiel Command.
1. The document discusses several approaches to cognitive science including connectionism, neural networks, supervised and unsupervised learning, Hebbian learning, the delta rule, backpropagation, and responses to Descartes from Gelernter, Penrose, and Pinker.
2. Connectionism models mental phenomena using interconnected networks of simple units like neural networks. Learning involves adjusting connection weights between neurons.
3. Supervised learning uses input-output pairs to adjust weights to minimize error, while unsupervised learning only uses inputs to find patterns in the data.
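A concrete illustration of the delta rule from point 1: the sketch below (toy data, single linear unit) adjusts weights in proportion to the error times the input, which is exactly the supervised scheme described in point 3.

```python
# Delta-rule sketch: train one linear unit on made-up input-output pairs.
inputs  = [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0), (0.0, 0.0)]
targets = [0.0, 0.0, 1.0, 0.0]            # toy AND-like task
w, b, lr = [0.0, 0.0], 0.0, 0.1           # weights, bias, learning rate

for epoch in range(100):
    for (x1, x2), t in zip(inputs, targets):
        y = w[0] * x1 + w[1] * x2 + b     # unit's output for this input
        err = t - y                       # supervised error signal
        w[0] += lr * err * x1             # delta rule: dw = lr * err * x
        w[1] += lr * err * x2
        b    += lr * err

print(w, b)  # learned weights approximate the input-output mapping
```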
This document outlines the syllabus for an MTCSCS302 course on Soft Computing taught by Dr. Sandeep Kumar Poonia. The course covers topics including neural networks, fuzzy logic, probabilistic reasoning, and genetic algorithms. It is divided into five units: (1) neural networks, (2) fuzzy logic, (3) fuzzy arithmetic and logic, (4) neuro-fuzzy systems and applications of fuzzy logic, and (5) genetic algorithms and their applications. The goal of the course is to provide students with knowledge of soft computing fundamentals and approaches for solving complex real-world problems.
This document provides an introduction to artificial intelligence and cognitive science. It defines intelligence and AI, discusses different approaches to AI like thinking like humans, thinking rationally, acting like humans and acting rationally. It also summarizes the history of AI from early neural networks to modern applications. Key concepts covered include the Turing test, knowledge representation, rational agents, intelligent environments and knowledge-based systems.
The document discusses the differences between machine learning (ML), statistical learning, data mining (DM), and automated learning (AL). It argues that while ML and statistical learning developed similar techniques starting in the 1960s, DM emerged in the 1990s from a merging of database research and automated learning. However, industry was much more enthusiastic about adopting DM techniques compared to AL techniques, even though many DM systems are just friendly interfaces of AL systems. The document aims to explain the key differences between DM and AL that led to DM's greater commercial success.
Comparison of relational and attribute-IEEE-1999-published ... (butest)
1. The document compares relational data mining methods to attribute-based methods for use in intelligent systems and data mining. Relational methods use first-order logic to represent background knowledge and relationships between objects, while attribute-based methods like neural networks are limited to attribute-value representations.
2. Relational methods have advantages over attribute-based methods for applications that require expressing complex logical relationships and background knowledge. They can also better handle sparse data. However, existing inductive logic programming systems for relational data mining are relatively inefficient for numerical data.
3. The paper proposes a hybrid relational data mining technique called MMDR that combines inductive logic programming with probabilistic inference. This allows it to efficiently handle numerical data.
The document discusses various topics relating to knowledge representation in artificial intelligence, including:
1) Different types of knowledge that need representation including declarative, procedural, commonsense, and scientific knowledge.
2) Ontologies define terminology and objects/relationships in a systematic way to enable knowledge sharing between agents.
3) Semantic networks represent knowledge graphically with nodes for objects/events and arcs for relationships, enabling reasoning through inheritance and matching (see the sketch after this list).
4) Conceptual graphs also represent knowledge graphically as a bipartite graph with concepts and relations, and can represent logical expressions.
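A minimal sketch of the semantic-network inheritance from item 3, with a dictionary of hypothetical nodes standing in for the graph:

```python
# Nodes and arcs of a tiny semantic network; "is-a" arcs carry inheritance.
network = {
    "canary": {"is-a": "bird", "color": "yellow"},
    "bird":   {"is-a": "animal", "can": "fly"},
    "animal": {"can": "breathe"},
}

def lookup(node, prop):
    """Follow is-a arcs upward until the property is found (inheritance)."""
    while node is not None:
        attrs = network.get(node, {})
        if prop in attrs:
            return attrs[prop]
        node = attrs.get("is-a")   # climb to the parent concept
    return None

print(lookup("canary", "color"))   # 'yellow' (stored locally)
print(lookup("canary", "can"))     # 'fly' (inherited from bird)
```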
This document presents a new model of categorization called Categorization by Elimination (CBE). CBE uses as few cues or features as necessary to make an accurate category assignment, unlike most existing models which use all available cues. CBE orders cues by validity and uses them sequentially, eliminating potential categories after each cue is considered until only one category remains. The authors show that CBE performs as well as humans and other algorithms on categorization tasks while using fewer cues, making it a parsimonious psychological model of fast and frugal categorization.
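A hedged sketch of the CBE loop with invented cues and categories (the authors' cue-validity estimation is not reproduced; cues are simply assumed to be pre-ordered):

```python
# Categorization by Elimination: apply cues in validity order, intersecting
# away inconsistent categories, and stop as soon as one category remains.
cues = [  # (cue name, value -> categories consistent with that value)
    ("size",   {"small": {"sparrow", "bat"}, "large": {"eagle"}}),
    ("active", {"day": {"sparrow", "eagle"}, "night": {"bat"}}),
]

def categorize_by_elimination(observation, categories):
    remaining = set(categories)
    for cue_name, table in cues:               # most valid cue first
        value = observation.get(cue_name)
        if value is None:
            continue                           # cue not observed: skip it
        remaining &= table.get(value, remaining)
        if len(remaining) == 1:                # frugal stop: done early,
            return remaining.pop()             # later cues never consulted
    return remaining                           # still ambiguous

print(categorize_by_elimination({"size": "small", "active": "night"},
                                {"sparrow", "bat", "eagle"}))   # bat
```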
Soft computing is an area of study that deals with imprecise or uncertain data using techniques like neural networks, fuzzy logic, and evolutionary computation. Unlike conventional computing which seeks exactness, soft computing is tolerant of imprecision and approximation to achieve tractability and robust solutions. The key components of soft computing aim to emulate aspects of human cognition by using neural networks for learning, fuzzy logic for modeling uncertainty, and evolutionary algorithms for optimization. Soft computing has many successful applications and its influence is growing in science, engineering, and other fields.
This document summarizes a study that compares fuzzy logic and neuro-fuzzy models for predicting direct current in motors. Fuzzy logic and neuro-fuzzy systems were used to model the relationship between motor torque, power, speed (inputs) and current (output). Both techniques were tested on a dataset of 507 samples. The neuro-fuzzy inference system (ANFIS) performed slightly better than the fuzzy logic system at predicting motor current, demonstrating the benefits of combining fuzzy logic with neural networks.
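For orientation, the zero-order Sugeno inference step that ANFIS-style systems typically tune can be sketched as below; the membership functions and rule constants are invented, not the study's, and only one input (torque) is used instead of three:

```python
# Zero-order Sugeno sketch: rule strengths come from fuzzy memberships,
# and the output is a weighted average of per-rule constants.
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_current(torque):
    w_low  = tri(torque, 0.0, 0.0, 5.0)   # firing strength of "torque low"
    w_high = tri(torque, 0.0, 5.0, 5.0)   # firing strength of "torque high"
    z_low, z_high = 2.0, 9.0              # rule consequents (assumed)
    total = w_low + w_high
    # Sugeno defuzzification: weighted average of rule outputs.
    return (w_low * z_low + w_high * z_high) / total if total else 0.0

print(predict_current(2.0))   # 4.8 on this toy rule base
```

Roughly speaking, ANFIS then tunes the membership parameters and the consequents from data, which is the neural part of the neuro-fuzzy combination.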
Soft computing is a set of computational techniques that aim to mimic human-like reasoning and decision making. The main techniques are fuzzy logic, neural networks, evolutionary computing, machine learning, and probabilistic reasoning. Each technique has strengths and weaknesses, but they complement each other. When used together, soft computing techniques can solve complex problems that are difficult for traditional mathematical methods. The paper reviews these soft computing techniques and explores how they could be applied to problems in various domains.
ON SOFT COMPUTING TECHNIQUES IN VARIOUS AREAS (cscpconf)
Soft Computing refers to the science of reasoning, thinking and deduction that recognizes and uses the real-world phenomena of grouping, memberships, and classification of various quantities under study. As such, it is an extension of natural heuristics and is capable of dealing with complex systems because it does not require strict mathematical definitions and distinctions for the system components. Unlike hard computing, it is tolerant of imprecision, uncertainty and partial truth. In effect, the role model for soft computing is the human mind. The guiding principle of soft computing is: exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness and low solution cost. The main techniques in soft computing are evolutionary computing, artificial neural networks, fuzzy logic and Bayesian statistics. Each technique can be used separately, but a powerful advantage of soft computing is the complementary nature of the techniques. Used together, they can produce solutions to problems that are too complex or inherently noisy to tackle with conventional mathematical methods. The applications of soft computing have proved two main advantages. First, it made it possible to solve nonlinear problems for which mathematical models are not available. Second, it introduced human knowledge such as cognition, recognition, understanding, and learning into the fields of computing. This resulted in the possibility of constructing intelligent systems such as autonomous self-tuning systems and automated design systems. This paper highlights various areas of soft computing techniques.
An informative and descriptive title for your literature survey (John Wanjiru)
The document summarizes research on developing artificial intelligence that can master the game of Go. It describes how researchers at DeepMind used a combination of deep neural networks and Monte Carlo tree search to create the AlphaGo agent. The AlphaGo agent uses a policy network trained through supervised and reinforcement learning to select moves, and a value network trained through reinforcement learning to evaluate board positions. Researchers found that AlphaGo was able to defeat human champions by a wide margin, demonstrating that its approach had achieved a level of play beyond human expertise.
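How the two networks cooperate during search can be pictured with the PUCT-style selection rule reported for AlphaGo: the policy prior steers exploration while accumulated value estimates steer exploitation. The numbers below are illustrative only.

```python
# PUCT-style move selection: argmax over Q(s,a) + U(s,a), where U is
# proportional to the policy prior P and decays with the visit count N.
import math

def select_move(moves, c_puct=1.0):
    total_visits = sum(m["N"] for m in moves)
    def puct(m):
        q = m["W"] / m["N"] if m["N"] else 0.0      # mean value so far
        u = c_puct * m["P"] * math.sqrt(total_visits) / (1 + m["N"])
        return q + u                                 # exploit + explore
    return max(moves, key=puct)

moves = [  # hypothetical search statistics for two candidate moves
    {"name": "D4",  "P": 0.6, "N": 10, "W": 5.5},
    {"name": "Q16", "P": 0.3, "N": 2,  "W": 1.4},
]
print(select_move(moves)["name"])   # 'Q16': less visited, still promising
```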
The document discusses the origins and development of the World Wide Web and Semantic Web. It begins with Tim Berners-Lee's original proposal in 1989 to create a global hypertext system using universal document identifiers. It then describes the formation of the World Wide Web Consortium and Berners-Lee's vision of a web of machine-understandable data. The document concludes by examining debates around natural language processing and semantics on the Semantic Web.
A SYSTEM OF SERIAL COMPUTATION FOR CLASSIFIED RULES PREDICTION IN NONREGULAR ... (ijaia)
Objects or structures that are regular take uniform dimensions. Based on the concepts of regular models, our previous research developed a regular ontology that models learning structures in a multiagent system for uniform pre-assessments in a learning environment. This regular ontology led to the modelling of a classified-rules learning algorithm that predicts the actual number of rules needed for inductive learning processes and decision making in a multiagent system. But not all processes or models are regular. This paper therefore presents a system of polynomial equations that can estimate and predict the required number of rules of a non-regular ontology model, given some defined parameters.
6. KR paper journal Nov 11, 2017 (edit a) (IAESIJEECS)
Knowledge Representation (KR) is a fascinating field spanning several areas of cognitive science and computer science. It is very hard to identify the combination of techniques and inference mechanisms required to achieve accuracy in a given problem domain. This research attempted to examine those techniques and to apply them to implement a Cognitive Hybrid Sentence Modeling and Analyzer. The purpose of developing this system is to help people who struggle with using the English language in daily life.
The Model of Achievement Competence Motivation (MACM): Part A Introduction o... (Kevin McGrew)
The Model of Achievement Competence Motivation (MACM) is a series of slide modules. This is the first (Part A) in the series. The modules will serve as supplemental materials to "The Model of Achievement Competence Motivation (MACM)--Standing on the shoulders of giants" (McGrew, in press, 2021 - in a forthcoming special issue on motivation in the Canadian Journal of School Psychology)
The document describes a tri-partite model of computational knowledge. It proposes modeling human cognition using three modules that process information at different timescales and cognitive costs based on evolutionary features. Module I deals with unconscious knowledge like perception and attention. Module II involves conscious reasoning processes. Module III focuses on learning and development over various timescales. The model aims to quantitatively represent cognitive processes below rational reasoning to enable more human-like artificial intelligence.
Adaptive Neural Fuzzy Inference System for Employability Assessment (Editor IJCATR)
Employability is a person's potential for gaining and maintaining employment. It is measured through education, personal development, and understanding. Employability is not the same as getting a graduate job; rather, it implies the capacity of the graduate to function in a job and to move between jobs, thus remaining employable throughout their life. This paper introduces a new adaptive neural fuzzy inference system for the assessment of employability with the help of neuro-fuzzy rules. The purpose and scope of this research is to examine the level of employability. The research uses both fuzzy inference systems and artificial neural networks, a combination known as the neuro-fuzzy technique, to solve the problem of employability assessment. The paper uses three employability skills as inputs and finds a crisp output value that indicates the level of the employee. It uses twenty-seven neuro-fuzzy rules with Sugeno-type inference in MATLAB and finds a single output value. The proposed system is named the Adaptive Neural Fuzzy Inference System for Employability Assessment (ANFISEA).
Soft computing is an approach to computing that aims to model human-like decision making. It deals with imprecise or uncertain data using techniques like fuzzy logic, neural networks, and genetic algorithms. The goal is to develop systems that are tolerant of imprecision, uncertainty, and approximation to achieve practical and low-cost solutions to real-world problems. Soft computing was initiated in 1981 and includes fields like fuzzy logic, neural networks, and evolutionary computation. It provides approximate solutions using techniques like neural network reasoning, genetic programming, and functional approximation.
Beyond cognitive abilities: An integrative model of learning-related persona... (Kevin McGrew)
For centuries educational psychologists have highlighted the importance of "non-cognitive" variables in school learning. The presentation is a "big picture" overview of how cognitive abilities and non-cognitive factors can be integrated into an over-arching conceptual framework. The presentation also illustrates how the big picture framework can be used to conceptualize a number of contemporary "buzz word" initiatives related to building 21st century educationally important skills (social-emotional learning, critical thinking, creativity, complex problem solving, etc.)
Symbolic-Connectionist Representational Model for Optimizing Decision Making ... (IJECEIAES)
Modeling higher-order cognitive processes like human decision making comes in three representational approaches: symbolic, connectionist, and symbolic-connectionist. Many connectionist neural network models have evolved over the decades for optimizing decision-making behaviors, and their agents are also in place. There have been attempts to implement symbolic structures within connectionist architectures with distributed representations. Our work aimed to propose an enhanced connectionist approach to optimizing decisions within the framework of a symbolic cognitive model. The action selection module of this framework is at the forefront of evolving intelligent agents through a variety of soft computing models. As a continuous effort, a Connectionist Cognitive Model (CCN) was evolved by bringing a traditional symbolic cognitive process model proposed by LIDA as an inspiration to a feed-forward neural network model for optimizing decision-making behaviors in intelligent agents. Significant progress was observed when comparing its performance with other variants.
In the classroom, learners actively process new information by storing it in memory and retrieving relevant information from lessons. George Armitage Miller first proposed the information processing theory and discovered that short-term memory can hold around seven items. Richard Atkinson and Richard Shiffrin proposed the multi-store model of sensory memory, short-term memory, and long-term memory. Alan Baddeley later expanded this with the working memory model, including the central executive, phonological loop, and visuospatial sketchpad. Information processing theory explains how humans encode, store, and retrieve information.
Artificial Intelligence is advancing throughout the world. According to a study by Creative Strategies, 95% of mobile users use AI-enabled voice assistants. It is hard to find a society that doesn’t use AI techniques. AI brings numerous benefits in a number of ways, including decision-making capabilities, diagnosis generation, identifying connections between causes and consequences, forecasting events, and controlling devices like smart sensors and mechanical arms.
https://takeoffprojects.com/ai-based-projects
Olympus is a multi-agent system that uses agents, ontologies, and web mining to create knowledge chains and recommend personal knowledge to learners. It monitors a learner's web navigation, classifies webpage contents using an ontology, and creates recommended knowledge chains for the learner based on the classified webpages. The system aims to motivate learners to create new knowledge chains by semi-automatically generating potential chains for them to accept, modify, or discard.
This document provides an overview of machine learning. It defines learning and discusses different types of learning including rote, supervised, and unsupervised learning. It explains the need for machine learning to allow systems to learn on their own from data. Machine learning is described as a branch of AI that allows systems to learn from examples without being explicitly programmed. Various machine learning tasks and applications are mentioned, like optical character recognition. Different machine learning techniques are then summarized, including learning through examples, explanation-based learning, and learning by analogy.
Cognitive process dimension in RBT explanatory note pages (Arputharaj Bridget)
The document discusses how the revised Bloom's Taxonomy (RBT) promotes meaningful learning beyond just knowledge acquisition. RBT includes six cognitive process categories that move from retention to transfer of knowledge: Remember, Understand, Apply, Analyze, Evaluate, and Create. These categories represent a fuller range of cognitive processes compared to just focusing on memorization. The goal of education should be both retention of material as well as transfer of knowledge to new situations. RBT helps teachers foster learning objectives and assessments that promote both retention and transfer.
This document describes a new genetic algorithm (GA)-based system for predicting the future performance of individual stocks. The system uses GAs for inductive machine learning rather than optimization. It is compared to a neural network system using data from over 1,600 stocks. The study finds that the GA system can predict stock returns 12 weeks in the future and that combining GA and neural network forecasts provides synergistic benefits.
This chapter discusses how the six levels of the New Taxonomy interact with the three knowledge domains of information, mental processes, and psychomotor processes. In contrast to Bloom's Taxonomy, the New Taxonomy explicitly defines how each of its six levels applies to each of the three knowledge domains. Level 1, knowledge retrieval, involves recalling or executing knowledge. For information, this means recalling details or organizing ideas. For mental and psychomotor procedures, it means recalling skills or processes and being able to execute them. Level 2, comprehension, requires students to identify and represent the most important aspects of knowledge through synthesis and representation.
This document provides an overview of memory, including:
- The three main processes involved in memory are encoding, storage, and retrieval.
- There are different types and classifications of memory, including short-term and long-term memory, declarative and non-declarative memory, and episodic and semantic memory.
- Memories are represented and organized through mental structures like schemas, semantic networks, and conceptual hierarchies that group related information.
- Theories of memory like levels of processing aim to explain how memories are formed and consolidated in the brain over time.
Question 1: Learning About Cookies as Spyware. Research what k.docx (audeleypearl)
Question 1:
Learning About Cookies as Spyware.
Research what kind of information cookies store. You might find the following websites helpful:
· www.allaboutcookies.org/
· www.howstuffworks.com/cookie1.htm
Using WORD, write an ORIGINAL brief essay of 300 words or more describing cookies and the way they can invade privacy.
Safe Assign is software that verifies the originality of your work against on-line sources and other students.
Note your Safe Assign score. Continue submitting until your Safe Assign score is less than 25. For your first written assignment, you have unlimited times to retry your assignment.
Attach your WORD doc and then hit SUBMIT.
Question 2
Using the Web or other resources, find out what your state's laws are regarding cyber stalking.
Write a brief essay describing those laws and what they mean.
Question 3:
Learn About Defending Against DDoS
Using WORD, write an ORIGINAL brief essay of 300 words or more:
· Find a DoS attack that has occurred in the last six months
· You might find some resources at www.f-secure.com.
· Note how that attack was conducted.
· Write a brief explanation of how you might have defended against that specific attack.
Question 4:
Use a search engine to find the names of five different cyber viruses.
Using WORD, write a short paragraph on each.
Question 5:
Use the Web to search for examples of hacks that made the news.
Write a brief description of the attack indicating what type of hack was involved.
Question 6:
Consider this hypothetical situation:
David Doe is a network administrator for the ABC Company. David is passed over for promotion three times. He is quite vocal in his dissatisfaction with this situation. In fact, he begins to express negative opinions about the organization in general. Eventually, David quits and begins his own consulting business. Six months after David’s departure, it is discovered that a good deal of the ABC Company’s research has suddenly been duplicated by a competitor. Executives at ABC suspect that David Doe has done some consulting work for this competitor and may have passed on sensitive data. However, in the interim since David left, his computer has been formatted and reassigned to another person. ABC has no evidence that David Doe did anything wrong.
What steps might have been taken to detect David’s alleged industrial espionage?
What steps might have been taken to prevent his perpetrating such an offense?
Question 7:
1). Using the Web or other resources, write a brief paper about RSA, its history, its methodology, and where it is used.
2). Send a brief message (ten words minimum) using the Caesar Cypher.
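For reference, a minimal Python sketch of the Caesar cypher from part 2 (the shift and message are arbitrary):

```python
# Caesar cypher: shift each letter a fixed number of places, wrapping
# around the alphabet; non-letters pass through unchanged.
def caesar(text, shift):
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

msg = "meet me at the library after class on friday morning ok"  # 11 words
print(caesar(msg, 3))              # encrypt with a shift of 3
print(caesar(caesar(msg, 3), -3))  # shifting back recovers the message
```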
Question 8:
Using the Web or other resources, do a bit of research on the methodologies that Microsoft Windows firewall uses. Consider the strengths and weaknesses of that approach.
Question 9:
Using the guidelines provided in this week's chapter (and other resources as needed), create a step-by-step IT security policy for handling user accounts/rights ...
Theories of induction in psychology and artificial intelligence assume that the process leads from observation and knowledge to the formulation of linguistic conjectures. This paper proposes instead that the process yields mental models of phenomena. It uses this hypothesis to distinguish between deduction, induction, and creative forms of thought. It shows how models could underlie inductions about specific matters. In the domain of linguistic conjectures, there are many possible inductive generalizations of a conjecture. In the domain of models, however, generalization calls for only a single operation: the addition of information to a model. If the information to be added is inconsistent with the model, then it eliminates the model as false: this operation suffices for all generalizations in a Boolean domain. Otherwise, the information that is added may have effects equivalent (a) to the replacement of an existential quantifier by a universal quantifier, or (b) to the promotion of an existential quantifier from inside to outside the scope of a universal quantifier. The latter operation is novel, and does not seem to have been used in any linguistic theory of induction. Finally, the paper describes a set of constraints on human induction, and outlines the evidence in favor of a model theory of induction.
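The two quantifier operations can be made concrete with a worked example (hypothetical predicates; this is an illustration, not the paper's notation):

```latex
% (a) Replacing an existential with a universal quantifier:
%     "some swans are white"  generalizes to  "all swans are white".
\exists x\,\bigl(\mathit{Swan}(x) \land \mathit{White}(x)\bigr)
  \;\Longrightarrow\;
\forall x\,\bigl(\mathit{Swan}(x) \rightarrow \mathit{White}(x)\bigr)

% (b) Promoting an existential from inside to outside the scope of a
%     universal: "everyone is bitten by some mosquito or other"
%     generalizes to "one particular mosquito bites everyone".
\forall x\,\exists y\;\mathit{Bites}(y, x)
  \;\Longrightarrow\;
\exists y\,\forall x\;\mathit{Bites}(y, x)
```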
This document discusses cognitive learning theory and its implications for teaching. It presents a model of cognitive processes that includes sensory receptors, executive control, working memory, long-term memory, and the affective domain. Information enters through the senses and is processed by executive control functions like perception and attention. Working memory handles short-term storage and manipulation of information before it is committed to long-term memory, which contains knowledge stored in declarative, procedural, and contextual formats. Effective teaching should account for how this cognitive model describes how information is acquired, stored, and retrieved from memory.
This document discusses human factors and human information processing. It provides definitions of human factors as a field that applies human characteristics like perception and memory to product and system design. It then discusses models of human information processing, comparing the human to a computer system with input, processing, and output subsystems. The document outlines several stages of human information processing - perceptual, cognitive, and action stages. It also discusses perspectives on user interface design, focusing on functional, aesthetic, and structural perspectives.
Applying Machine Learning to Agricultural Data (butest)
This document discusses applying machine learning techniques to agricultural data. It describes a software tool called WEKA that allows experimenting with different machine learning algorithms on real-world datasets. As a case study, the document examines using machine learning to infer rules for culling less productive cows from dairy herd data. Several machine learning methods were tested on the data and produced encouraging results for using machine learning to help solve agricultural problems.
This document discusses an integrated approach to ontology development methodology and provides a case study using a shopping mall domain. It begins by reviewing existing ontology development methodologies and identifying their pitfalls. An integrated methodology is then proposed which aims to reduce these pitfalls. The key steps in the proposed methodology are: 1) capturing motivating user scenarios or keywords, 2) generating formal/informal questions and answers from the scenarios, 3) extracting terms and constraints, and 4) building the ontology using a top-down approach. The methodology is applied to developing an ontology for a shopping mall domain to provide multilingual information to visitors.
The document discusses efficient reasoning in artificial intelligence systems. It describes how reasoning systems use stored information to derive conclusions and answers to queries. However, as reasoning systems become more expressive, they can also become less efficient or even undecidable. The document surveys techniques for addressing this tradeoff between expressiveness and efficiency in both logic-based and probabilistic reasoning systems. These techniques allow systems to sacrifice some correctness, precision, or expressiveness to gain efficiency.
A New Active Learning Technique Using Furthest Nearest Neighbour Criterion fo... (ijcsa)
Active learning is a supervised learning method based on the idea that a machine learning algorithm can achieve greater accuracy with fewer labelled training images if it is allowed to choose the images from which it learns. Facial age classification is a technique to classify face images into one of several predefined age groups. The proposed study applies an active learning approach to facial age classification which allows a classifier to select the data from which it learns. The classifier is initially trained using a small pool of labelled training images, achieved using bilateral two-dimensional linear discriminant analysis. Then the most informative unlabelled image is found in the unlabelled pool using the furthest-nearest-neighbour criterion, labelled by the user, and added to the appropriate class in the training set. Incremental learning is performed using an incremental version of bilateral two-dimensional linear discriminant analysis. This active learning paradigm is proposed to be applied to the k-nearest-neighbour classifier and the support vector machine classifier, and the performance of the two classifiers is compared.
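The selection step can be sketched as follows, assuming plain feature vectors (the bilateral 2D-LDA subspace described above is not reproduced here):

```python
# Furthest-nearest-neighbour criterion: pick the unlabelled sample whose
# nearest labelled neighbour is furthest away, i.e. the sample least
# represented by the current training set.
import numpy as np

def furthest_nearest_neighbour(labeled, unlabeled):
    # Pairwise distances: unlabeled (m, d) against labeled (n, d).
    dists = np.linalg.norm(
        unlabeled[:, None, :] - labeled[None, :, :], axis=2)
    nearest = dists.min(axis=1)     # distance to each point's nearest label
    return int(nearest.argmax())    # furthest of those nearest distances

labeled = np.array([[0.0, 0.0], [1.0, 0.0]])
unlabeled = np.array([[0.1, 0.1], [3.0, 3.0], [0.9, 0.2]])
print(furthest_nearest_neighbour(labeled, unlabeled))  # 1 -> point (3, 3)
```

The chosen image would then be labelled by the user, added to the training set, and the subspace updated incrementally before the next query.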
The document discusses two prominent models of memory: the multi-store model and the working memory model. The multi-store model, proposed by Atkinson and Shiffrin, posits that memory consists of three stores - sensory memory, short-term memory, and long-term memory. Information moves from sensory memory to short-term memory to long-term memory through encoding and rehearsal processes. The working memory model refined this by proposing separate subsystems for visual-spatial and auditory information within short-term/working memory. Both models provided a framework for understanding how information is processed and stored in memory but also had limitations that further research helped to address.
Comparing Working Memory And The Episodic Buffer (Lakeisha Jones)
The document discusses working memory and compares it to the multi-store model of memory. It explains that working memory consists of multiple subsystems that process different types of information, unlike the multi-store model which views short-term memory as a single unitary store. The episodic buffer is introduced as an additional storage system in working memory that can integrate information from other sources like long-term memory. Research provides support for the working memory model over the multi-store model in explaining memory tasks and processes in a more detailed way.
The document discusses different types of knowledge that may need to be represented in AI systems, including objects, events, performance, and meta-knowledge. It also discusses representing knowledge at two levels: the knowledge level containing facts, and the symbol level containing representations of objects defined in terms of symbols. Common ways of representing knowledge mentioned include using English, logic, relations, semantic networks, frames, and rules. The document also discusses using knowledge for applications like learning, reasoning, and different approaches to machine learning such as skill refinement, knowledge acquisition, taking advice, problem solving, induction, discovery, and analogy.
1. Biography
John Anderson was born in Vancouver, British Columbia, in 1947. He entered the University of British Columbia with hopes of becoming a writer, but left with the dream of practicing psychology as a precise and quantitative science. He graduated at the head of his class in Arts and Science in 1968. Anderson earned his Ph.D. from Stanford in 1972 under Gordon Bower. He then spent one year at Yale as an assistant professor, three years at the University of Michigan as a Junior Fellow, one year at Yale as an associate professor, and a final year there as a full professor. He has been at Carnegie Mellon University since 1978. ACT* (pronounced "A-C-T-star") is a cognitive theory dealing primarily with memory structures.
Theory
The model describes a spreading activation model of semantic memory, combined with a production system for executing higher-level operations. According to this theory, there are three types of memory and three types of learning.
Declarative memory (WHAT) encompasses factual components and their associations and sequences.
Procedural memory, or production memory, (HOW) consists of sequences of behaviors (productions) based on conditions and actions stored in declarative memory. Productions are "if-then" rules: if x happens, then do y. New productions are formed by linking up existing ones, adding components, and deleting components.
Working memory is the part of long-term memory that is currently in consciousness. These three aspects of memory work closely together, and each has its own functions and processes.
The three types of learning are:
Generalization, in which procedures (productions) are cross-contextualized or more widely applied;
Discrimination, in which procedures (productions) become more specialized; and
Strengthening, in which procedures (productions) are applied more frequently.
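To make these three operations concrete, here is a minimal, hypothetical sketch in Python (an illustration only, not the ACT* implementation): productions as condition-action pairs, with each learning operation applied to a toy pluralization rule.

    class Production:
        def __init__(self, conditions, action, strength=1.0):
            self.conditions = set(conditions)  # facts that must hold for the rule to fire
            self.action = action               # the behavior to execute
            self.strength = strength           # grows with successful use

        def matches(self, facts):
            return self.conditions <= facts

    # A specific production: IF the goal is to pluralize "dog" THEN say "dog" + "s".
    p = Production({"goal: pluralize", "word: dog"}, "say word + s")
    print(p.matches({"goal: pluralize", "word: dog"}))  # True

    # Generalization: widen the range of application by dropping a condition.
    generalized = Production(p.conditions - {"word: dog"}, "say word + s")

    # Discrimination: narrow the range by adding a condition
    # (e.g., to exclude irregular nouns such as "mouse").
    discriminated = Production(p.conditions | {"word is regular"}, "say word + s")

    # Strengthening: each successful application raises the production's strength,
    # making it more likely to be applied again.
    p.strength += 0.5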
The theory includes notions of goal structure, problem-solving context, and feedback. Research with ACT* has shown that reaction time for fact retrieval increases as a function of the number of times the items sought were mentioned in a story. Unique content in stories is easier for the reader to retrieve.
Memory ACTIVATION determines the probability of access to memory, and the rate at which a
memory can be accessed, after a subject is cued to recall information. Two factors influence the level of
activation: how recently the person has accessed the memory, and how much they have practiced or
rehearsed the information.
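These two factors, recency and practice, are captured quantitatively by the standard ACT-R base-level learning equation, B = ln(sum over past uses j of t_j^(-d)), where t_j is the time since the j-th use and d is a decay parameter (conventionally 0.5). A small Python sketch, with made-up time lags:

    import math

    def base_level_activation(lags, d=0.5):
        # Base-level learning equation: B = ln(sum_j t_j ** -d).
        # Each past use contributes activation that decays with time.
        return math.log(sum(t ** -d for t in lags))

    # A memory used recently and often is more active than one used once, long ago.
    print(base_level_activation([1, 10, 100]))  # recent and well practiced
    print(base_level_activation([100]))         # a single old exposure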
SPREADING ACTIVATION proposes that activation travels along a network of connections, so that once cued, a subject may have multiple responses based on the connections among bits of information in memory. Spreading activation is not believed to be entirely under the subject's control; cueing may activate remote connections without the subject's volition being involved. This tendency for memories to be activated is called ASSOCIATIVE PRIMING.
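A minimal sketch of the idea, using an invented four-node associative network (illustrative Python, not the ACT* mechanism): cueing one node passes a fraction of its activation to its neighbors, which can leave remote, unrequested nodes weakly active, i.e. primed.

    # Hypothetical associative links between concepts.
    network = {
        "doctor":   ["nurse", "hospital"],
        "nurse":    ["doctor", "needle"],
        "hospital": ["doctor"],
        "needle":   ["nurse"],
    }

    def spread(cue, steps=2, fraction=0.5):
        activation = {node: 0.0 for node in network}
        activation[cue] = 1.0
        for _ in range(steps):
            new = dict(activation)
            for node, level in activation.items():
                for neighbor in network[node]:
                    new[neighbor] += fraction * level / len(network[node])
            activation = new
        return activation

    # Cueing "doctor" leaves "needle" slightly active even though the subject
    # never asked for it: associative priming.
    print(spread("doctor"))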
ACT-R is a general theory of cognition developed by John Anderson and colleagues at
Carnegie Mellon University that focuses on memory processes. It is an elaboration of the original ACT
theory (Anderson, 1976) and builds upon HAM, a model of semantic memory proposed by Anderson &
Bower (1973). Anderson (1983) provides a complete description of ACT-R. In addition, Anderson (1990)
provides his own critique of ACT-R and Anderson (1993) provides the outline for a broader development of
the theory. See the CMU ACT site for the most up-to-date information on the theory.
Principles
1. Identify the goal structure of the problem space.
2. Provide instruction in the context of problem-solving.
3. Provide immediate feedback on errors.
4. Minimize working memory load.
5. Adjust the "grain size" of instruction with learning to account for the knowledge compilation process.
6. Enable the student to approach the target skill by successive approximation.
The Adaptive Character of Thought - Rational (ACT-R) is a theory of cognition developed principally by John Anderson at Carnegie Mellon University [4]. ACT-R models how humans recall "chunks" of information from memory and how they solve problems by breaking them down into subgoals and applying knowledge from working memory as needed.
1.1 Roadmap
We first introduce the crucial distinction between declarative knowledge and procedural knowledge in Section 2. The document then proceeds in a top-down fashion: Under the assumption that the agent (a human, or possibly a computer) already has all of the knowledge he/she needs, we examine in Section 3 how the decision-making process is made on a rational basis under ACT-R. In particular, we describe the mechanism by which a particular "production rule," corresponding to the "actions" of ACT-R, is chosen out of many possible alternatives. In Section 4, we remove the assumption that knowledge is already available, and describe the ACT-R processes by which new knowledge is acquired. This includes both the creation of new memories, as well as the strengthening (and decay) of existing ones. Finally, in Sections 6 and 7, we discuss how ACT-R partially models the Spacing Effect and the Power Laws of Learning/Forgetting.
2 Declarative versus Procedural Knowledge
Under ACT-R, human knowledge is divided into two disjoint but related sets of knowledge: declarative and procedural. Declarative knowledge comprises many knowledge chunks, which are the current set of facts that are known and goals that are active. Two examples of chunks are "The bank is closed on Sundays" and "The current goal is to run up a hill." Notice that each chunk may refer to other chunks. For instance, our first example chunk refers to the concepts of "bank," "closed," and "Sunday," which presumably are themselves all chunks in their own right. When a chunk i refers to, or is referred to by, another chunk j, then chunk i is said to be connected to chunk j. This relationship is not clearly defined in ACT-R; for instance, whether the relationship is always symmetrical, or whether it can be reflexive (i.e., a chunk referring to itself in recursive fashion), is not specified.
Procedural knowledge is the set of production rules (if/then statements that specify how a particular goal can be achieved when a specified pre-condition is met) that the agent currently knows. A production rule might state, for instance, "If I am hungry, then eat." For the domain of intelligent tutoring systems, for which ACT* and ACT-R were partly conceived, a more typical rule might be, "If the goal is to prove triangle similarity, then prove that any two pairs of angles are congruent." The human memory contains many declarative knowledge chunks and production rules. At any point in time, when a person is trying to complete some task, a production rule, indicating the next step to take in order to solve the problem, may "fire" if the rule's pre-condition, which is a conjunction of logical propositions that must hold true according to the current state of declarative memory, is fulfilled. Since the currently available set of knowledge chunks may fulfill the pre-conditions of multiple production rules, a competition exists among production rules to select the one that will actually fire. (This competition will be described later.) Whichever rule ends up firing may result either in the goal being achieved, or in the creation of new knowledge chunks in working memory, which may then trigger more production rules, and so on.
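The match-select-fire cycle just described can be sketched in a few lines of illustrative Python (a toy forward-chaining loop with a deliberately trivial conflict-resolution policy, not the ACT-R implementation):

    # Each production is a (pre-conditions, effects) pair over chunks.
    productions = [
        ({"hungry"}, {"goal: eat"}),                   # IF hungry THEN set the goal to eat
        ({"goal: eat", "food in fridge"}, {"eating"}),
    ]

    def run(chunks, max_cycles=10):
        for _ in range(max_cycles):
            # Match: rules whose pre-conditions hold and whose effects
            # are not already present in declarative memory.
            matching = [(cond, act) for cond, act in productions
                        if cond <= chunks and not act <= chunks]
            if not matching:
                break
            conditions, actions = matching[0]  # trivial conflict resolution
            chunks |= actions                  # firing creates new chunks,
                                               # which may trigger more rules
        return chunks

    print(run({"hungry", "food in fridge"}))
    # -> {'hungry', 'food in fridge', 'goal: eat', 'eating'}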
Process columns" is pushed onto the goal stack."
ACT* THEORY
Explaining memory effects
History and Orientation
ACT* is a general theory of cognition developed by John Anderson that focuses on memory processes.
ACT* distinguishes among three types of memory structures: declarative, procedural and working memory.
Declarative memory takes the form of a semantic net linking propositions, images, and sequences by
associations. Procedural memory (also long-term) represents information in the form of productions; each
production has a set of conditions and actions based in declarative memory. The nodes of long-term
memory all have some degree of activation and working memory is that part of long-term memory that is
most highly activated.
Core Assumptions and Statements
According to ACT*, all knowledge begins as declarative information; procedural knowledge is learned by
making inferences from already existing factual knowledge. ACT* supports three fundamental types of
learning: generalization, in which productions become broader in their range of application, discrimination,
in which productions become narrow in their range of application, and strengthening, in which some
productions are applied more often. New productions are formed by the conjunction or disjunction of
existing productions.
Conceptual Model
Source: Anderson (1976).
Favorite Methods
Experimental research and Computational simulations.
Scope and Application
ACT* can explain a wide variety of memory effects as well as account for higher order skills such as geometry proofs,
programming and language learning (see Anderson, 1983; 1990). ACT* has been the basis for intelligent tutors (Anderson, Boyle, Farrell & Reiser, 1987).
Example
One of the strengths of ACT is that it includes both proposition and procedural representation of knowledge as well as
accounting for the use of goals and plans. For example, here is a production rule that could be used to convert declarative
sentences into a question:
IF the goal is to question whether the proposition (LVrelation LVagent LVobject) is true THEN set as subgoals
1. to plan the communication (LVrelation LVagent LVobject)
2. to move the first word in the description of LVrelation to the beginning of the sentence
3. to execute the plan
This production rule could be used to convert the sentence: "The lawyer is buying the car." into the question: "Is the lawyer
buying the car?"
From Ebbinghaus onward psychology has seen an enormous amount of research invested in the study of learning and
memory. This research has produced a steady stream of results and, with a few "mini-revolutions" along the way, a steady
increase in our understanding of how knowledge is acquired, retained, retrieved, and utilized. Throughout this history there
has been a concern with the relationship of this research to its obvious application to education. The first author has written
two textbooks (Anderson, 1995a, 1995b) summarizing some of this research. In both textbooks he has made efforts to
identify the implications of this research for education. However, he left both textbooks feeling very dissatisfied -- that the
intricacy of research and theory on the psychological side was not showing through in the intricacy of educational
application. One finds in psychology many claims of relevance of cognitive psychology research for education. However,
these claims are loose and vague and contrast sharply with the crisp theory and results that exist in the field.
To be able to rigorously understand what the implications are of cognitive psychology research one needs a rigorous theory
that bridges the gap between the detail of the laboratory experiment and the scale of the educational enterprise. This chapter
is based on the ACT-R theory (Anderson, 1993, 1996) which has been able to explain learning in basic psychology
experiments and in a number of educational domains. ACT-R has been advertised as a "simple theory of learning and
cognition". It proposes that complex cognition is composed of relatively simple knowledge units which are acquired
according to relatively simple principles. Human cognition is complex but this complexity reflects complex composition of the
basic elements and principles just as a computer can produce complex aggregate behavior from simple computing elements.
The ACT-R perspective places a premium on the practice which is required to learn permanently the components of the
desired competence. The ACT-R theory claims that to learn a complex competence each component of that competence
must be mastered. It is a sharp contrast to many educational claims, supposedly based in cognitive research, that there are
moments of insight or transformations when whole knowledge structures become reorganized or learned. In contrast, ACT-R
implies that there is no “free lunch” and each piece of knowledge requires its own due of learning. Given the prevalence of the “free lunch myth” we will endeavor to show that it is not true empirically and to explain why it cannot be true within the ACT-R theory.
This chapter will have the following organization. First we will describe the ACT-R theory and its learning principles. In the
light of this theory, we will identify what we think are the important implications of psychological research for education. We
will also address the issue of why so much of the research on learning and memory falls short of significant educational
application. We will devote special attention to the issues of insight, learning with understanding, and transfer which are part
of the free lunch myth. Finally, we will describe how we have tried to bring the lessons of this analysis to bear in the design of
our cognitive tutors (Anderson, Boyle, Corbett, & Lewis, 1990; Anderson, Corbett, Koedinger, & Pelletier, 1995).
The ACT-R Theory
The ACT-R theory admits of three basic binary distinctions. First, there is a distinction between two types of knowledge: declarative knowledge of facts and procedural knowledge of how to do various cognitive tasks. Second, there is the
distinction between the performance assumptions about how ACT-R deploys what it knows to solve a task and the learning
assumptions about how it acquires new knowledge. Third, there is a distinction between the symbolic level in ACT-R which
involves discrete knowledge structures and a sub-symbolic level which involves neural-like activation-based processes that
determine the availability of these symbolic structures. We will first describe ACT-R at the symbolic level. A symbolic-level
analysis of the knowledge structures in a domain corresponds basically to a task analysis of what needs to be learned in that
domain. However, as we will see, the availability of these symbolic structures depends critically on the subsymbolic
processes.
Declarative and Procedural Knowledge
Declarative knowledge reflects the factual information that a person knows and can report. According to ACT-R declarative
knowledge is represented as a network of small units of primitive knowledge called chunks. Figure 1 is a graphical display of
a chunk encoding the addition fact that 3+4=7 and some of its surrounding facts. These are some of the many facts that a
child might have involving these numbers. Frequently, one encounters the question “What does it mean to understand 3 or to
understand numbers in general?” The answer in ACT-R is quite definite on this matter: Understanding involves a large
number of declarative chunks like those in Figure 1 plus a large number of procedural units which determine how this
knowledge is used. According to the ACT-R theory, understanding requires nothing more or less than such a set of
knowledge units. Understanding of a concept results when we have enough knowledge about the concept that we can
flexibly solve significant problems involving the concept.
Procedural knowledge, such as mathematical problem-solving skill, is represented by a large number of rule-like units called
productions. Production rules are condition-action units which respond to various problem-solving conditions with specific
cognitive actions. The steps of thought in a production system correspond to a sequence of such condition-action rules
which execute or (in the terminology of production systems) fire. Production rules in ACT-R specify in their condition the
existence of specific goals and often involve the creation of subgoals. For instance, suppose a child was at the point
illustrated below in the solution of a multi-column addition problem:
534
+248
2
Focused on the tens column, the following production rule might apply, taken from the ACT-R simulation of multi-column addition in Anderson (1993):
IF the goal is to add n1 and n2 in a column and n1 + n2 = n3
THEN set as a subgoal to write n3 in that column
This production rule specifies in its condition the goal of working on the tens column and involves a retrieval of a declarative chunk like the 3+4=7 fact in Figure 1. In its action it creates a subgoal, which might involve things like processing a carry. It is many procedural rules like this, along with the chunks, which in total produce what we recognize as competence in a domain like mathematics.
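A sketch of the whole column-by-column procedure (illustrative Python under simplifying assumptions, such as both addends having the same number of digits; this is not Anderson's 1993 Lisp simulation):

    def add_columns(top, bottom):
        result, carry = [], 0
        # Work right to left, one column goal at a time.
        for d1, d2 in zip(reversed(str(top)), reversed(str(bottom))):
            # IF the goal is to add n1 and n2 in a column and n1 + n2 = n3
            n3 = int(d1) + int(d2) + carry
            # THEN set as a subgoal to write n3 in that column
            # (processing a carry when n3 has two digits).
            result.append(str(n3 % 10))
            carry = n3 // 10
        if carry:
            result.append(str(carry))
        return "".join(reversed(result))

    print(add_columns(534, 248))  # -> 782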
ACT-R
ACT-R (pronounced act-ARE: Adaptive Control of Thought—Rational) is a cognitive architecture mainly developed by John
Robert Anderson at Carnegie Mellon University. Like any cognitive architecture, ACT-R aims to define the basic and
irreducible cognitive and perceptual operations that enable the human mind. In theory, each task that humans can perform
should consist of a series of these discrete operations.
Most of the ACT-R basic assumptions are also inspired by the progress of cognitive neuroscience, and ACT-R can be seen
and described as a way of specifying how the brain itself is organized in a way that enables individual processing modules to
produce cognition.
Inspiration
ACT-R has been inspired by the work of Allen Newell, and especially by his lifelong championing of the idea of unified theories as the only way to truly uncover the underpinnings of cognition.[1] In fact, John Anderson usually credits Allen Newell as the major source of influence over his own theory.
What ACT-R looks like
Like other influential cognitive architectures (including Soar, CLARION, and EPIC), the ACT-R theory has a computational
implementation as an interpreter of a special coding language. The interpreter itself is written in Lisp, and might be loaded
into any of the most common distributions of the Lisp language.
This means that any researcher may download the ACT-R code from the ACT-R website, load it into a Lisp distribution, and
gain full access to the theory in the form of the ACT-R interpreter.
Also, this enables researchers to specify models of human cognition in the form of a script in the ACT-R language. The
language primitives and data-types are designed to reflect the theoretical assumptions about human cognition. These
assumptions are based on numerous facts derived from experiments in cognitive psychology and brain imaging.
Like a programming language, ACT-R is a framework: for different tasks (e.g., Tower of Hanoi, memory for text or for lists of words, language comprehension, communication, aircraft controlling), researchers create "models" (i.e., programs) in ACT-R. These models reflect the modelers' assumptions about the task within the ACT-R view of cognition. The model might then be run.
Running a model automatically produces a step-by-step simulation of human behavior which specifies each individual
cognitive operation (i.e., memory encoding and retrieval, visual and auditory encoding, motor programming and execution,
mental imagery manipulation). Each step is associated with quantitative predictions of latencies and accuracies. The model
can be tested by comparing its results with the data collected in behavioral experiments.
In recent years, ACT-R has also been extended to make quantitative predictions of patterns of activation in the brain, as
detected in experiments with fMRI. In particular, ACT-R has been augmented to predict the shape and time-course of
the BOLD response of several brain areas, including the hand and mouth areas in the motor cortex, the left prefrontal cortex,
the anterior cingulate cortex, and the basal ganglia.
Brief outline
ACT-R's most important assumption is that human knowledge can be divided into two irreducible kinds of representations: declarative and procedural.
Within the ACT-R code, declarative knowledge is represented in the form of chunks, i.e. vector representations of individual
properties, each of them accessible from a labelled slot.
Chunks are held and made accessible through buffers, which are the front-ends of the modules, i.e. specialized and largely independent brain structures.
There are two types of modules:
Perceptual-motor modules, which take care of the interface with the real world (i.e., with a simulation of the
real world). The most well-developed perceptual-motor modules in ACT-R are the visual and the manual
modules.
Memory modules. There are two kinds of memory modules in ACT-R:
Declarative memory, consisting of facts such as Washington, D.C. is the capital of the United States, France is a country in Europe, or 2+3=5
Procedural memory, made of productions. Productions represent knowledge about how we do
things: for instance, knowledge about how to type the letter "Q" on a keyboard, about how to
drive, or about how to perform addition.
All the modules can only be accessed through their buffers. The contents of the buffers at a given moment in time represent the state of ACT-R at that moment. The only exception to this rule is the procedural module, which stores and applies procedural knowledge. It does not have an accessible buffer and is actually used to access other modules' contents.
Procedural knowledge is represented in the form of productions. The term "production" reflects the actual implementation of ACT-R as a production system, but, in fact, a production is mainly a formal notation to specify the information flow from cortical areas (i.e. the buffers) to the basal ganglia, and back to the cortex.
At each moment, an internal pattern matcher searches for a production that matches the current state of the buffers. Only
one such production can be executed at a given moment. That production, when executed, can modify the buffers and thus
change the state of the system. Thus, in ACT-R, cognition unfolds as a succession of production firings.
The symbolic vs. connectionist debate
In the cognitive sciences, different theories are usually ascribed to either the "symbolic" or the "connectionist" approach to cognition. ACT-R clearly belongs to the "symbolic" field and is classified as such in standard textbooks and collections.[2] Its entities (chunks and productions) are discrete and its operations are syntactical, that is, not referring to the semantic content of the representations but only to their properties that deem them appropriate to participate in the computation(s). This is seen clearly in the chunk slots and in the properties of buffer matching in productions, both of which function as standard symbolic variables.
Members of the ACT-R community, including its developers, prefer to think of ACT-R as a general framework that specifies
how the brain is organized, and how its organization gives birth to what is perceived (and, in cognitive psychology,
investigated) as mind, going beyond the traditional symbolic/connectionist debate. None of this, naturally, argues against the
classification of ACT-R as symbolic system, because all symbolic approaches to cognition aim to describe the mind, as a
product of brain function, using a certain class of entities and systems to achieve that goal.
A common misunderstanding suggests that ACT-R may not be a symbolic system because it attempts to characterize brain function. This is incorrect on two counts: First, because all approaches to computational modeling of cognition, symbolic or otherwise, must in some respect characterize brain function, because the mind is brain function. And second, because all such approaches, including connectionist approaches, attempt to characterize the mind at a cognitive level of description and not at the neural level, because it is only at the cognitive level that important generalizations can be retained.[3]
Further misunderstandings arise because of the associative character of certain ACT-R properties, such as chunks
spreading activation to each other, or chunks and productions carrying quantitative properties relevant to their selection.
None of these properties counter the fundamental nature of these entities as symbolic, regardless of their role in unit
selection and, ultimately, in computation.
Theory vs. implementation, and Vanilla ACT-R
The importance of distinguishing between the theory itself and its implementation is usually highlighted by ACT-R
developers.
In fact, much of the implementation does not reflect the theory. For instance, the actual implementation makes use of
additional 'modules' that exist only for purely computational reasons, and are not supposed to reflect anything in the brain
(e.g., one computational module contains the pseudo-random number generator used to produce noisy parameters, while
another holds naming routines for generating data structures accessible through variable names).
Also, the actual implementation is designed to enable researchers to modify the theory, e.g. by altering the standard
parameters, or creating new modules, or partially modifying the behavior of the existing ones.
Finally, while Anderson's laboratory at CMU maintains and releases the official ACT-R code, other alternative implementations of the theory have been made available.[4] These alternative implementations include jACT-R (written in Java by Anthony M. Harrison at the Naval Research Laboratory) and Python ACT-R (written in Python by Terrence C. Stewart and Robert L. West at Carleton University, Canada).[5]
Similarly, ACT-RN (now discontinued) was a full-fledged neural implementation of the 1993 version of the theory.[6] All of these versions were fully functional, and models have been written and run with all of them.
Because of these implementational degrees of freedom, the ACT-R community usually refers to the "official", lisp-based,
version of the theory, when adopted in its original form and left unmodified, as "Vanilla ACT-R".
Applications
Over the years, ACT-R models have been used in more than 700 different scientific publications, and have been cited in
many more.
Memory, attention, and executive control
The ACT-R declarative memory system has been used to model human memory since its inception. In the course of years, it has been adopted to successfully model a large number of known effects. They include the fan effect of interference for associated information,[7] primacy and recency effects for list memory,[8] and serial recall.[9]
ACT-R has been used to model attentive and control processes in a number of cognitive paradigms. These include the Stroop task,[10][11] task switching,[12][13] the psychological refractory period,[14] and multi-tasking.[15]
Natural language
A number of researchers have been using ACT-R to model several aspects of natural language understanding and production. They include models of syntactic parsing,[16] language understanding,[17] language acquisition,[18] and metaphor comprehension.[19]
Complex tasks
ACT-R has been used to capture how humans solve complex problems like the Tower of Hanoi,[20] or how people solve algebraic equations.[21] It has also been used to model human behavior in driving and flying.[22]
With the integration of perceptual-motor capabilities, ACT-R has become increasingly popular as a modeling tool in human factors and human-computer interaction. In this domain, it has been adopted to model driving behavior under different conditions,[23][24] menu selection and visual search on computer applications,[25][26] and web navigation.[27]
Cognitive neuroscience
More recently, ACT-R has been used to predict patterns of brain activation during imaging experiments.[28] In this field, ACT-R models have been successfully used to predict prefrontal and parietal activity in memory retrieval,[29] anterior cingulate activity for control operations,[30] and practice-related changes in brain activity.[31]
Education
ACT-R has often been adopted as the foundation for cognitive tutors.[32][33] These systems use an internal ACT-R model to mimic the behavior of a student and personalize his/her instruction and curriculum, trying to "guess" the difficulties that students may have and provide focused help.
Such "Cognitive Tutors" are being used as a platform for research on learning and cognitive modeling as part of the
Pittsburgh Science of Learning Center. Some of the most successful applications, like the Cognitive Tutor for Mathematics,
are used in thousands of schools across the United States.
Brief history
Early years: 1973-1990
ACT-R is the ultimate successor of a series of increasingly precise models of human cognition developed by John R. Anderson.
Its roots can be traced back to the original HAM (Human Associative Memory) model of memory, described by John R. Anderson and Gordon Bower in 1973.[34] The HAM model was later expanded into the first version of the ACT theory.[35] This was the first time procedural memory was added to the original declarative memory system, introducing a computational dichotomy that was later proved to hold in the human brain.[36] The theory was then further extended into the ACT* model of human cognition.[37]
Integration with rational analysis: 1990-1998
In the late eighties, Anderson devoted himself to exploring and outlining a mathematical approach to cognition that he named Rational Analysis.[38] The basic assumption of Rational Analysis is that cognition is optimally adaptive, and precise estimates of cognitive functions mirror statistical properties of the environment.[39] Later on, he came back to the development of the ACT theory, using Rational Analysis as a unifying framework for the underlying calculations. To highlight the importance of the new approach in the shaping of the architecture, its name was modified to ACT-R, with the "R" standing for "Rational".[40]
In 1993, Anderson met with Christian Lebiere, a researcher in connectionist models most famous for developing, with Scott Fahlman, the Cascade Correlation learning algorithm. Their joint work culminated in the release of ACT-R 4.0.[41] Thanks to Mike Byrne (now at Rice University), version 4.0 also included optional perceptual and motor capabilities, mostly inspired by the EPIC architecture, which greatly expanded the possible applications of the theory.
Current developments: 1998-present
After the release of ACT-R 4.0, John Anderson became more and more interested in the underlying neural plausibility of his lifetime theory, and began to use brain imaging techniques in pursuit of his goal of understanding the computational underpinnings of the human mind.
The necessity of accounting for brain localization pushed for a major revision of the theory. ACT-R 5.0 introduced the concept of modules, specialized sets of procedural and declarative representations that could be mapped to known brain systems.[42] In addition, the interaction between procedural and declarative knowledge was mediated by newly introduced buffers, specialized structures for holding temporarily active information (see the section above). Buffers were thought to reflect cortical activity, and a subsequent series of studies later confirmed that activations in cortical regions could be successfully related to computational operations over buffers.
A new version of the code, completely rewritten, was presented in 2005 as ACT-R 6.0. It also included significant improvements in the ACT-R coding language.
Spin-offs
The long development of the ACT-R theory gave birth to a certain number of parallel and related projects. The most important ones are the PUPS production system, an initial implementation of Anderson's theory, later abandoned, and ACT-RN, a neural network implementation of the theory developed by Christian Lebiere.[6]
Lynne Reder, also at Carnegie Mellon University, developed in the early nineties SAC, a model of conceptual and perceptual aspects of memory that shares many features with the ACT-R core declarative system, although differing in some assumptions.
1 Definition
John R. Anderson et al.'s Adaptive Control of Thought (ACT*) theories are theories of human information processing and knowledge representation.
ACT theory started out in the Simon-Newell tradition, i.e. as a purely symbolic model of human thought and memory. The latest version is Adaptive Control of Thought-Rational (ACT-R Version 6) (Anderson et al., 2004), which incorporates more recent ideas about embodiment (perception and action) and subsymbolic processes.
2 Overview
Related to the distinction of declarative vs. procedural knowledge, the critical atomic components of cognition and human
memory are identified as chunks and productions. According to Yates (2007:32), Anderson (1996) claims the following: “
All that there is to intelligence is the simple accrual and tuning of many small units of knowledge that in total produce
complex cognition. The whole is no more than the sum of its parts, but it has a lot of parts. (p. 356).”
According to Yates (2007:33):
Procedural knowledge consists of condition-action (IF-THEN) pairs called productions which are activated
according to rules relating to a goal structure (Anderson, 1983). Within the ACT framework, all knowledge is initially
declarative and is interpreted by general procedures. Productions, then, connect declarative knowledge with behavior.
Procedural knowledge represents "how to do things." It is knowledge that is displayed in our behavior, but that we do not
hold consciously (Anderson &Lebiere, 1998). As a task is performed, interpretive applications are gradually replaced with
productions that perform the task directly, a process called proceduralization. For example, rehearsing how to manually shift
gears in a car is gradually replaced by a production that recognizes and executes the production. In other words, explicit
declarative knowledge is replaced by direct application of procedural knowledge (Anderson, 2005). Sequences of
productions may be combined into a single production, a process called composition. Together, proceduralization and
composition are called knowledge compilation, which creates task-specific productions during practice. The process of
proceduralization affects working memory by reducing the load resulting from information being retrieved from long-term
memory.
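To illustrate composition, here is a toy Python sketch of the gear-shifting example (hypothetical function names, illustrative only, not the ACT-R compilation mechanism):

    def press_clutch(state):
        return state | {"clutch down"}

    def move_stick(state):
        return state | {"in second gear"}

    # Interpretive stage: each step is retrieved and fired as its own production.
    def shift_interpreted(state):
        return move_stick(press_clutch(state))

    # After composition: one task-specific production performs the whole shift
    # directly, with no intermediate retrievals from long-term memory.
    def shift_compiled(state):
        return state | {"clutch down", "in second gear"}

    assert shift_interpreted(set()) == shift_compiled(set())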
See production system and Soar for some technical background.
Summary of ACT-R (Anderson et al. 2004):
1. There are multiple independent modules whose information processing is encapsulated.
2. The modules can place chunks reflecting their processing in their buffers and the production system can detect when critical patterns are satisfied among these chunks.
3. From those productions whose conditions are satisfied a single production will be selected at any time and fire, leading to updates to various buffers that in turn can trigger information processing in their respective modules.
4. While chunks and productions are the symbolic components of the system reflecting its overall information flow, chunks have subsymbolic activations and productions have subsymbolic utilities that control which chunks and productions get used.
5. Learning can involve either acquiring new chunks and productions or tuning their subsymbolic parameters.
6. These processes are stochastic and take place in real time.
3 ACT as modeling framework
“ACT-R is a cognitive architecture: a theory about how human cognition works. On the exterior, ACT-R looks like a
programming language; however, its constructs reflect assumptions about human cognition. These assumptions are based
on numerous facts derived from psychology experiments” (About, retrieved 11:05, 16 November 2007 (MET)).
4 ACT theory in education
ACT* theory can explain a range of learning types and therefore influence instructional design models. It is also popular in research on intelligent tutoring systems, since ACT* is a model of a cognitive architecture embedded in a modeling/programming language. As such it can be used to model learners, e.g. to "understand" what difficulties they might have.
Overview
ACT-R is a model of the human cognitive process developed and used by cognitive psychologists, which can be applied to HCI. It is an acronym for "The Adaptive Control of Thought - Rational". While it is often referred to as "the ACT-R theory", it is not properly considered a theory of cognition, but rather a cognitive architecture that can accommodate different theories. The scope of ACT-R is greater than the scope of any particular theory, and multiple (possibly competing) theories can fit within the framework of ACT-R. It was developed to model problem solving, learning and memory. ACT-R is generally used by researchers in cognitive psychology, but researchers have also found applications in HCI.
Production rules
A fundamental characteristic of ACT-R is that it is a production system theory. The basic premise of a
production system theory is that a cognitive skill is composed of conditional statements known as production rules. A
production rule is a statement that describes an action which should be taken if a condition is met, sometimes referred to as
a condition-action pair. For example:
IF the goal is to classify a shape
and the shape has four equal sides
THEN classify the shape as a square.
Cognitive tasks are achieved by stringing together production rules, and applying them to working memory.
Such a collection of production rules is referred to simply as a production. When a production rule is applied, it is said to fire.
Principles
In ACT-R, there are two different categories of long-term memory: declarative and procedural. Declarative
memory consists of facts such as "Annapolis is the capital of Maryland", "A square has four equal sides", or "8*7=56".
Procedural memory consists of our knowledge of how to do things, though we may not be able to verbalize how we are able
to do these things. Examples of procedural knowledge include our ability to drive a car or speak English. Declarative
knowledge is represented in ACT-R by units called chunks. Procedural knowledge is represented by productions, which are
collections of production rules. ACT-R defines a syntax to represent chunks and productions. An ACT-R model can be
represented as a computer program in the LISP programming language, and can be executed. In this syntax, chunks have a
schema-like representation containing an "isa" field specifying the category of knowledge, and additional fields to encode the
knowledge. Below is an encoding of the fact "8*7=56"
fact8*7
   isa            multiplication-fact
   multiplicand1  eight
   multiplicand2  seven
   product        fifty-six
Below is an encoding of the production rules for counting from one number to another. It is taken from the ACT-R Research Group website.
(P increment
   =goal>
      ISA      count-from
      number   =num1
   =retrieval>
      ISA      count-order
      first    =num1
      second   =num2
 ==>
   =goal>
      number   =num2
   +retrieval>
      ISA      count-order
      first    =num2
)
Within this production rules paradigm, cognitive tasks are performed by assembling production rules by
setting goals, and by reading and writing to working memory (sometimes referred to as buffers). Goals (and subgoals) are
represented on a structure called the goal stack.
Two other important concepts in ACT-R are pattern matching and conflict resolution. Pattern matching is the
process which determines if a production's conditions are met by the current state of working memory. Conflict resolution is
the process that determines which production should be applied if several production rules are applicable.
ACT-R models are defined on two levels of abstraction: the symbolic level and the subsymbolic level. The
symbolic level is concerned with productions and chunks as described above. These high-level concepts are implemented
by a subsymbolic structure, which consists of a collection of massively parallel processes which are modeled by a set of
mathematical equations. These subsymbolic elements affect the high-level chunks and productions. They can be used to
determine which production to select for execution, and they determine the speed at which information can be retrieved from
declarative memory. They are also responsible for most of the learning processes in ACT-R. The ideal is that this
subsymbolic system accurately models the neurological information processing units of the human brain.
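A sketch of these two subsymbolic roles in illustrative Python (standard ACT-R equation forms, but with made-up parameters; ACT-R's actual noise distribution and utility-learning rules differ, so treat this as a simplification):

    import math, random

    def choose_production(utilities, noise_s=0.5):
        # Conflict resolution: the production whose (noisy) utility is
        # highest is selected to fire.
        noisy = {name: u + random.gauss(0, noise_s) for name, u in utilities.items()}
        return max(noisy, key=noisy.get)

    def retrieval_time(activation, latency_factor=1.0):
        # ACT-R-style retrieval latency: time = F * exp(-A), so more
        # active chunks are retrieved faster.
        return latency_factor * math.exp(-activation)

    print(choose_production({"process-carry": 2.0, "write-digit": 1.0}))
    print(retrieval_time(0.5))   # well-practiced chunk: fast
    print(retrieval_time(-2.0))  # weak chunk: slow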
Scope and Application
Since ACT-R is a cognitive architecture, it covers a wide range of human cognitive tasks, focusing on learning
and problem solving. It has been previously applied to modeling such tasks as solving the Tower of Hanoi puzzle, memory
for text or for lists of words, language comprehension, communication and aircraft controlling. To develop an ACT-R model,
one must add domain-specific knowledge to the ACT-R architecture.
Examples
ACT-R models tend to be quite large for all but the most non-trivial of tasks. A prototypical example is the ACTR model for solving the standard Tower of Hanoi problem. This example can be found at the ACT-R research group website.
ACT-R has only recently been applied to HCI. Many of these applications are at a preliminary "proof-ofconcept" stage. Byrne (1999) used ACT-R (specifically, ACT-R/PM) to model random menu selection. Users searched for a
target item on a menu, timings were recorded and compared to an ACT-R model.
Another interesting example of the use of ACT-R applied specifically to HCI is given by Ritter et al. (2002). They
suggest the use of ACT-R/PM to design a Cognitive Model Interface Evaluation (CMIE) tool. Such a tool can display a user
interface, run a cognitive model to interact with the interface, provide display facilities for model traces, and predict
performance. They are currently developing a prototype system.