18CSC305J – Artificial Intelligence
Dr S Raguvaran | CINTEL | SRM University
Google Classroom Codes
• 18CSC305J – Artificial Intelligence (Lab | DO 2 | Batch 1): bo4wppl
• 18CSC305J – Artificial Intelligence (Lab | DO 5 | Batch 2): ptmzhtj
• 18CSC305J – Artificial Intelligence (Theory): v7wadnh
Course Outcomes
CO1 Formulate a problem and build intelligent agents
CO2 Apply appropriate searching techniques to solve a real-world problem
CO3 Analyze the problem and infer new Knowledge using suitable Knowledge representation schemes
CO4 Prepare a plan and solve real-world problems using learning algorithms
CO5 Design an expert system and implement advanced techniques in Artificial Intelligence agents
Overall Assessment Plan
CLA 1 (10 marks): Theory – 5 marks (CT 1, Unit 1); Lab – 5 marks (3 experiments)
CLA 2 (15 marks): Theory – 7.5 marks (CT 2, Units 2 & 3); Lab – 5 marks (4 experiments) + 2.5 marks (project)
CLA 3 (15 marks): Theory – 2.5 marks (CT 3, Units 4 & 5) + 5 marks (project); Lab – 5 marks (3 experiments) + 2.5 marks (project)
CLA 4 (10 marks): Theory – 5 marks (Assignment / Quiz / ST / Hackerrank / …); Lab – 5 marks (project)
Test Schedule
S.No.  Date        Test            Topics             Duration
1      01-02-2024  Cycle Test I    Unit I             1 hour
2      21-03-2024  Cycle Test II   Units II and III   2 hours
3      30-04-2024  Cycle Test III  Units IV and V     1 hour
Theory Assessment Plan
CYCLE TEST PATTERN
CT-1
Pattern – 25 marks
10 MCQs,
3 * 5-mark questions (out of 4)
Portion – Unit 1
CT-2
Pattern – 50 marks (Open Book Examination)
5 * 10-mark questions (out of 7)
Portion – Unit 2 & 3
CT-3
Pattern – 25 marks
10 MCQs,
3 * 5-mark questions (out of 4)
Portion – Unit 4 & 5
Lab Assessment Plan
1 Lab 1: Implementation of toy problems
CLAP1: 5 marks
3 Exp= 3 marks
Viva= 2 marks
2 Lab 2: Developing agent programs for real world
problems
3 Lab 3: Implementation of constraint satisfaction
problems
4 Lab4: Implementation and Analysis of DFS and BFS
for same application
5 Lab 5: Developing Best first search and A* Algorithm
for real world problems
CLAP2: 7.5 marks
4 Exp= 5 marks
Project Implementation-(Review 1) = 2.5 marks
6 Lab 6: Implementation of unification for real world
problems.
7 Lab 7: Implementation of uncertain methods for an
application (Fuzzy logic/ Dempster Shafer
Theory/Monty Hall)
8 Lab 8: Implementation of learning algorithms for
an application CLAP3: 7.5 marks
9 Lab 9: Implementation of NLP programs 3 Exp= 5 marks
Project Implementation-(Final Review) = 2.5
marks
10 Lab 10: Applying deep learning methods to solve
an application
Course Project:
 Documentation – 5 marks
 Flow – 1 mark
 Explanation – 2 marks
 Properly documented – 2 marks CLAP4: 5 marks
Rubrics for Lab Exercises
Program Documentation:
• Aim
• Steps / Procedure
• Implementation / Code
• Output (screenshots)
• Result
Rubrics for Lab Programs:
• Problem explanation: 2 marks
• Implementation: 3 marks
• Coding standards: 2 marks
• Output: 3 marks
• Total: 10 marks
Rubrics for Mini Project
Case Study: (Total 15 Marks)
Review-1: (2.5 marks – CT2 (Lab))
Team and Title (societal benefit) Selection – 2 marks
Problem statement – 1 mark
Objective with technical depth – 2 marks
Total 5 marks – converted to 2.5 marks.
Review-2: (2.5 Marks – CT3 (Lab))
Proposed Workflow – 2 marks
Implementation – 5 marks
Presentation (communication, individual contribution, question and answers) – 3 marks
Total 10 marks – converted to 2.5 marks.
Rubrics for Mini Project (contd.)
Review-3: (5 Marks – CT3 Theory)
Project demonstration, Explanation – 5 marks
Presentation (communication, individual contribution, question and answers) - 3 marks
Github Upload – 2 marks
Total 10 marks – converted to 5 marks.
Instructions:
• Team members: 3 maximum (each team should do a unique project).
• Team members’ contributions will be measured and graded accordingly.
Unit 1 List of Topics
• Introduction to AI, AI techniques
• Problem solving with AI
• AI models; data acquisition and learning aspects in AI
• Problem solving: problem-solving process, formulating problems
• Problem types and characteristics
• Problem space and search
• Intelligent agents
• Rationality and rational agents with performance measures
• Flexibility and intelligent agents
• Task environment and its properties
• Types of agents
• Other aspects of agents
• Constraint satisfaction problems (CSP)
• Cryptarithmetic puzzles
• CSP as a search problem: constraints and representation
• CSP: backtracking and the role of heuristics
• CSP: forward checking and constraint propagation
• CSP: intelligent backtracking
Artificial Intelligence - Introduction
WHAT IS AI?
Artificial Intelligence - Introduction
What if we gave the following abilities to a conventional car?
• Learning
• Reasoning
• Problem-solving
• Perception
• Speech recognition
• Language understanding
Artificial Intelligence - Introduction
General Definition
Artificial Intelligence (AI) is a field of computer science that aims to create machines
or systems that can perform tasks that typically require human intelligence. These
tasks include learning, reasoning, problem-solving, perception, speech
recognition, and language understanding.
Views of Artificial Intelligence
“[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)
“The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
“Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
Views of Artificial Intelligence
1. Thinking Humanly
2. Acting Humanly
3. Thinking Rationally
4. Acting Rationally
The Four Categories of AI
1. Acting humanly: The Turing Test approach
2. Thinking humanly: The cognitive modeling approach
3. Thinking rationally: The “laws of thought” approach
4. Acting rationally: The rational agent approach
1. Acting humanly: The Turing Test approach
Alan Turing
The Turing Test, introduced by Alan Turing, evaluates a machine's ability to
convincingly imitate human responses in a conversation. If a human judge cannot
reliably distinguish between machine and human based on responses alone, the
machine is considered to have passed the test, indicating a high level of artificial
intelligence. The test assesses natural language understanding and conversational
behavior.
Activity: Automated Reasoning

Patient ID  Age  Gender  Symptoms                                   Medical History      Diagnosis
1           35   Female  Fever, Cough, Fatigue                      None                 Common Cold
2           28   Male    Headache, Nausea, Vomiting                 Migraine             Migraine
3           40   Female  Fever, Cough, Shortness of Breath          None                 COVID-19 (Corona)
4           55   Male    Abdominal Pain, Nausea                     High Cholesterol     Gallbladder Stone
5           60   Male    Chest Pain, Shortness of Breath, Sweating  High Blood Pressure  Heart Attack
1. Acting humanly: The Turing Test approach
To pass the test, the computer would need the following capabilities:
1. Natural language processing, to communicate successfully in English;
2. Knowledge representation, to store what it knows or hears;
3. Automated reasoning, to use the stored information to answer questions and to draw new conclusions;
4. Machine learning, to adapt to new circumstances and to detect and extrapolate patterns.
2. Thinking humanly: The cognitive modeling approach
Understanding Human Thought
Verification through Behaviour
Cognitive Science vs. AI
Understanding Human Thought
To claim that a program thinks like a human, we need insight into how humans think. Three approaches are introspection, psychological experiments, and brain imaging. The goal is to develop a precise theory of the mind, express it as a computer program, and verify the program’s behavior against human actions.
Cognitive Science vs. AI
Cognitive science involves experimental investigation of actual humans or animals, whereas AI research typically assumes only a computer for experimentation. Both fields continue to influence each other, particularly in areas like computer vision, where neurophysiological evidence informs computational models.
3. Thinking rationally: The “laws of thought” approach
Syllogisms
"Socrates is a man; all men are mortal; therefore, Socrates is mortal."
3. Thinking rationally: The “laws of thought” approach
Syllogisms
You should always follow your dreams.
I just spent the entire day dreaming about
winning the lottery.
I'm practically a millionaire by bedtime.
3. Thinking rationally: The “laws of thought” approach
Aristotle's Contribution: Aristotle, a Greek philosopher, made early attempts to codify "right thinking" through syllogisms, providing patterns for irrefutable reasoning processes.
Logicist Aspirations in AI: The logicist tradition in AI aims to build intelligent systems based on programs capable of solving problems using logical reasoning.
Development of Logic: Logicians in the 19th century created a precise notation for expressing statements about all kinds of objects and their relations, extending beyond the realm of numbers. By 1965, programs existed that could, in principle, solve any solvable problem described in logical notation, marking the emergence of the logicist tradition in artificial intelligence.
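The classic syllogism above ("Socrates is a man; all men are mortal") can be sketched as a tiny forward-chaining inference. This is a hypothetical illustration, not a full logic engine; the fact and rule encodings are invented for this sketch:

```python
# Minimal forward-chaining sketch of the syllogism:
# "Socrates is a man; all men are mortal; therefore Socrates is mortal."

facts = {("Man", "Socrates")}          # known fact: Man(Socrates)
rules = [(("Man",), "Mortal")]          # rule: Man(x) => Mortal(x)

changed = True
while changed:                          # apply rules until no new facts appear
    changed = False
    for premises, conclusion in rules:
        for pred, subject in list(facts):
            if pred in premises and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("Mortal", "Socrates") in facts)  # True: Mortal(Socrates) was derived
```

The loop keeps firing rules until a fixed point is reached, which is the essence of the "irrefutable reasoning processes" the syllogism patterns describe.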
Choose the appropriate synonyms for the word "rational":
1. Logical
2. Reasonable
3. Intelligent
4. Coherent
5. Judicious
6. Sensible
7. Wise
8. Sound
9. Enlightened
10. Reasoned
11. All of the above
4. Acting rationally: The rational agent approach
Definition of Agent:
An agent is something that acts, and in the context of computer science, agents are
expected to operate autonomously, perceive their environment, persist over time,
adapt to change, and create and pursue goals.
Rational Agent:
A rational agent is one that acts to achieve the best outcome or, in uncertain situations, the
best expected outcome.
Rational agents go beyond correct inferences, as correct inference is just one
mechanism for achieving rationality. Rationality also involves situations where no
provably correct action exists.
Abstractly, an agent is a function from percept histories to actions:
f : P* → A
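As an illustrative sketch of this mapping, consider a reflex vacuum-style agent. The function name, percepts, and actions below are invented for illustration, not taken from the course's lab code:

```python
# Sketch: an agent as a function from a percept history P* to an action A.
# Percepts are (location, status) pairs for a two-square vacuum world.

def vacuum_agent(percept_history):
    """Map the full percept history to an action (here, only the latest percept matters)."""
    location, status = percept_history[-1]
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

history = [("A", "Dirty")]
print(vacuum_agent(history))        # Suck
history.append(("A", "Clean"))
print(vacuum_agent(history))        # Right
```

Note that although the agent receives the whole history, this simple reflex design inspects only the last percept; a more capable agent could condition its choice on the entire sequence.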
Rational agents go beyond correct inferences | Why?
Exploration-Exploitation Tradeoffs:
Exploration involves trying out new actions or
strategies to gather more information about the
environment. Exploitation involves choosing the
known, optimal actions based on current knowledge.
In the context of rationality, exploration is akin to
considering situations where no provably correct
action is known. It involves taking risks to discover
potentially better strategies.
4. Acting rationally: The rational agent approach

Advantages of the Rational-Agent Approach:
1. It is more general than the "laws of thought" approach, encompassing various mechanisms for achieving rationality.
2. It is more amenable to scientific development than approaches based on human behavior or thought, because the standard of rationality is well defined and can be unpacked to generate provably rational agent designs.

Complexity of Rationality:
1. Achieving perfect rationality (always doing the right thing) is not feasible in complicated environments because of the high computational demands.
2. Constructing rational agents involves addressing a wide variety of issues, despite the apparent simplicity of the problem statement.
Match the Following

AI Approaches:
1. Acting humanly: The Turing Test approach
2. Thinking humanly: The cognitive modeling approach
3. Thinking rationally: The “laws of thought” approach
4. Acting rationally: The rational agent approach

Abilities:
a. Addresses the AI system's ability to act in a way that maximizes the achievement of its goals.
b. Assesses the ability of the AI system to exhibit behavior indistinguishable from that of a human.
c. Evaluates the AI system's internal processes to match human-like thinking and reasoning.
d. Focuses on the AI system's ability to follow logical principles and make accurate inferences.
Match the Following | Answer

1. Acting humanly: The Turing Test approach → b. Assesses the ability of the AI system to exhibit behavior indistinguishable from that of a human.
2. Thinking humanly: The cognitive modeling approach → c. Evaluates the AI system's internal processes to match human-like thinking and reasoning.
3. Thinking rationally: The “laws of thought” approach → d. Focuses on the AI system's ability to follow logical principles and make accurate inferences.
4. Acting rationally: The rational agent approach → a. Addresses the AI system's ability to act in a way that maximizes the achievement of its goals.
Which AI approach focuses on making decisions to achieve the best outcome
or expected outcome?
a. Acting humanly: The Turing Test approach
b. Thinking humanly: The cognitive modeling approach
c. Thinking rationally: The "laws of thought" approach
d. Acting rationally: The rational agent approach
Answer: d. Acting rationally: The rational agent approach
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE
THE HISTORY OF ARTIFICIAL INTELLIGENCE
The State of the Art: AI
1. Natural Language Processing (NLP)
2. Computer Vision
3. Machine Learning and Deep Learning
4. Autonomous Vehicles
5. Robotics
6. Healthcare Applications
7. Generative Adversarial Networks (GANs)
8. Quantum Computing
9. Explainable AI (XAI)
10. Edge Computing for AI
Advantages of Artificial Intelligence
1. More powerful and more useful computers
2. New and improved interfaces
3. Solving new problems
4. Better handling of information
5. Relief from information overload
6. Conversion of information into knowledge

Disadvantages
1. Increased costs
2. Difficult software development: slow and expensive
3. Few experienced programmers
4. Few practical products have reached the market as yet
AI Techniques
Challenges and Diversity in Artificial Intelligence: A Spectrum of Applications
and Complexities
1. Diverse applications: AI is applied across various domains, such as medicine and manufacturing.
2. Day-to-day problems: AI addresses everyday challenges, contributing to daily-life applications.
3. Identification and authentication: AI plays a vital role in solving security-related identification and authentication problems.
4. Classification problems: decision-making systems often involve classification challenges tackled by AI algorithms.
5. Interdependent and cross-domain issues: AI deals with challenges that span multiple domains, including the complex nature of cyber-physical systems.
6. Computational complexity: AI problems often require significant computational resources due to their complexity.
7. Explainability of AI techniques: the transparency and interpretability of AI techniques, especially in deep learning, pose challenges.
8. Ethical considerations: AI introduces ethical concerns, such as bias in algorithms and privacy implications.
Introduction to AI Techniques
Definition: AI techniques are methods that facilitate knowledge acquisition. The
primary AI techniques include Search, Use of Knowledge, and Abstraction.
Search Technique
Definition: Search offers a problem-solving framework for situations lacking a direct
approach. It explores various action sequences until a solution is found.
Advantages:
• Effective for problem-solving.
• Requires coding of applicable operators.
Disadvantages:
• Impractical for large search spaces.
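The search framework described above, exploring action sequences until a solution is found, can be sketched as a breadth-first exploration. This is a minimal hypothetical example (function names and the toy state space are invented), not the course's lab code:

```python
from collections import deque

def breadth_first_search(start, goal_test, successors):
    """Explore action sequences level by level until a goal state is found.

    successors(state) -> iterable of (action, next_state) pairs.
    Returns the list of actions reaching the first goal found, or None.
    """
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, nxt in successors(state):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Toy state space: reach 5 from 0 using the operators +1 and +2.
succ = lambda n: [("+1", n + 1), ("+2", n + 2)]
print(breadth_first_search(0, lambda n: n == 5, succ))  # ['+1', '+2', '+2']
```

The `explored` set is what keeps the method practical on small problems; for large search spaces the frontier still grows too fast, which is exactly the disadvantage noted above.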
Use of Knowledge
Definition: Involves solving complex problems by manipulating object structures.
Knowledge representation in AI techniques is crucial.
Representation Guidelines:
• Captures generalization.
• Understandable to human preparers.
• Easily adjustable and adaptable.
• Aids in error correction and adapting to changes.
• Supports diverse situations.
Abstraction Technique
Definition: Abstraction separates important features from unimportant ones, aiding
in simplifying processes.
Broad Categories of AI Problems
1. Structured
2. Unstructured
3. Linear
4. Non-linear
Structured AI Problems:
Scenario: Managing a database of employee records in a large organization. The goal
is to create an AI system that can efficiently retrieve and update employee
information based on structured data fields like name, employee ID, department, and
salary.
Unstructured AI Problems:
Scenario: Analyzing and summarizing unstructured text data from customer reviews on social
media platforms. The AI system needs to identify sentiments, key topics, and extract relevant
information from unstructured textual data to provide insights for business improvement.
Linear AI Problems:
Scenario: Predicting the future sales of a retail store based on historical sales data.
The goal is to build a linear regression model that correlates factors such as
advertising spend, promotions, and seasonality to forecast the sales in a
straightforward, linear fashion.
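The linear scenario above can be sketched as an ordinary least-squares fit. The (advertising spend, sales) numbers below are invented for illustration; a real forecast would use historical store data and more factors:

```python
# Hypothetical sketch of a linear AI problem: ordinary least squares
# relating advertising spend to sales.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the squared prediction error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

spend = [1.0, 2.0, 3.0, 4.0]   # advertising spend (made-up units)
sales = [3.0, 5.0, 7.0, 9.0]   # observed sales (exactly linear here)
m, b = fit_line(spend, sales)
print(m, b)                    # 2.0 1.0
print(m * 5.0 + b)             # forecast for spend = 5.0 → 11.0
```

The non-linear scenario that follows is precisely the case where no such single slope-and-intercept relationship exists, which is why neural networks are brought in.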
Non-Linear AI Problems:
Scenario: Recognizing and classifying objects in images for autonomous vehicles. This
involves solving a non-linear problem where the relationships between pixels and object
features are complex and may require advanced techniques such as neural networks to capture
the intricate patterns in the data.
Problem Solving with AI
Let us play a game: tic-tac-toe
https://www.google.com/fbx?fbx=tic_tac_toe
How can Artificial Intelligence (AI) be leveraged to improve and optimize the
management of wet waste?
Well-structured problems and ill-structured problems are two categories used to
describe the nature of problems that can be addressed with artificial intelligence
(AI) and other problem-solving approaches. Here's an explanation of each:
Well-Structured Problems:
•Definition: Well-structured problems have clearly defined goals, a finite set of
possible solutions, and a well-understood set of rules or procedures to reach those
solutions.
•Characteristics:
• The problem space is well-defined.
• Clear criteria exist for determining when the problem is solved.
• Solutions can be derived through a systematic and logical process.
• Examples include mathematical equations, puzzles, and optimization problems.
•AI Approach: Well-structured problems are often suitable for algorithmic solutions,
and AI systems can be designed to follow predefined rules and procedures to find
optimal or near-optimal solutions.
Ill-Structured Problems:
•Definition: Ill-structured problems lack clear goals, have a wide range of possible
solutions, and may not have well-defined problem spaces or solution procedures.
These problems often involve ambiguity and uncertainty.
•Characteristics:
• The problem space is not well-defined, and the problem may evolve over time.
• Multiple possible solutions exist, and the criteria for a "good" solution may be
subjective.
• There may be uncertainty or incomplete information.
• Examples include real-world issues like designing a new product, formulating
business strategies, or addressing complex social problems.
•AI Approach: Ill-structured problems are more challenging for traditional AI
approaches, as they require handling ambiguity, adapting to changing conditions, and
incorporating human judgment. AI methods for these problems often involve machine
learning, natural language processing, and other techniques that can handle
complexity and uncertainty.
Scenario 1: You are tasked with solving a Sudoku puzzle where each row, column,
and 3x3 grid must contain all of the digits from 1 to 9. What type of problem is this?
A. Well-Structured B. Ill-Structured
Answer: A. Well-Structured
Scenario 2: A group of students is given the challenge of developing a marketing strategy for
a new product launch. The team needs to consider target audiences, advertising channels, and
budget allocation. What type of problem is this?
A. Well-Structured B. Ill-Structured
Answer: B. Ill-Structured
Scenario 3: Your assignment is to address the challenge of improving public
transportation in a growing city, taking into account factors like traffic patterns,
environmental impact, and user satisfaction. What type of problem is this? A. Well-
Structured B. Ill-Structured
Answer: B. Ill-Structured
Summary
AI Perspectives
Intelligence as Rational Action
Historical Foundations
Interdisciplinary Contributions
Cycles of Progress and Challenges
Concept | Mind Mapping
1. Start with a Central Idea
2. Create Main Branches
3. Add Subtopics and Details
4. Connect Ideas Visually
5. Enhance with Keywords and Colors
AI Models
1. Semiotic models refer to theoretical frameworks or systems that analyze and
interpret signs and symbols within various communication processes. These
models are based on semiotics, the study of signs and symbols and their meanings
in different contexts. Semiotic models help understand how signs convey meaning,
emphasizing the role of language, culture, and context in communication.
1. Icon
   Definition: a sign that bears a resemblance or similarity to the thing it represents.
   Example: a stylized drawing or diagram of an eye can be an icon representing the concept of vision.
2. Index
   Definition: a sign that has a direct connection or correlation with the object it signifies; the relationship is based on cause and effect or proximity.
   Example: smoke is an indexical sign of fire, because smoke is typically caused by the presence of fire.
3. Symbol
   Definition: a sign where the relationship between the signifier (the symbol itself) and the signified (the concept it represents) is based on convention or agreement.
   Example: words such as "tree" or "love" are symbols whose connection to the concept is established through cultural or linguistic conventions.
2. Statistical AI models, often referred to as statistical machine learning models, are a
subset of artificial intelligence (AI) that relies on statistical methods and algorithms to
learn from data and make predictions or decisions. These models use statistical
techniques to analyze patterns, relationships, and probabilities within datasets. Here's
an overview of key aspects:
Common statistical AI models include linear regression, logistic regression, decision
trees, support vector machines, and various types of neural networks. Each model
has its strengths and is suitable for different types of tasks, such as regression,
classification, or clustering.
Data acquisition and learning aspects in AI
Various AI-related topics on data acquisition and machine learning:
• Knowledge discovery – Data mining and machine learning
• Computational learning theory (COLT)
• Neural and evolutionary computation
• Intelligent agents and multi-agent systems
• Multi-perspective integrated intelligence
Data Acquisition:
Definition: Data acquisition refers to the process of collecting and gathering
raw information or data from various sources. In the context of AI, high-
quality and relevant data is crucial for training machine learning models. The
effectiveness of AI systems heavily depends on the quality, quantity, and diversity
of the data used for training.
Knowledge Discovery: Data Mining and Machine Learning
Data Mining: Knowledge discovery process that involves extracting patterns and valuable
insights from large datasets using various techniques such as clustering, association rule
mining, and anomaly detection.
Machine Learning: A subset of artificial intelligence focused on developing algorithms
and models that enable systems to learn patterns from data, make predictions, and improve
performance without explicit programming.
Computational Learning Theory (COLT):
COLT is a branch of theoretical computer science dedicated to studying the
mathematical foundations of machine learning. It explores questions about the
efficiency, feasibility, and limitations of learning algorithms, providing
theoretical insights into their behavior. This field helps establish rigorous
frameworks for understanding how machines can learn from data and generalize to
new, unseen instances.
Neural and Evolutionary Computation:
Neural computation focuses on artificial neural networks for learning and
information processing, while evolutionary computation utilizes principles of
biological evolution, such as genetic algorithms, for optimization and problem-
solving, combining to enhance adaptive and intelligent systems.
Intelligent Agents and Multi-Agent Systems:
Intelligent agents are autonomous entities capable of perceiving, reasoning,
and acting to achieve goals, while multi-agent systems involve multiple
interacting intelligent agents collaborating or competing to solve complex
problems, simulating dynamic real-world scenarios.
Multi-Perspective Integrated Intelligence:
Multi-perspective integrated intelligence refers to a holistic approach that combines
insights from diverse viewpoints and sources, fostering a comprehensive understanding to
inform decision-making processes and enhance overall system intelligence. This concept
aims to synergize varied perspectives, promoting a more nuanced and effective approach
to problem-solving in complex environments.
Problem Solving
Problem Solving with AI: the “Formulate, Search, Execute” design for an agent
Touring in Arad, Romania:
The agent in Arad, Romania, aims to reach Bucharest promptly, simplifying its
decision problem due to a nonrefundable flight the next day. Goal formulation,
driven by the current situation, guides the agent's decision-making in this complex
touring scenario.
Problem Solving with AI: Components of a Problem

A problem can be defined formally by five components:
• The initial state that the agent starts in.
  Example: the initial state for our agent in Romania might be described as In(Arad).
• A description of the possible actions (operators) available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s; we say that each of these actions is applicable in s.
  Example: from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}.
• A description of what each action does, formally called the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term successor for any state reachable from a given state by a single action.
  Example: RESULT(In(Arad), Go(Zerind)) = In(Zerind).
• Together, the initial state, actions, and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions.
  Example: the map of Romania can be interpreted as a state-space graph if we view each road as standing for two driving actions, one in each direction.
• The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them.
  Example: the agent’s goal in Romania is the singleton set {In(Bucharest)}.
• A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. The step cost of taking action a in state s to reach state y is denoted by c(s, a, y).
  Example: for the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometres. We assume that the cost of a path is the sum of the costs of the individual actions along the path. The step costs for Romania are shown in the figure as route distances; we assume that step costs are nonnegative.

A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
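The five components can be sketched directly in code for the Romania route-finding example. The road map below is only a partial sketch of the usual Romania graph, and the uniform-cost search is one illustrative way to find an optimal solution, not the course's prescribed lab implementation:

```python
import heapq

# Partial road map of Romania (step costs in km).
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Timisoara": {"Arad": 118},
    "Sibiu": {"Arad": 140, "Oradea": 151, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

initial_state = "Arad"                    # component 1: initial state

def actions(s):                           # component 2: ACTIONS(s)
    return list(ROADS[s])

def result(s, a):                         # component 3: transition model RESULT(s, a)
    return a                              # here the action "Go(city)" is named by the city

def goal_test(s):                         # component 4: goal test {In(Bucharest)}
    return s == "Bucharest"

def step_cost(s, a, y):                   # component 5: path cost via c(s, a, y)
    return ROADS[s][y]

def uniform_cost_search():
    """Return (path_cost, state_sequence) of a lowest-cost solution."""
    frontier = [(0, initial_state, [initial_state])]
    best = {}
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return cost, path
        if best.get(state, float("inf")) <= cost:
            continue                      # already expanded more cheaply
        best[state] = cost
        for a in actions(state):
            y = result(state, a)
            heapq.heappush(frontier, (cost + step_cost(state, a, y), y, path + [y]))
    return None

print(uniform_cost_search())
# (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

The optimal path runs through Rimnicu Vilcea and Pitesti (140 + 80 + 97 + 101 = 418 km), beating the seemingly direct route through Fagaras (450 km), which is exactly why solution quality must be measured by path cost rather than by number of steps.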
Formulating Problems
• Problem formulation: choosing a relevant set of states and a feasible set of operators for moving from one state to another.
• Search: the process of imagining sequences of operators (actions) applied to the initial state, to see which state reaches the goal state.

Toy Problems vs. Real-World Problems
• A toy problem is intended to illustrate or exercise various problem-solving methods. It can be given a concise, exact description.
• A real-world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations.
Problem types and characteristics
1. Deterministic or observable (single-state)
2. Non-observable (multiple-state)
3. Non-deterministic or partially observable
4. Unknown state space
1. Deterministic or observable(Single-state problems)
• Each state is fully observable and it goes to one definite state after any action.
• Here , the goal state is reachable in one single action or sequence of actions.
• Deterministic environments ignore uncertainty.
• Predictable Outcome: Deterministic problems have outcomes that can be
precisely predicted given the initial conditions and a set of defined rules or
equations.
• No Randomness: These problems do not involve randomness or uncertainty in
their solutions.
Example: a vacuum cleaner with a dirt sensor.
2. Non-observable (multiple-state) / conformant problems
• The problem-solving agent does not have any information about the state.
• A solution may or may not be reached.
• The system may exist in multiple states, and some or all of these states may not be
directly visible or measurable.
• Observing the system may not provide complete information about its internal
states.
Conformant Problems:
• Conformant problems refer to scenarios where the agent or system must act based
on incomplete information about the current state.
Example Problem: Autonomous vehicle navigation in an urban environment.
Characteristics:
Non-Observable: Unable to directly observe internal states of other entities.
Multiple-State: Varied road conditions, traffic patterns, and pedestrian behavior.
Conformant Problem: Decisions based on partial sensor information, conforming to traffic
rules and safety standards.
Challenge: Navigating safely through dynamic urban conditions with limited information.
Solution: Develop an advanced autonomous driving system incorporating sensor data analysis
for decision-making while adhering to safety regulations.
3. Non-deterministic (partially observable) problems
• Outcome influenced by inherent randomness or uncertainty.
• System's internal states not entirely visible or measurable.
• Decisions based on incomplete information about the system's state.
• Inherent unpredictability introduces variability in outcomes.
• Lack of full observability requires strategies that account for uncertainty.
• Variability in potential outcomes adds complexity to the problem.
• Decision-making becomes challenging due to uncertainties.
• Examples include games of chance, stochastic processes, and scenarios with
inherent randomness.
•Scenario: Creating a lottery prediction agent.
•Outcome Influence: Lottery numbers determined by random draw,
introducing inherent uncertainty.
•Visibility of States: Internal lottery draws states not observable or
predictable in advance.
•Incomplete Information: Limited details about drawn numbers available
before predictions.
•Unpredictability: Precise lottery numbers cannot be determined due to the
random draw.
•Potential Variability: Numerous possible combinations contribute to
outcome variability.
•Decision-Making Challenges: Developing an agent to adapt strategies for
the unpredictable nature of lottery draws.
•Example: Lottery prediction software using historical data or patterns to
forecast winning numbers.
4. Unknown state-space problems
• Typically exploration problems: the states and the impact of actions are not known.
• Undefined State Space: Complete set of possible states not explicitly known.
• Lack of Enumeration: Challenge in listing or specifying all potential states.
• Uncharted Territories: Existence of undiscovered or unexplored states or
conditions.
• Uncertain Boundaries: Boundaries or limits of the state space are unclear.
Scenario: An autonomous exploration robot navigating an unknown planet's
surface.
Undefined State Space: The robot encounters diverse and unanticipated terrains,
and the complete set of possible environmental states is not explicitly known.
Lack of Enumeration: It's challenging to list or predict all potential conditions
the robot might encounter during exploration.
Uncharted Territories: Certain areas of the planet may be unexplored,
introducing unforeseen environmental states.
Uncertain Boundaries: The boundaries or limits of the planet's varied landscapes
are unclear.
Problem Analysis and Representation
1. Compactness
2. Utility
3. Soundness
4. Completeness
5. Generality
6. Transparency
Tower of Hanoi
Initial State:
Rod A: [3, 2, 1] (3 disks stacked in decreasing size, largest at the bottom).
Goal State:
Rod B: [3, 2, 1] or Rod C: [3, 2, 1] (the same stack, largest at the bottom, on
another rod).
Operators:
Move the top disk of one rod onto the top of another rod, never placing a larger
disk on a smaller one.
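The recursive strategy implied by these operators can be sketched in Python; the rod names and the move-list representation are illustrative choices, not part of the slide:

```python
def hanoi(n, source, target, auxiliary, moves):
    """Append the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, auxiliary, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))                  # move the largest disk
    hanoi(n - 1, auxiliary, target, source, moves)  # restack the smaller disks

moves = []
hanoi(3, "A", "B", "C", moves)
print(len(moves), moves)   # 7 moves, the optimal count 2^3 - 1
```

For n disks the recursion always produces 2^n - 1 moves, which is known to be optimal.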
Water Jug Problem
•Problem: Water Jug Puzzle in artificial intelligence.
•Jugs: Two jugs with capacities 'x' and 'y' liters, and a water source.
•Objective: Measure a specific 'z' liters of water without volume markings.
•Initial State: Both jugs are empty.
•Goal State: One jug contains exactly 'z' liters.
•Operations: Filling, emptying, and pouring between jugs.
•Challenge: Test of problem-solving and state space search skills.
•Approach: Find an efficient sequence of steps to achieve the desired water
measurement.
•Example: Starting with empty jugs, reach a state where one jug holds 'z' liters.
Input: X = 4, Y = 3, Z = 2
•Step 1: Fill the 4-liter jug completely with water. (Current state: (4, 0))
•Step 2: Pour water from the 4-liter jug into the 3-liter jug until it is full, leaving 1 liter
in the 4-liter jug. (Current state: (1, 3))
•Step 3: Empty the 3-liter jug. (Current state: (1, 0))
•Step 4: Pour the 1 liter from the 4-liter jug into the 3-liter jug. The 4-liter jug is now
empty, and the 3-liter jug holds 1 liter. (Current state: (0, 1))
•Step 5: Fill the 4-liter jug completely again. (Current state: (4, 1))
•Step 6: Pour water from the 4-liter jug into the 3-liter jug until it is full. This leaves
2 liters in the 4-liter jug, which is the required quantity. (Current state: (2, 3))
The sequence of steps achieves the goal of obtaining 2 liters of water using the
given 4-liter and 3-liter jugs.
Input: X = 3, Y = 5, Z = 4
•Step 1: Fill the 5-liter jug to its maximum capacity. (Current state: (0, 5))
•Step 2: Transfer 3 liters from the 5-liter jug to the 3-liter jug. (Current state: (3, 2))
•Step 3: Empty the 3-liter jug. (Current state: (0, 2))
•Step 4: Transfer the 2 liters from the 5-liter jug to the 3-liter jug. (Current state: (2, 0))
•Step 5: Fill the 5-liter jug to its maximum capacity. (Current state: (2, 5))
•Step 6: Pour water from the 5-liter jug into the 3-liter jug until it is full. This leaves
4 liters in the 5-liter jug, which is the required quantity. (Current state: (3, 4))
•The sequence of steps achieves the goal of obtaining 4 liters of water using the
given 5-liter and 3-liter jugs.
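The state-space search behind both walkthroughs can be sketched as a breadth-first search over (jug1, jug2) states; the function name and state encoding are illustrative. Note that BFS returns a shortest sequence, which may differ from (or be shorter than) a hand-written walkthrough:

```python
from collections import deque

def water_jug(x, y, z):
    """BFS over (jug_x, jug_y) states; returns the shortest path of states
    from (0, 0) to any state holding exactly z liters, or None."""
    start = (0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == z or b == z:                      # goal test
            path, s = [], (a, b)
            while s is not None:                  # reconstruct the path
                path.append(s)
                s = parent[s]
            return path[::-1]
        successors = [
            (x, b), (a, y),                       # fill either jug
            (0, b), (a, 0),                       # empty either jug
            (a - min(a, y - b), b + min(a, y - b)),  # pour jug_x -> jug_y
            (a + min(b, x - a), b - min(b, x - a)),  # pour jug_y -> jug_x
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None

print(water_jug(3, 5, 4))   # shortest state sequence reaching 4 liters
```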
Problem Space
Definition:
•The problem space refers to the entire set of possible states, actions, and
transitions that a problem-solving agent explores while trying to find a
solution to a problem.
•It encompasses all the possible configurations or arrangements of
elements that the agent can encounter during the problem-solving process.
Key Components:
•States: Represent different configurations or situations within the problem.
•Actions/Operators: Define the permissible moves or transformations between
states.
•Transitions: Specify how the system moves from one state to another based
on actions.
Example:
•In the Tower of Hanoi problem, the problem space includes all possible arrangements
of disks on rods, actions like moving a disk, and transitions between different states.
Search:
Definition:
•Search, in the context of artificial intelligence, refers to the systematic exploration of
the problem space in order to find a solution.
•The goal of a search algorithm is to navigate through the problem space efficiently,
moving from the initial state to the goal state by applying a sequence of actions.
Key Components:
1.Start State: The initial configuration or situation from which the search begins.
2.Goal State: The desired configuration or situation that the agent aims to achieve.
3.Search Strategy: The method or algorithm used to explore the problem space.
Common strategies include depth-first search, breadth-first search, and heuristic-based
search.
Example:
•In the Water Jug Problem, the search involves exploring different states of water levels in the jugs, applying
operations like filling, emptying, and pouring, and moving towards the goal state where a specific water
measurement is achieved.
Connection between Problem Space and Search:
Problem space defines the scope of exploration: It outlines the set of states and
actions available for consideration during the search process.
Search algorithms navigate the problem space: They determine the sequence of
actions that lead from the initial state to the goal state, efficiently exploring the
problem space to find a solution.
Overall Process:
Initialization: Start with the initial state of the problem.
Search: Use a search algorithm to explore the problem space, moving from state
to state.
Goal Test: Check if the current state matches the goal state.
Solution: If a goal state is reached, the sequence of actions leading to it represents the
solution.
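The Initialization / Search / Goal Test / Solution loop above can be sketched as a generic breadth-first search parameterized by the problem's components; all names here are illustrative:

```python
from collections import deque

def solve(initial, actions, result, goal_test):
    """Generic breadth-first exploration of a problem space.

    actions(s)   -> iterable of actions applicable in state s
    result(s, a) -> state reached by applying action a in state s
    goal_test(s) -> True if s is a goal state
    Returns a shortest action sequence from initial to a goal, or None.
    """
    frontier = deque([(initial, [])])   # Initialization: start state, empty plan
    explored = {initial}
    while frontier:                     # Search: explore the problem space
        state, plan = frontier.popleft()
        if goal_test(state):            # Goal test
            return plan                 # Solution: actions leading to the goal
        for a in actions(state):
            nxt = result(state, a)
            if nxt not in explored:
                explored.add(nxt)
                frontier.append((nxt, plan + [a]))
    return None

# Toy usage: reach 5 from 0 using the actions "+1" and "*2".
plan = solve(0, lambda s: ["+1", "*2"],
             lambda s, a: s + 1 if a == "+1" else s * 2,
             lambda s: s == 5)
print(plan)
```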
INTELLIGENT AGENTS
Definition of an Agent:
An agent is a system capable of perceiving its environment through sensors and
acting upon that environment through actuators.
Components of a Human Agent:
Human agents have sensory organs like eyes and ears as sensors, and hands, legs,
and vocal tract as actuators.
Components of a Robotic Agent:
Robotic agents may have sensors like cameras and infrared range finders, and
actuators like various motors.
Components of a Software Agent:
Software agents receive sensory inputs such as keystrokes, file contents, and
network packets. They act on the environment by displaying on the screen, writing
files, and sending network packets.
Percept:
The term "percept" refers to the agent's perceptual inputs at any given moment.
Tabulating Agent Function:
The agent function, describing an agent's behavior, can be tabulated.
However, for most agents, this table would be very large, potentially infinite
unless a bound is placed on the length of percept sequences considered.
Agent Function vs. Agent Program:
The agent function, an abstract mathematical description, is distinct from the
agent program, which is a concrete implementation running within a physical
system. The agent function is externally characterized, while the agent
program is the internal implementation.
Illustration with Vacuum-Cleaner World:
The concepts are illustrated with a simple example—the vacuum-cleaner world. This
world has two locations, squares A and B, and a vacuum agent that perceives its
location and dirt presence. The agent function can be defined abstractly, and an agent
program provides a concrete implementation. The table in Figure 2.3 and the
program in Figure 2.8 demonstrate this illustration.
Good Behavior: The Concept of Rationality
Rational Agent: A rational agent is one that does the right thing; its behavior is
evaluated by the consequences of its actions, measured by the desirability of the
resulting environment states.
Performance Measure: Performance measures should be aligned with the outcomes
we actually want in the environment, rather than prescribing how the agent should
behave; designing them carelessly raises pitfalls and philosophical questions.
Rationality
Consider an autonomous delivery robot tasked with delivering packages in a city. The
rationality of the robot at any given time depends on the following factors:
1. Performance Measure: successful and timely delivery of packages.
2. Agent's Prior Knowledge: knowledge of the city layout, traffic patterns, and delivery locations.
3. Actions the Agent Can Perform: navigating through city streets, avoiding obstacles, and delivering packages.
4. Agent's Percept Sequence: real-time data from sensors about the current environment, traffic conditions, and package status.
What defines the rationality of an agent at any given time?
A. Agent's preferences
B. Performance measure
C. Random actions
D. Current mood
Answer: B. Performance measure
Which factor influences the rational decisions of an agent based on its past
experiences?
A. Agent's current goals
B. Actions performed
C. Agent's percept sequence
D. Environmental constraints
Answer: C. Agent's percept sequence
In the context of a rational agent, what is the significance of the agent's
prior knowledge?
A. Shapes the agent's preferences
B. Guides rational decisions
C. Determines random actions
D. Defines the performance measure
Answer: B. Guides rational decisions
What distinguishes the agent function from the agent program in artificial
intelligence?
A. Both are abstract concepts
B. Agent function is external, while agent program is internal
C. Agent program is abstract, while agent function is concrete
D. They are interchangeable terms
Answer: B. Agent function is external, while
agent program is internal
When designing performance measures for agents, what is the recommended
approach?
A. Define measures based on agent's opinions
B. Prescribe predefined behaviors
C. Align measures with desired environmental outcomes
D. Avoid considering philosophical implications
Answer: C. Align measures with desired
environmental outcomes
Specifying the task environment
PEAS Description for Automated Taxi Driver:
The PEAS (Performance, Environment, Actuators, Sensors) description for an automated taxi driver
involves specifying the task environment, including the performance measure, the environment itself, and
the actuators and sensors of the agent.
Complexity of the Taxi Driver Task:
Unlike the simple vacuum world, the task environment for an automated taxi driver is highly complex and
open-ended. The driving task involves a multitude of novel circumstances, making it an intricate problem
for discussion and design.
Fully Observable vs. Partially Observable:
Fully observable environments provide complete state information relevant to
the agent's actions, while partially observable environments lack certain aspects,
often due to sensor limitations or inaccuracies.
Single Agent vs. Multiagent:
The distinction between single-agent and multiagent environments involves
considering whether entities interact as agents based on their behavior, with
chess being competitive and taxi-driving being partially cooperative and
competitive.
Deterministic vs. Stochastic:
Deterministic environments have actions leading to completely determined
outcomes, while stochastic environments introduce uncertainty, often due to
factors like sensor noise or incomplete observability.
Episodic vs. Sequential:
Episodic environments involve independent atomic episodes, where each
decision is based on the current situation, whereas sequential environments
require considering long-term consequences of actions.
Static vs. Dynamic:
Static environments remain unchanged while the agent deliberates, simplifying
decision-making, whereas dynamic environments continuously evolve, demanding
continuous decision updates, as seen in taxi driving.
Discrete vs. Continuous:
The discrete/continuous distinction applies to the state, time, and actions of the
environment, with chess being discrete, taxi driving being continuous, and the choice
between them influencing problem complexity.
Known vs. Unknown:
The distinction between known and unknown environments refers to the agent's
knowledge about the environment's laws, with known environments having
predefined outcomes for all actions, and unknown environments requiring the agent to
learn through experience.
THE STRUCTURE OF AGENTS
Agent Program Role:
The agent program processes the current percept from sensors and determines the
immediate action for actuators.
Architecture Collaboration:
The agent program's effectiveness relies on collaboration with the architecture,
ensuring alignment with the system's physical capabilities and characteristics.
In the following part of this section, we present five fundamental types of agent
programs that encapsulate the principles foundational to nearly all intelligent
systems:
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
5. Learning agents
1. Simple Reflex Agents:
• Make decisions based solely on the current percept, ignoring percept
history.
• Example: The vacuum agent's decision depends only on the current
location and dirt presence.
Vacuum Agent as a Simple Reflex Agent:
• The program is far smaller than the corresponding table.
• Decision-making is specific to the current percept, reducing the number of
entries from 4^T (one per percept sequence of length T) to just 4 rules.
Condition–Action Rules:
• Connections guiding decision-making in simple reflex agents.
• Example: "if car-in-front-is-braking then initiate-braking."
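A minimal sketch of the vacuum agent's condition–action rules in Python; the percept encoding as a (location, status) pair is an assumption for illustration:

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules for the two-square vacuum world."""
    location, status = percept
    if status == "Dirty":     # rule: if dirty then suck
        return "Suck"
    if location == "A":       # rule: if in A and clean then move right
        return "Right"
    return "Left"             # rule: if in B and clean then move left

print(reflex_vacuum_agent(("A", "Dirty")))   # Suck
```

Four rules suffice because the decision depends only on the current percept, never on the percept history.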
Human Reflexes and Learned Responses:
• Humans exhibit similar condition–action connections.
• Learned responses (e.g., driving) and innate reflexes (e.g.,
blinking) are part of human behavior.
General-Purpose Interpreter:
• Allows the creation of interpreters for condition–action rules.
• Enhances flexibility by adapting to various task environments.
Structure of General Program:
• Schematic representation of a general program with condition–
action rules.
• Provides a framework for connecting percepts to actions in
different environments.
2. Model-Based Reflex Agents:
• Handle partial observability by maintaining an internal state
reflecting unobserved aspects.
• Effective for tasks like driving, where the agent needs to account
for unseen elements.
Internal State in Model-Based Agents:
• The agent's internal state depends on the percept history and
captures unobserved parts of the current state.
• Examples include keeping track of other cars, camera frames, and
key locations.
Updating Internal State:
• Requires knowledge of how the world evolves independently and how the agent's
actions impact the world.
• Knowledge about "how the world works" is implemented as a model of the world.
Model of the World:
• The agent's knowledge about how the world functions, encoded in Boolean
circuits or scientific theories.
• Crucial for predicting outcomes and understanding the consequences of the agent's
actions.
Model-Based Agent Structure:
• Depicts how the current percept is combined with the internal state to update the
description of the current state.
• Utilizes the agent's model of the world to make informed decisions.
UPDATE-STATE Function:
• Key function in the model-based agent program responsible for updating the
internal state.
• Integrates current percepts with the internal state based on the agent's model of
the world.
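A minimal sketch of this structure, assuming a dictionary-based internal state and a caller-supplied transition model; the class name, rule encoding, and world model here are illustrative, not a standard API:

```python
class ModelBasedReflexAgent:
    """Sketch of a model-based reflex agent with an UPDATE-STATE step."""

    def __init__(self, transition_model, rules):
        self.state = {}            # internal description of the current world state
        self.last_action = None
        self.transition_model = transition_model  # how the world evolves
        self.rules = rules         # list of (condition, action) pairs

    def update_state(self, percept):
        # Predict how the world evolved since the last action,
        # then fold the new percept into the internal state.
        self.state = self.transition_model(self.state, self.last_action)
        self.state.update(percept)

    def __call__(self, percept):
        self.update_state(percept)
        for condition, action in self.rules:   # first matching rule wins
            if condition(self.state):
                self.last_action = action
                return action

# Vacuum-world rules: suck if dirty, otherwise move to the other square.
rules = [
    (lambda s: s.get("status") == "Dirty", "Suck"),
    (lambda s: s.get("location") == "A", "Right"),
    (lambda s: True, "Left"),
]
agent = ModelBasedReflexAgent(lambda s, a: dict(s), rules)  # trivial static world model
print(agent({"location": "A", "status": "Dirty"}))   # Suck
```

With a richer transition model, `update_state` would also predict unobserved aspects (e.g. the position of a car that left the camera frame) instead of merely copying the old state.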
3.Goal-Based Agents: Goal-based agents make decisions based on both current
environmental states and predefined goals. This dual consideration allows for
more flexible and forward-thinking decision-making compared to reflex agents.
Example Scenario: At a road junction, a goal-based taxi must decide whether to turn
left, right, or go straight based on its intended destination. This decision-making
process demonstrates how goals shape actions in complex environments.
Components of Goal-Based Agents: Goal-based agents consist of two main
components: goal information, defining desirable situations, and a model
representing knowledge about the environment. The integration of these elements
guides action selection.
Action Selection: Decision-making in goal-based agents involves combining
current state, goal details, and the model, with action complexity varying from
immediate satisfaction to intricate sequences.
Consideration of the Future: Unlike reflex agents, goal-based agents explicitly consider
the future, anticipating consequences and evaluating actions for alignment with the
ultimate goal.
Flexibility of Goal-Based Agents: Goal-based agents exhibit flexibility through explicit
knowledge representation, allowing easy modifications influenced by environmental
changes, such as rain.
Adaptability to Changes: The adaptability of goal-based agents shines when facing
changes, like altering destinations, contrasting with reflex agents that may require extensive
rule rewriting for similar adaptations.
Comparison with Reflex Agents: Goal-based agents fundamentally differ from
reflex agents by integrating explicit knowledge and future considerations into
decision-making, while reflex agents rely solely on condition-action rules.
Illustration of Flexibility: An illustration of flexibility involves the agent
adjusting behavior in response to rain, updating knowledge of brake effectiveness,
highlighting the adaptability of goal-based systems.
4. Introduction to Utility-Based Agents: Utility-based agents go beyond goals,
incorporating a utility function to assess the desirability of different actions based
on factors like speed, safety, reliability, or cost.
Utility Function and Performance Measure: The utility function internalizes the
performance measure, offering a quantitative measure (utility) rather than a
binary one (happy/unhappy). Alignment with the external performance measure
ensures rational decision-making.
Handling Conflicting Goals: In situations with conflicting goals, the utility function
helps specify appropriate trade-offs, allowing the agent to make rational decisions.
Multiple Uncertain Goals: When facing multiple uncertain goals, utility provides a way to
weigh the likelihood of success against the importance of each goal, enabling rational
decision-making.
Decision-Making under Uncertainty: Utility-based agents, dealing with partial
observability and stochasticity, maximize expected utility by choosing actions
that yield the highest average utility, given probabilities and utilities of outcomes.
Rationality Constraint: Rational utility-based agents follow a local constraint by
maximizing expected utility, turning the global definition of rationality into a
program expressing rational-agent designs.
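Expected-utility maximization can be sketched as follows; the probabilities and utility values in the taxi example are invented purely for illustration:

```python
def best_action(actions, outcomes, utility):
    """Choose the action that maximizes expected utility.

    outcomes(a) -> list of (probability, outcome) pairs for action a
    utility(o)  -> numeric desirability of outcome o
    """
    def expected_utility(a):
        return sum(p * utility(o) for p, o in outcomes(a))
    return max(actions, key=expected_utility)

# Toy version of the taxi scenario: a risky shortcut vs. a slow but safe road.
outcomes = {
    "shortcut":  [(0.7, "on_time"), (0.3, "accident")],
    "main_road": [(1.0, "late")],
}
utility = {"on_time": 10, "late": 4, "accident": -100}.__getitem__
print(best_action(outcomes, outcomes.__getitem__, utility))  # main_road
```

Here EU(shortcut) = 0.7·10 + 0.3·(−100) = −23 while EU(main_road) = 4, so the agent trades efficiency for safety, exactly the utility-driven decision described in the scenario above.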
Scenario: Autonomous Vehicle Decision-Making in Traffic
Context: Imagine an autonomous vehicle navigating through a busy urban
environment. The vehicle's primary goal is to reach its destination safely and
efficiently. However, it encounters a challenging scenario where it needs to make
a split-second decision due to unexpected circumstances.
Goal-Oriented Decision:
•Situation: The vehicle is on a tight
schedule, heading towards its
destination with passengers.
•Challenge: A traffic jam has caused
a significant delay, jeopardizing the
timely arrival at the destination.
•Goal-Driven Decision: The
autonomous vehicle decides to take a
shortcut through a narrow side street,
aiming to bypass the traffic and meet
the time deadline.
Utility-Oriented Decision:
•Situation: While navigating through
the side street, the autonomous vehicle
encounters a complex intersection with
pedestrians and cyclists.
•Challenge: Choosing the optimal path
poses a dilemma between maximizing
utility (efficiency) and ensuring safety.
•Utility-Driven Decision: The vehicle,
prioritizing safety over efficiency,
cautiously yields to pedestrians and
cyclists, sacrificing some efficiency for
a safer route.
5. Learning Agents
Learning in AI:
Learning is a preferred method for creating advanced AI systems, allowing
them to adapt to unknown environments and become more competent over
time.
Components of a Learning Agent:
A learning agent consists of four main components: the Performance Element,
the Learning Element, the Critic, and the Problem Generator.
Performance Element and Learning Element:
The Performance Element selects external actions based on percepts, while the
Learning Element makes improvements based on feedback from the Critic.
Role of the Critic:
The Critic provides feedback
on how well the agent is
performing with respect to a
fixed performance standard, as
percepts alone do not indicate
the agent's success.
Problem Generator's Role in
Exploration:
The Problem Generator
suggests exploratory actions to
the agent, allowing it to gather
new and informative
experiences. This exploration
is crucial for discovering better
long-term actions.
What is the primary limitation of simple reflex agents?
•A. Lack of flexibility.
•B. Limited memory.
•C. Inability to perceive the environment.
•D. Dependence on goals.
Answer: A. Lack of flexibility.
What is the decision-making process of a simple reflex agent based on?
A. Goals and utility.
B. Current percept only.
C. Learning from past experiences.
D. Future predictions.
Answer: B. Current percept only.
How does a simple reflex agent respond to changes in the environment?
•A. By adapting its rules.
•B. By considering long-term goals.
•C. By utilizing a utility function.
•D. By learning from experience.
Answer: A. By adapting its rules.
What distinguishes model-based reflex agents from simple reflex agents?
•A. They lack a model.
•B. They rely solely on goals.
•C. They incorporate a model of the environment.
•D. They prioritize utility.
Answer: C. They incorporate a model of the environment.
How does a model-based reflex agent handle changes in the
environment?
•A. It adapts its rules.
•B. It learns from experience.
•C. It consults its internal model.
•D. It ignores environmental changes.
Answer: C. It consults its internal model.
What is the primary focus of a goal-based agent?
•A. Reacting to percepts.
•B. Learning from experience.
•C. Achieving specific objectives.
•D. Modeling the environment.
Answer: C. Achieving specific objectives.
How does a goal-based agent make decisions at a road junction?
•A. Based on immediate percepts.
•B. By learning from past experiences.
•C. Considering the ultimate destination.
•D. Relying on a utility function.
Answer: C. Considering the ultimate destination.
How does utility-based decision-making differ from goal-based decision-making?
•A. Goals involve explicit representation.
•B. Utility focuses on achieving objectives.
•C. Utility quantifies desirability.
•D. Goals consider immediate percepts.
Answer: C. Utility quantifies desirability.
In what situations might utility-based agents outperform goal-based agents?
A. In highly dynamic environments.
B. When there are conflicting goals.
C. When explicit knowledge is crucial.
D. In situations with clear, predefined goals.
Answer: B. When there are conflicting goals.
Which component of a learning agent is responsible for suggesting exploratory
actions?
•A. Performance element.
•B. Learning element.
•C. Critic.
•D. Problem generator.
Answer: D. Problem generator.
CONSTRAINT SATISFACTION PROBLEMS
Constraint Satisfaction Problems (CSPs) are a class of problems where the goal is to
find a consistent assignment of values to a set of variables, each subject to constraints.
Components of a CSP:
•Variables: These represent the entities for which we need to find values. Each variable
has a domain, which is the set of possible values it can take.
•Domains: The domains are the allowed values for each variable. The goal is to find a
combination of values that satisfies all constraints.
•Constraints: These are restrictions on the possible combinations of values for the
variables. Constraints define the relationships between variables and limit the valid
assignments.
A constraint satisfaction problem consists of three components, X,D, and C:
1. X is a set of variables, {X1, . . . ,Xn}.
2. D is a set of domains, {D1, . . . ,Dn}, one for each variable.
3. C is a set of constraints that specify allowable combinations of values.
For example, if X1 and X2 both have the domain {A, B}, then the constraint saying the
two variables must have different values can be written as ⟨(X1, X2), {(A, B), (B, A)}⟩
or, equivalently, as ⟨(X1, X2), X1 ≠ X2⟩.
1. State Space and Solution:
• State: defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}.
• Complete Assignment: an assignment in which every variable has a value.
• Partial Assignment: an assignment with values for only some of the variables.
2. Consistent and Legal Assignment:
• Consistent Assignment: one that does not violate any constraints.
• Legal Assignment: another term for a consistent assignment.
3. Solution to a CSP:
• Solution: a consistent, complete assignment.
• Partial Solution: a consistent, partial assignment.
4. CSP Solving Process:
• Define the state space by considering all possible assignments.
• Explore the space systematically, considering constraints.
• A solution is a consistent, complete assignment that satisfies all constraints.
Example problem: Map coloring
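The Australia map-coloring example (shown as a figure on this slide) can be formulated as a CSP and solved with plain backtracking; this is a minimal sketch, not an optimized solver:

```python
# Australia map-coloring as a CSP: variables are regions, domains are colors,
# and each pair of neighbors carries a binary "different colors" constraint.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "SA": ["WA", "NT", "Q", "NSW", "V"],
    "Q": ["NT", "SA", "NSW"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def backtrack(assignment):
    """Extend a consistent partial assignment to a complete one, or return None."""
    if len(assignment) == len(NEIGHBORS):
        return assignment                      # complete and consistent: a solution
    var = next(v for v in NEIGHBORS if v not in assignment)
    for color in COLORS:
        if all(assignment.get(n) != color for n in NEIGHBORS[var]):  # consistent?
            result = backtrack({**assignment, var: color})
            if result:
                return result
    return None                                # no color works: backtrack

solution = backtrack({})
print(solution)
```

Every intermediate call holds a consistent partial assignment, which is exactly what lets a CSP solver prune large parts of the search space the moment a constraint is violated.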
6. Advantages of Formulating as CSP:
• CSPs offer a natural representation for a wide range of problems.
• Utilizing a CSP-solving system is often more convenient than designing custom solutions
using other search techniques.
7. Constraint Propagation Efficiency:
• CSP solvers can quickly eliminate large portions of the search space.
• Constraint propagation, as seen in the Australia problem, efficiently reduces the number
of assignments to be considered.
8. State-Space Search vs. CSPs:
• In regular state-space search, the question is binary: Is this specific state a goal or not?
• With CSPs, upon identifying a partial assignment that is not a solution, it allows for
listing significant steps, providing a more detailed and constructive approach.
9. Advantages of CSPs in Search:
• CSPs can list significant steps, making it more expressive than binary goal-checking in
state-space search.
• CSPs allow for a more nuanced exploration of possibilities and reasons for failure.
1. Introduction to Node Consistency:
•Node consistency is a property of variables in Constraint Satisfaction Problems
(CSPs).
•In a CSP, variables have domains (possible values they can take) and constraints
that define allowable combinations of values.
2. Unary Constraints:
•A unary constraint is a constraint on a single variable.
•Node consistency focuses on ensuring that all values in a variable's domain satisfy its
unary constraints.
3. Example - Australia Map-Coloring Problem:
•Consider the Australia map-coloring problem.
•Assume South Australians dislike the color green, and we have a variable SA with the
initial domain {red, green, blue}.
4. Making a Variable Node-Consistent:
•To make the variable SA node-consistent, we eliminate values from its domain that
violate its unary constraint.
•In this case, we eliminate 'green' because South Australians dislike it, resulting in the
reduced domain {red, blue} for SA.
5. Node-Consistent Variable:
•A variable is node-consistent if all values in its domain satisfy its unary constraints.
•In the example, SA is now node-consistent with the domain {red, blue}.
6. Node-Consistent Network:
•A network is node-consistent if every variable in the network is node-consistent.
•Achieving node consistency involves iteratively enforcing unary constraints on each
variable in the CSP.
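The node-consistency procedure above can be sketched in a few lines of Python. This is an illustrative sketch, not library code; the function and variable names are assumptions of the example:

```python
# Enforce node consistency: drop every domain value that violates a
# variable's unary constraint.

def make_node_consistent(domains, unary_constraints):
    """domains: {var: set of values}; unary_constraints: {var: predicate}."""
    for var, allowed in unary_constraints.items():
        domains[var] = {v for v in domains[var] if allowed(v)}
    return domains

domains = {"SA": {"red", "green", "blue"}}
unary = {"SA": lambda colour: colour != "green"}   # South Australians dislike green

result = make_node_consistent(domains, unary)
print(result["SA"])   # {'red', 'blue'} (set order may vary)
```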
1. Introduction to Arc Consistency:
•Arc consistency is another property of variables in Constraint Satisfaction Problems (CSPs).
•In arc consistency, the focus is on ensuring that every value in a variable's domain satisfies its
binary constraints with other variables.
2. Binary Constraints:
•Binary constraints involve relationships between two variables.
•A variable is arc-consistent if every value in its domain satisfies the variable's binary constraints.
3. Example - Constraint Y = X^2:
•Consider the constraint Y = X^2 where the domain of both X and Y is the set of digits 0-9.
•The allowed value pairs (X, Y) are {(0, 0), (1, 1), (2, 4), (3, 9)}.
4. Making a Variable Arc-Consistent:
•To make variable X arc-consistent with respect to Y, reduce X's domain to the values for which
some value in Y's domain satisfies the binary constraint: X's domain becomes {0, 1, 2, 3}.
•Similarly, making Y arc-consistent with respect to X reduces Y's domain to {0, 1, 4, 9}.
5. Arc-Consistent Network:
•A network is arc-consistent if every variable is arc-consistent with respect to every other
variable it shares a constraint with.
•Achieving arc consistency involves enforcing the binary constraints between variables in the CSP.
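A single revision step, as used in the Y = X^2 example, might look like this in Python. This is a sketch; `revise` follows the helper name used in most AC-3 presentations and is an assumption, not a library call:

```python
# Make X arc-consistent with respect to Y: keep only the values of X that
# have at least one supporting value in Y's domain.

def revise(dom_x, dom_y, constraint):
    return {x for x in dom_x if any(constraint(x, y) for y in dom_y)}

digits = set(range(10))
square = lambda x, y: y == x * x          # the binary constraint Y = X^2

dom_x = revise(digits, digits, square)                    # X -> {0, 1, 2, 3}
dom_y = revise(digits, dom_x, lambda y, x: x * x == y)    # Y -> {0, 1, 4, 9}
print(sorted(dom_x), sorted(dom_y))
```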
AC-3 Algorithm:
1. Initialization:
•Begin with an initial CSP where each variable has an associated domain of possible
values, and there are binary constraints between some pairs of variables.
2. Queue Initialization:
•Create a queue containing all the arcs in the CSP initially. An arc is a pair of variables
connected by a constraint.
3. Processing Arcs:
•While the queue is not empty, pop an arc (Xi, Xj) from the queue.
4. Arc Consistency (Revision):
•For each value in the domain of Xi, check whether at least one value in the domain of Xj
satisfies the binary constraint on (Xi, Xj); if not, remove the inconsistent value from Xi's domain.
•If Xi's domain was reduced, add every arc (Xk, Xi) back to the queue, where Xk is a
neighbour of Xi and Xk ≠ Xj, since those arcs may no longer be consistent.
5. Repeat until Queue is Empty:
•Continue processing arcs until the queue becomes empty.
6. Result:
•The AC-3 algorithm updates the domains of variables based on the binary constraints,
ensuring that each variable becomes arc-consistent with respect to every other variable.
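The steps above can be collected into a compact AC-3 sketch. The constraint representation here (a dict mapping each directed arc to an allowed-pair predicate) is an assumption of this sketch, not a standard API:

```python
from collections import deque

def ac3(domains, constraints):
    """domains: {var: set}; constraints: {(Xi, Xj): predicate(vi, vj)}."""
    queue = deque(constraints)                 # step 2: all arcs initially
    incoming = {}                              # incoming[Xi] = {Xk : arc (Xk, Xi) exists}
    for (xk, xi) in constraints:
        incoming.setdefault(xi, set()).add(xk)
    while queue:                               # steps 3-5
        xi, xj = queue.popleft()
        pred = constraints[(xi, xj)]
        revised = {v for v in domains[xi] if any(pred(v, w) for w in domains[xj])}
        if revised != domains[xi]:
            domains[xi] = revised
            if not revised:
                return False                   # a domain was wiped out: no solution
            for xk in incoming.get(xi, set()) - {xj}:
                queue.append((xk, xi))         # re-examine affected arcs
    return True

domains = {"X": set(range(10)), "Y": set(range(10))}
constraints = {("X", "Y"): lambda x, y: y == x * x,
               ("Y", "X"): lambda y, x: y == x * x}
ok = ac3(domains, constraints)
print(ok, sorted(domains["X"]), sorted(domains["Y"]))
```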
Example with Inconsistent Values:
Consider a CSP with arcs {(A, B), (B, C), (C, D)} and initial domains {1, 2, 3} for A, B, C,
and D. Suppose the constraint on (C, D) allows only the pairs {(1, 3), (2, 3)}, while the
constraints on (A, B) and (B, C) allow every pair. Applying AC-3:
1.Initialization:
 1. Domains {1, 2, 3} for A, B, C, and D.
2.Queue Initialization:
 1. Queue: {(A, B), (B, C), (C, D), (D, C)} (each constraint contributes an arc in both
 directions).
3.Processing Arcs:
 1. Process (A, B): every value of A has support in B, so no changes.
 2. Process (B, C): no changes.
 3. Process (C, D): value 3 in the domain of C has no matching value in the domain of D,
 so 3 is removed, leaving C = {1, 2}.
 4. Process (D, C): values 1 and 2 in the domain of D have no support in C, so both are
 removed, leaving D = {3}.
4.Repeat until Queue is Empty:
 1. No further revisions are triggered, and the queue empties.
5.Result:
 1. Final domains: A = B = {1, 2, 3}, C = {1, 2}, D = {3}.
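The (C, D) revision can be checked mechanically. This sketch represents the constraint as its set of allowed pairs, an assumption made for illustration:

```python
# One-arc revision: keep values of the first variable that have support
# in the second variable's domain under the allowed-pairs constraint.

def revise(dom_a, dom_b, allowed_pairs):
    return {a for a in dom_a if any((a, b) in allowed_pairs for b in dom_b)}

allowed = {(1, 3), (2, 3)}                # the constraint on (C, D)
dom_c, dom_d = {1, 2, 3}, {1, 2, 3}

dom_c = revise(dom_c, dom_d, allowed)     # 3 has no support: C -> {1, 2}
dom_d = {d for d in dom_d if any((c, d) in allowed for c in dom_c)}  # D -> {3}
print(dom_c, dom_d)
```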
1. Introduction to Path Consistency:
•Path consistency is a stronger notion of consistency compared to arc consistency.
•While arc consistency focuses on tightening binary constraints using arcs, path
consistency looks at triples of variables to infer implicit constraints.
2. Motivation - Limitation of Arc Consistency:
•In certain CSPs, arc consistency may not provide enough inference. For example, in
the Australia map-coloring problem with only two colors allowed, arc consistency
cannot find a solution.
3. Path Consistency Definition:
•A two-variable set {Xi, Xj} is path-consistent with respect to a third variable Xm if,
for every assignment {Xi = a, Xj = b} consistent with the constraints on {Xi, Xj},
there is an assignment to Xm that satisfies the constraints on {Xi, Xm} and {Xm,
Xj}.
4. Path Consistency Example - Australia Map Coloring:
•Consider the Australia map-coloring problem with only two colors allowed (red and blue).
•Make the set {WA, SA} path-consistent with respect to NT.
•Enumerate consistent assignments: {WA = red, SA = blue} and {WA = blue, SA = red}.
•Analyze the impact on NT: In both assignments, NT cannot be red or blue (conflicts with
either WA or SA).
•Path consistency eliminates both assignments, leaving {WA, SA} with no consistent
assignment and proving the two-colour problem unsolvable.
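The elimination argument above can be verified by brute enumeration. This is a sketch: with two colours, every {WA, SA} pair that satisfies WA ≠ SA leaves NT with no legal value:

```python
from itertools import product

colours = ["red", "blue"]

# Assignments to {WA, SA} consistent with the constraint WA != SA.
consistent_pairs = [(wa, sa) for wa, sa in product(colours, repeat=2) if wa != sa]

# Keep only pairs that can be extended to NT (NT must differ from both).
path_consistent = [(wa, sa) for wa, sa in consistent_pairs
                   if any(nt not in (wa, sa) for nt in colours)]
print(consistent_pairs, path_consistent)   # two pairs survive WA != SA; none extend to NT
```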
Cryptarithmetic puzzles
1.Nature of the Problem:
1. Cryptarithmetic puzzles involve assigning digits to letters in a mathematical
expression.
2. The puzzle typically consists of an arithmetic equation where digits are replaced by
letters.
2.Constraint Satisfaction Problem (CSP):
1. Cryptarithmetic puzzles can be formulated as CSPs, where the goal is to find a valid
assignment of digits to letters that satisfies specific constraints.
1.Goal:
1. The objective is to find a consistent assignment of digits to letters that makes
the arithmetic equation true.
2.Constraints:
1. Each letter represents a single digit from 0 to 9; the leading letter of a
number cannot be 0.
2. No two letters can represent the same digit (Alldiff constraint).
3.Alldiff Constraint:
1. The Alldiff constraint ensures that all variables (letters) must have different
values.
2. Prevents the repetition of digits within the set of variables.
Example:
•An example of a cryptarithmetic puzzle is:
  S E N D
+ M O R E
---------
M O N E Y
Global Constraint in CSPs:
•A global constraint involves an arbitrary number of variables in a Constraint Satisfaction
Problem (CSP).
•The term "global" doesn't necessarily mean involving all variables but refers to constraints that
go beyond unary or binary constraints.
Alldiff Constraint:
•The Alldiff constraint (All Different) is a common global constraint used in CSPs.
•It ensures that all variables involved in the constraint must have different values.
Application in Cryptarithmetic Puzzles:
• In cryptarithmetic puzzles, the Alldiff constraint is applied to the set of variables representing the letters
{S, E, N, D, M, O, R, Y}.
• It ensures that each letter represents a different digit, and no two letters can have the same digit.
Example Illustration:
• The Alldiff constraint is illustrated with an example where values are assigned to variables: S=9, E=5,
N=6, D=7, M=1, O=0, R=8, Y=2.
• This assignment satisfies the Alldiff constraint as each digit appears only once.
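Checking Alldiff for a candidate assignment reduces to a set-size comparison. The names below are illustrative, not from a library:

```python
def alldiff(values):
    """True when no value repeats, i.e. the Alldiff constraint holds."""
    values = list(values)
    return len(values) == len(set(values))

assignment = {"S": 9, "E": 5, "N": 6, "D": 7, "M": 1, "O": 0, "R": 8, "Y": 2}
print(alldiff(assignment.values()))   # True: every letter has a distinct digit
```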
Solving cryptarithmetic puzzles involves exploring various digit assignments to the letters
to satisfy the given constraints. The following is one possible solution for the example
cryptarithmetic puzzle:
How to Solve this Puzzle?
  9 5 6 7
+ 1 0 8 5
---------
1 0 6 5 2
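One way to explore these digit assignments is plain brute force over permutations — not the efficient CSP techniques described later, but a direct check of the constraints (distinct digits, no leading zeros). This is a sketch for SEND + MORE = MONEY:

```python
from itertools import permutations

# Assign digits to S, E, N, D, M, O, R, Y; Alldiff holds automatically
# because a permutation never repeats a digit.
solution = None
for s, e, n, d, m, o, r, y in permutations(range(10), 8):
    if s == 0 or m == 0:                      # leading letters cannot be zero
        continue
    send = 1000 * s + 100 * e + 10 * n + d
    more = 1000 * m + 100 * o + 10 * r + e
    money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
    if send + more == money:
        solution = (send, more, money)
        break
print(solution)   # (9567, 1085, 10652)
```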
T W O
+ T W O
---------
F O U R
  7 3 4
+ 7 3 4
---------
1 4 6 8
T – 7 W – 3 O – 4 F – 1 U – 6 R – 8
C R O S S
+ R O A D S
-------------
D A N G E R
9 6 2 3 3
+ 6 2 5 1 3
---------------
1 5 8 7 4 6
C – 9 R – 6 O – 2 S – 3 A – 5 D – 1 N – 8 G – 7 E – 4
Representing CSP as a Search Problem
Room Painting CSP Problem Statement
You are tasked with painting the rooms in a house, and certain constraints must be adhered to in order to create a
harmonious color scheme. Each room needs to be assigned one of three colors: Red, Green, or Blue. The goal is to find a
valid assignment of colors to rooms while satisfying the following constraints:
Adjacent Rooms Constraint:
No two adjacent rooms can be painted with the same
color.
Master Bedroom Constraint:
The color of the Master Bedroom must be different
from the other bedrooms.
Living Room and Guest Room Constraint:
The Living Room and the Guest Room must be
painted with distinct colors.
Neutral Rooms Constraint:
Washrooms, the Store Room, and the Study Room
must be painted with a neutral color (e.g., White),
outside the three-color palette.
Living Room --- Bedroom 1
| |
| |
Washroom 1 Bedroom 2
| |
| |
Study Room --- Master Bedroom
| |
| |
Washroom 2 Guest Room
|
Store Room
Initial State:
S0 = {(LivingRoom, ?), (Bedroom1, ?), (Bedroom2, ?), (MasterBedroom, ?), (Washroom1, ?),
(Washroom2, ?), (StoreRoom, ?), (StudyRoom, ?), (GuestRoom, ?)}
Step 4:
Explore MasterBedroom colors:
S4 = {(LivingRoom, Red), (Bedroom1, Green), (Bedroom2, Red), (MasterBedroom, Blue), …}
Step 5:
Explore other rooms based on constraints:
S5 = {(LivingRoom, Red), (Bedroom1, Green), (Bedroom2, Red), (MasterBedroom, Blue),
(Washroom1, ?), …}
1.Explore State Space: Begin by choosing a color for the first room and exploring
possible color assignments for subsequent rooms.
2.Backtracking: If a constraint is violated, backtrack to the previous decision point
and explore alternative color assignments.
3.Continue Exploration: Iterate through the state space, making choices and
backtracking as needed, until a complete assignment satisfying all constraints is
found or all possibilities are exhausted.
4.Complexity Warning: Recognize the potential for extensive exploration due to the
complexity of the problem.
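The explore-and-backtrack loop above can be sketched for the adjacency constraint alone (the master-bedroom, guest-room, and neutral-room constraints are omitted for brevity, and only a chain of four rooms from the diagram is used):

```python
def backtrack(assignment, variables, domains, neighbours):
    """Depth-first assignment with backtracking on adjacency conflicts."""
    if len(assignment) == len(variables):
        return assignment                        # every room coloured
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbours[var]):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, neighbours)
            if result is not None:
                return result
            del assignment[var]                  # undo and try the next colour
    return None                                  # dead end: caller backtracks

rooms = ["LivingRoom", "Bedroom1", "Bedroom2", "MasterBedroom"]
neighbours = {"LivingRoom": ["Bedroom1"],
              "Bedroom1": ["LivingRoom", "Bedroom2"],
              "Bedroom2": ["Bedroom1", "MasterBedroom"],
              "MasterBedroom": ["Bedroom2"]}
domains = {room: ["Red", "Green", "Blue"] for room in rooms}

solution = backtrack({}, rooms, domains, neighbours)
print(solution)   # adjacent rooms always receive different colours
```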
BACKTRACKING SEARCH FOR CSPS
1. Backtracking Search:
•Backtracking is a depth-first search algorithm used to solve CSPs.
•States represent partial assignments, and actions involve adding a variable with a value to
the assignment.
•The naive approach has a high branching factor, resulting in an impractical search tree.
2. Commutativity in CSPs:
•CSPs exhibit commutativity, meaning the order of applying actions does not affect the
outcome.
•Utilizing commutativity, backtracking search focuses on one variable at a time instead of
exploring all variable assignments simultaneously.
3. Backtracking Algorithm:
•Backtracking search chooses one variable at a time, backtracking when a variable has no
legal values left.
•It maintains a single representation of a state and alters that representation instead of
creating new ones.
4. Improving Performance without Heuristics:
•Unlike uninformed search algorithms, backtracking can efficiently solve CSPs
without domain-specific heuristic functions.
•Backtracking-Search works without the need for a domain-specific initial state,
action function, transition model, or goal test.
5. Variable and Value Ordering:
•Variable Selection (Select-Unassigned-Variable):
• Commonly, Minimum Remaining Values (MRV) heuristic is used, choosing
the variable with the fewest legal values.
• Another approach is the Degree Heuristic, selecting the variable involved in
the most constraints on other unassigned variables.
•Value Ordering (Order-Domain-Values):
• Least Constraining Value Heuristic can be effective, preferring the value that
rules out the fewest choices for neighboring variables.
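The MRV heuristic can be written as a small selection function. This is a sketch: ties fall to whichever variable `min` sees first rather than to the degree heuristic:

```python
def select_unassigned_variable(domains, assignment):
    """Minimum Remaining Values: pick the variable with the fewest legal values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

domains = {"SA": {"blue"}, "WA": {"red", "green"}, "NT": {"red", "green", "blue"}}
print(select_unassigned_variable(domains, {}))   # SA -- only one value remains
```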
6. Interleaving Search and Inference:
•Forward Checking:
• After assigning a variable, forward checking establishes arc consistency by deleting
inconsistent values from neighboring variables.
• Detects some inconsistencies during the search, leading to more efficient pruning of
the search tree.
•Maintaining Arc Consistency (MAC):
• MAC is an algorithm that calls AC-3 after a variable assignment, making only the
relevant arcs arc-consistent.
• More powerful than forward checking as it can detect inconsistencies that forward
checking may miss.
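Forward checking after an assignment can be sketched for inequality (map-colouring-style) constraints. The function shape is an illustration of the idea, not a library API:

```python
def forward_check(var, value, domains, neighbours, assignment):
    """Delete `value` from each unassigned neighbour's domain.
    Returns False on a domain wipeout (early failure), True otherwise."""
    for n in neighbours.get(var, []):
        if n not in assignment:
            domains[n] = domains[n] - {value}
            if not domains[n]:
                return False                 # wipeout: prune this branch
    return True

domains = {"WA": {"red"}, "NT": {"red", "blue"}, "SA": {"red", "blue"}}
neighbours = {"WA": ["NT", "SA"]}

ok = forward_check("WA", "red", domains, neighbours, {"WA": "red"})
print(ok, domains["NT"], domains["SA"])   # True {'blue'} {'blue'}
```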
Intelligent Backtracking and Conflict-Directed Backjumping:
1. Chronological Backtracking:
•Traditional backtracking involves revisiting the most recent decision point when a
branch fails.
•In chronological backtracking, the search proceeds in a fixed variable order.
2. Conflict Sets:
•A conflict set is a set of assignments that are in conflict with some value for a
variable.
•In the example: {Q=red, NSW=green, V=blue} is the conflict set for SA.
3. Backjumping:
•Backjumping involves backtracking to the most recent assignment in the conflict set
when a failure occurs.
•For example, if SA has a conflict set, backjump to the most recent assignment in that
set (e.g., V) and try a new value.
4. Relation to Forward Checking:
•Forward checking can naturally provide conflict sets during the search.
•Whenever a value is deleted from a variable's domain, add the assignment to the
conflict set.
5. Redundancy of Simple Backjumping:
•Backjumping occurs when every value in a domain conflicts with the current
assignment.
•Forward checking detects and prevents such scenarios, making simple backjumping
redundant.
6. Conflict-Directed Backjumping:
•Despite redundancy, the idea of backjumping based on reasons for failure is valuable.
•In conflict-directed backjumping, the conflict set for a variable is not only the
immediate conflicting variables but also those that caused subsequent variables to
have no consistent solution.
•It goes beyond the simple conflict set and considers the deeper set of preceding
variables that led to the failure of a branch.

More Related Content

Similar to 18CSC305J – Artificial Intelligence - UNIT 1.pptx

Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning AgentsPsychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning AgentsWilly Marroquin (WillyDevNET)
 
CS 561a: Introduction to Artificial Intelligence
CS 561a: Introduction to Artificial IntelligenceCS 561a: Introduction to Artificial Intelligence
CS 561a: Introduction to Artificial Intelligencebutest
 
Statistical Analysis of Results in Music Information Retrieval: Why and How
Statistical Analysis of Results in Music Information Retrieval: Why and HowStatistical Analysis of Results in Music Information Retrieval: Why and How
Statistical Analysis of Results in Music Information Retrieval: Why and HowJulián Urbano
 
Essential concepts for machine learning
Essential concepts for machine learning Essential concepts for machine learning
Essential concepts for machine learning pyingkodi maran
 
A step towards machine learning at accionlabs
A step towards machine learning at accionlabsA step towards machine learning at accionlabs
A step towards machine learning at accionlabsChetan Khatri
 
IT201 Basics of Intelligent Systems-1.pptx
IT201 Basics of Intelligent Systems-1.pptxIT201 Basics of Intelligent Systems-1.pptx
IT201 Basics of Intelligent Systems-1.pptxshashankbhadouria4
 
Artificial Intelligence power point presentation document
Artificial Intelligence power point presentation documentArtificial Intelligence power point presentation document
Artificial Intelligence power point presentation documentDavid Raj Kanthi
 
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...Publish or Perish: Questioning the Impact of Our Research on the Software Dev...
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...Margaret-Anne Storey
 
Computational Thinking in the Workforce and Next Generation Science Standards...
Computational Thinking in the Workforce and Next Generation Science Standards...Computational Thinking in the Workforce and Next Generation Science Standards...
Computational Thinking in the Workforce and Next Generation Science Standards...Josh Sheldon
 
Week1- Introduction.pptx
Week1- Introduction.pptxWeek1- Introduction.pptx
Week1- Introduction.pptxfahmi324663
 
Inverse Modeling for Cognitive Science "in the Wild"
Inverse Modeling for Cognitive Science "in the Wild"Inverse Modeling for Cognitive Science "in the Wild"
Inverse Modeling for Cognitive Science "in the Wild"Aalto University
 
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptDaliaMagdy12
 
Artificial Intelligence and The Complexity
Artificial Intelligence and The ComplexityArtificial Intelligence and The Complexity
Artificial Intelligence and The ComplexityHendri Karisma
 
Data science in 10 steps
Data science in 10 stepsData science in 10 steps
Data science in 10 stepsQuantUniversity
 

Similar to 18CSC305J – Artificial Intelligence - UNIT 1.pptx (20)

Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning AgentsPsychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
Psychlab: A Psychology Laboratory for Deep Reinforcement Learning Agents
 
n01.ppt
n01.pptn01.ppt
n01.ppt
 
CS 561a: Introduction to Artificial Intelligence
CS 561a: Introduction to Artificial IntelligenceCS 561a: Introduction to Artificial Intelligence
CS 561a: Introduction to Artificial Intelligence
 
Statistical Analysis of Results in Music Information Retrieval: Why and How
Statistical Analysis of Results in Music Information Retrieval: Why and HowStatistical Analysis of Results in Music Information Retrieval: Why and How
Statistical Analysis of Results in Music Information Retrieval: Why and How
 
Essential concepts for machine learning
Essential concepts for machine learning Essential concepts for machine learning
Essential concepts for machine learning
 
A step towards machine learning at accionlabs
A step towards machine learning at accionlabsA step towards machine learning at accionlabs
A step towards machine learning at accionlabs
 
Lecture29
Lecture29Lecture29
Lecture29
 
1.introduction to ai
1.introduction to ai1.introduction to ai
1.introduction to ai
 
IT201 Basics of Intelligent Systems-1.pptx
IT201 Basics of Intelligent Systems-1.pptxIT201 Basics of Intelligent Systems-1.pptx
IT201 Basics of Intelligent Systems-1.pptx
 
Artificial Intelligence power point presentation document
Artificial Intelligence power point presentation documentArtificial Intelligence power point presentation document
Artificial Intelligence power point presentation document
 
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...Publish or Perish: Questioning the Impact of Our Research on the Software Dev...
Publish or Perish: Questioning the Impact of Our Research on the Software Dev...
 
Computational Thinking in the Workforce and Next Generation Science Standards...
Computational Thinking in the Workforce and Next Generation Science Standards...Computational Thinking in the Workforce and Next Generation Science Standards...
Computational Thinking in the Workforce and Next Generation Science Standards...
 
Week1- Introduction.pptx
Week1- Introduction.pptxWeek1- Introduction.pptx
Week1- Introduction.pptx
 
Inverse Modeling for Cognitive Science "in the Wild"
Inverse Modeling for Cognitive Science "in the Wild"Inverse Modeling for Cognitive Science "in the Wild"
Inverse Modeling for Cognitive Science "in the Wild"
 
Basic quantitative research
Basic quantitative researchBasic quantitative research
Basic quantitative research
 
module_1_ppt.pdf
module_1_ppt.pdfmodule_1_ppt.pdf
module_1_ppt.pdf
 
Thesis
ThesisThesis
Thesis
 
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
 
Artificial Intelligence and The Complexity
Artificial Intelligence and The ComplexityArtificial Intelligence and The Complexity
Artificial Intelligence and The Complexity
 
Data science in 10 steps
Data science in 10 stepsData science in 10 steps
Data science in 10 steps
 

Recently uploaded

Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...soniya singh
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantAxelRicardoTrocheRiq
 
Project Based Learning (A.I).pptx detail explanation
Project Based Learning (A.I).pptx detail explanationProject Based Learning (A.I).pptx detail explanation
Project Based Learning (A.I).pptx detail explanationkaushalgiri8080
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfkalichargn70th171
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...gurkirankumar98700
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptkotipi9215
 
The Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfThe Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfPower Karaoke
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...stazi3110
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityNeo4j
 
Professional Resume Template for Software Developers
Professional Resume Template for Software DevelopersProfessional Resume Template for Software Developers
Professional Resume Template for Software DevelopersVinodh Ram
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...Christina Lin
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackVICTOR MAESTRE RAMIREZ
 
Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...aditisharan08
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...kellynguyen01
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideChristina Lin
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWave PLM
 
DNT_Corporate presentation know about us
DNT_Corporate presentation know about usDNT_Corporate presentation know about us
DNT_Corporate presentation know about usDynamic Netsoft
 
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptxKnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptxTier1 app
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxbodapatigopi8531
 

Recently uploaded (20)

Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
Russian Call Girls in Karol Bagh Aasnvi ➡️ 8264348440 💋📞 Independent Escort S...
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
Project Based Learning (A.I).pptx detail explanation
Project Based Learning (A.I).pptx detail explanationProject Based Learning (A.I).pptx detail explanation
Project Based Learning (A.I).pptx detail explanation
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.ppt
 
The Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfThe Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdf
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
 
Call Girls In Mukherjee Nagar 📱 9999965857 🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...
Call Girls In Mukherjee Nagar 📱  9999965857  🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...Call Girls In Mukherjee Nagar 📱  9999965857  🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...
Call Girls In Mukherjee Nagar 📱 9999965857 🤩 Delhi 🫦 HOT AND SEXY VVIP 🍎 SE...
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
Professional Resume Template for Software Developers
Professional Resume Template for Software DevelopersProfessional Resume Template for Software Developers
Professional Resume Template for Software Developers
 
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
ODSC - Batch to Stream workshop - integration of Apache Spark, Cassandra, Pos...
 
Cloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStackCloud Management Software Platforms: OpenStack
Cloud Management Software Platforms: OpenStack
 
Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...Unit 1.1 Excite Part 1, class 9, cbse...
Unit 1.1 Excite Part 1, class 9, cbse...
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
 
What is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need ItWhat is Fashion PLM and Why Do You Need It
What is Fashion PLM and Why Do You Need It
 
DNT_Corporate presentation know about us
DNT_Corporate presentation know about usDNT_Corporate presentation know about us
DNT_Corporate presentation know about us
 
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptxKnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
KnowAPIs-UnknownPerf-jaxMainz-2024 (1).pptx
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 

18CSC305J – Artificial Intelligence - UNIT 1.pptx

  • 1. 18CSC305J – Artificial Intelligence Dr S Raguvaran | CINTEL | SRM University Dr S Raguvaran | CINTEL | SRM University 1
  • 2. Dr S Raguvaran | CINTEL | SRM University 2 18CSC305J – Artificial Intelligence (Lab | DO 2 | Batch 1) Code: bo4wppl 18CSC305J – Artificial Intelligence (Lab | DO 5 | Batch 2) Code: ptmzhtj 18CSC305J - Artificial Intelligence (Theory) Code : v7wadnh Google Classroom Codes
  • 3. Dr S Raguvaran | CINTEL | SRM University 3 Course Outcomes CO1 Formulate a problem and build intelligent agents CO2 Apply appropriate searching techniques to solve a real-world problem CO3 Analyze the problem and infer new Knowledge using suitable Knowledge representation schemes CO4 Prepare a plan and solve real-world problems using learning algorithms CO5 Design an expert system and implement advanced techniques in Artificial Intelligence agents
  • 4. Dr S Raguvaran | CINTEL | SRM University 4 Overall Assessment Plan CLA 1 (10 marks) CLA 2 (15 marks) CLA 3 (15 marks) CLA 4 (10 marks) THEORY LAB THEORY LAB THEORY LAB THEORY LAB 5 Marks (CT 1) (Unit 1) 5 marks (LAB – 3 EXP) 7.5 Marks (CT 2) (Unit 2 & 3) 5 Marks (4 EXP) + 2.5 Marks (PROJECT) 2.5 Marks (CT 3) (Unit 4 & 5) + 5 Marks (PROJEC T) 5 Marks (3 EXP) +2.5 Marks (PROJEC T) 5 Marks (Assignme nt / Quiz / ST / Hackerrank / …) 5 Marks (PROJECT)
  • 5. Test Schedule S.No. DATE TEST TOPICS DURATION 1 01-02-2024 Cycle Test – I Unit I 1 Hours 2 21-03-2024 Cycle Test – II Unit II and III 2 Hours 3 30-04-2024 Cycle Test – III Unit IV and V 1 Hours
  • 6. Theory Assessment Plan CYCLE TEST PATTERN CT-1 Pattern – 25 marks 10 MCQs, 3 * 5-mark questions (out of 4) Portion – Unit 1 CT-2 Pattern – 50 marks (Open Book Examination) 5 * 10-mark questions (out of 7) Portion – Unit 2 & 3 CT-3 Pattern – 25 marks 10 MCQs, 3 * 5-mark questions (out of 4) Portion – Unit 4 & 5
  • 7. Lab Assessment Plan 1 Lab 1: Implementation of toy problems CLAP1: 5 marks 3 Exp= 3 marks Viva= 2 marks 2 Lab 2: Developing agent programs for real world problems 3 Lab 3: Implementation of constraint satisfaction problems 4 Lab4: Implementation and Analysis of DFS and BFS for same application 5 Lab 5: Developing Best first search and A* Algorithm for real world problems CLAP2: 7.5 marks 4 Exp= 5 marks Project Implementation-(Review 1) = 2.5 marks 6 Lab 6: Implementation of unification for real world problems. 7 Lab 7: Implementation of uncertain methods for an application (Fuzzy logic/ Dempster Shafer Theory/Monty Hall) 8 Lab 8: Implementation of learning algorithms for an application CLAP3: 7.5 marks 9 Lab 9: Implementation of NLP programs 3 Exp= 5 marks Project Implementation-(Final Review) = 2.5 marks 10 Lab 10: Applying deep learning methods to solve an application Course Project:  Documentation – 5 marks  Flow – 1 mark  Explanation – 2 marks  Properly documented – 2 marks CLAP4: 5 marks
  • 8. Rubrics for Lab Exercises  Program Documentation  Aim  Steps/ Procedure  Implementation/Code  O/P (Screenshots)  Result  Rubrics for Lab Programs: Problem Explanation: 2 Marks Implementation: 3 Marks Coding Standards: 2 Marks Output: 3 Marks Total: 10 Marks
  • 9. Rubrics for Mini Project Case Study: (Total 15 Marks) Review-1: (2.5 marks – CT2 (Lab)) Team and Title (societal benefit) Selection – 2 marks Problem statement – 1 mark Objective with technical depth – 2 marks Total 5 marks – converted to 2.5 marks. Review-2: (2.5 Marks – CT3 (Lab)) Proposed Workflow – 2 marks Implementation – 5 marks Presentation (communication, individual contribution, question and answers) – 3 marks Total 10 marks – converted to 2.5 marks.
  • 10. Rubrics for Mini Project Review-3: (5 Marks – CT3 Theory) Project demonstration, Explanation – 5 marks Presentation (communication, individual contribution, question and answers) - 3 marks Github Upload – 2 marks Total 10 marks – converted to 5 marks. Instructions: Team Members: 3 max (Each team should do a Unique project) Team Members’ contributions need to be measured and graded accordingly.
  • 11. Unit 1 List of Topics • Introduction to AI – AI techniques • Problem solving with AI • AI Models, Data acquisition and learning aspects in AI • Problem solving – Problem solving process, Formulating problems • Problem types and characteristics • Problem space and search • Intelligent agent • Rationality and Rational agent with performance measures • Flexibility and Intelligent agents • Task environment and its properties • Types of agents • Other aspects of agents • Constraint satisfaction problems (CSP) • Crypto arithmetic puzzles • CSP as a search problem – constraints and representation • CSP – Backtracking, Role of heuristic • CSP – Forward checking and constraint propagation • CSP – Intelligent backtracking
  • 12. Artificial Intelligence - Introduction WHAT IS AI? Dr S Raguvaran | CINTEL | SRM University 12
  • 13. Artificial Intelligence - Introduction What if we give the following abilities to our Conventional Car? Learning Reasoning Problem-solving Perception Speech Recognition Language Understanding Dr S Raguvaran | CINTEL | SRM University 13
  • 14. Artificial Intelligence - Introduction WHAT IS AI? Dr S Raguvaran | CINTEL | SRM University 14
  • 15. Artificial Intelligence - Introduction General Definition Artificial Intelligence (AI) is a field of computer science that aims to create machines or systems that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, speech recognition, and language understanding. Dr S Raguvaran | CINTEL | SRM University 15
  • 16. Views of Artificial Intelligence “[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning . . .” (Bellman, 1978) “The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992) “The art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990) “Computational Intelligence is the study of the design of intelligent agents.” (Poole et al., 1998) Dr S Raguvaran | CINTEL | SRM University 16
  • 17. Views of Artificial Intelligence 01 02 03 04 Thinking Humanly Acting Humanly Thinking Rationally Acting Rationally Dr S Raguvaran | CINTEL | SRM University 17
  • 18. The Four Categories of AI 1. Acting humanly: The Turing Test approach 2. Thinking humanly: The cognitive modeling approach 4. Acting rationally: The rational agent approach 3. Thinking rationally: The “laws of thought” approach Dr S Raguvaran | CINTEL | SRM University 18
  • 19. 1. Acting humanly: The Turing Test approach Alan Turing Dr S Raguvaran | CINTEL | SRM University 19
  • 20. 1. Acting humanly: The Turing Test approach The Turing Test, introduced by Alan Turing, evaluates a machine's ability to convincingly imitate human responses in a conversation. If a human judge cannot reliably distinguish between machine and human based on responses alone, the machine is considered to have passed the test, indicating a high level of artificial intelligence. The test assesses natural language understanding and conversational behavior. Dr S Raguvaran | CINTEL | SRM University 20
  • 21. Activity: Automated Reasoning
Patient ID | Age | Gender | Symptoms | Medical History | Diagnosis
1 | 35 | Female | Fever, Cough, Fatigue | None | Common Cold
2 | 28 | Male | Headache, Nausea, Vomiting | Migraine | Migraine
3 | 40 | Female | Fever, Cough, Shortness of Breath | None | COVID-19 (Corona)
4 | 55 | Male | Abdominal Pain, Nausea | High Cholesterol | Gallbladder Stone
5 | 60 | Male | Chest Pain, Shortness of Breath, Sweating | High Blood Pressure | Heart Attack
Dr S Raguvaran | CINTEL | SRM University 21
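The automated-reasoning activity above can be sketched as a tiny rule-based diagnoser. This is a minimal illustration only: the symptom-to-diagnosis rules are assumptions drawn from the activity table, not medical guidance.

```python
# Minimal rule-based diagnosis sketch for the activity above.
# Rules (symptom set -> diagnosis) are illustrative assumptions from the table.

RULES = [
    ({"fever", "cough", "shortness of breath"}, "COVID-19"),
    ({"fever", "cough", "fatigue"}, "Common Cold"),
    ({"headache", "nausea", "vomiting"}, "Migraine"),
    ({"abdominal pain", "nausea"}, "Gallbladder Stone"),
    ({"chest pain", "shortness of breath", "sweating"}, "Heart Attack"),
]

def diagnose(symptoms):
    """Return the diagnosis whose rule is fully contained in the symptoms."""
    observed = {s.lower() for s in symptoms}
    best, best_overlap = "Unknown", 0
    for rule_symptoms, diagnosis in RULES:
        overlap = len(rule_symptoms & observed)
        if rule_symptoms <= observed and overlap > best_overlap:
            best, best_overlap = diagnosis, overlap
    return best

print(diagnose(["Fever", "Cough", "Shortness of Breath"]))  # COVID-19
print(diagnose(["Headache", "Nausea", "Vomiting"]))         # Migraine
```

Such hand-written rules are exactly the kind of stored knowledge the Turing Test capabilities on the next slide call "automated reasoning": drawing conclusions from represented facts.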
  • 22. 1. Acting humanly: The Turing Test approach To pass the computer would need to possess the following capabilities: 1. Natural language processing to enable it to communicate successfully in English; 2. Knowledge representation to store what it knows or hears; 3. Automated reasoning to use the stored information to answer questions and to draw new conclusions 4. Machine learning to adapt to new circumstances and to detect and extrapolate patterns. Dr S Raguvaran | CINTEL | SRM University 22
  • 23. 2. Thinking humanly: The cognitive modeling approach Understanding Human Thought Verification through Behaviour Cognitive Science vs. AI Dr S Raguvaran | CINTEL | SRM University 23
  • 24. 2. Thinking humanly: The cognitive modeling approach Understanding Human Thought To claim that a program thinks like a human, we need insights into how humans think. Three approaches include introspection, psychological experiments, and brain imaging. The goal is to develop a precise theory of the mind, express it as a computer program, and verify its behavior against human actions. Dr S Raguvaran | CINTEL | SRM University 24
  • 25. 2. Thinking humanly: The cognitive modeling approach Cognitive Science vs. AI Cognitive science involves experimental investigation of actual humans or animals, while AI assumes the reader has only a computer for experimentation. Both fields continue to influence each other, particularly in areas like computer vision, where neurophysiological evidence informs computational models. Dr S Raguvaran | CINTEL | SRM University 25
  • 26. 3. Thinking rationally: The “laws of thought” approach Syllogisms: “Socrates is a man; all men are mortal; therefore, Socrates is mortal.” Dr S Raguvaran | CINTEL | SRM University 26
  • 27. 3. Thinking rationally: The “laws of thought” approach Syllogisms You should always follow your dreams. I just spent the entire day dreaming about winning the lottery. I'm practically a millionaire by bedtime. Dr S Raguvaran | CINTEL | SRM University 27
  • 28. 3. Thinking rationally: The “laws of thought” approach Aristotle's Contribution: Aristotle, a Greek philosopher, made early attempts to codify "right thinking" through syllogisms, providing patterns for irrefutable reasoning processes. Logicist Aspirations in AI: The logicist tradition in AI aims to build intelligent systems based on programs capable of solving problems using logical reasoning. Development of Logic: Logicians in the 19th century created a precise notation for expressing statements about various objects and their relations, extending beyond the realm of numbers. By 1965, programs capable, in principle, of solving any solvable problem described in logical notation had been developed, marking the emergence of the logicist tradition in artificial intelligence. Dr S Raguvaran | CINTEL | SRM University 28
  • 29. Dr S Raguvaran | CINTEL | SRM University 29 Choose the appropriate synonyms for the word "rational": 1. Logical 2. Reasonable 3. Intelligent 4. Coherent 5. Judicious 6. Sensible 7. Wise 8. Sound 9. Enlightened 10. Reasoned 11. All of the above
  • 30. Dr S Raguvaran | CINTEL | SRM University 30 4. Acting rationally: The rational agent approach Definition of Agent: An agent is something that acts, and in the context of computer science, agents are expected to operate autonomously, perceive their environment, persist over time, adapt to change, and create and pursue goals. Rational Agent: A rational agent is one that acts to achieve the best outcome or, in uncertain situations, the best expected outcome. Rational agents go beyond correct inferences, as correct inference is just one mechanism for achieving rationality. Rationality also involves situations where no provably correct action exists. Abstractly, an agent is a function from percept histories to actions: f : P* → A
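The abstract mapping f : P* → A can be made concrete as a table-driven agent. A minimal sketch follows, assuming a hypothetical two-cell vacuum world; the percepts, actions, and lookup table are illustrative, not a prescribed design.

```python
# Sketch of f : P* -> A as a table-driven agent.
# The two-cell vacuum-world percepts/actions are illustrative assumptions.

def table_driven_agent():
    percepts = []  # the growing percept history, i.e. an element of P*
    # Hypothetical lookup table from percept sequences to actions.
    table = {
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
    }
    def agent(percept):
        percepts.append(percept)
        # Fall back on the latest percept when the full history is not tabulated.
        return table.get(tuple(percepts), table.get((percept,), "NoOp"))
    return agent

agent = table_driven_agent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("A", "Clean")))  # Right
```

The table grows exponentially with history length, which is precisely why practical agent programs compute actions rather than look them up.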
  • 31. Dr S Raguvaran | CINTEL | SRM University 31 Rational agents go beyond correct inferences | Why? Exploration-Exploitation Tradeoffs: Exploration involves trying out new actions or strategies to gather more information about the environment. Exploitation involves choosing the known, optimal actions based on current knowledge. In the context of rationality, exploration is akin to considering situations where no provably correct action is known. It involves taking risks to discover potentially better strategies.
  • 32. Dr S Raguvaran | CINTEL | SRM University 32 Advantages of Rational-Agent Approach: 1. The rational-agent approach is more general than the "laws of thought" approach, encompassing various mechanisms for achieving rationality. 2. It is more amenable to scientific development than approaches based on human behavior or thought, as the standard of rationality is well-defined and can be unpacked to generate provably rational agent designs. Complexity of Rationality: 1. Achieving perfect rationality (always doing the right thing) is not feasible in complicated environments due to high computational demands. 2. Constructing rational agents involves addressing a wide variety of issues, despite the apparent simplicity of the problem statement. 4. Acting rationally: The rational agent approach
  • 33. Dr S Raguvaran | CINTEL | SRM University 33 AI Approach Ability 1. Acting humanly: The Turing Test approach a. Addresses the AI system's ability to act in a way that maximizes the achievement of its goals. 2. Thinking humanly: The cognitive modeling approach b. Assesses the ability of the AI system to exhibit behavior indistinguishable from that of a human. 3. Thinking rationally: The “laws of thought” approach c. Evaluates the AI system's internal processes to match human-like thinking and reasoning. 4. Acting rationally: The rational agent approach d. Focuses on the AI system's ability to follow logical principles and make accurate inferences. Match The Following
  • 34. Dr S Raguvaran | CINTEL | SRM University 34 AI Approach Ability 1. Acting humanly: The Turing Test approach b. Assesses the ability of the AI system to exhibit behavior indistinguishable from that of a human. 2. Thinking humanly: The cognitive modeling approach c. Evaluates the AI system's internal processes to match human-like thinking and reasoning. 3. Thinking rationally: The “laws of thought” approach d. Focuses on the AI system's ability to follow logical principles and make accurate inferences. 4. Acting rationally: The rational agent approach a. Addresses the AI system's ability to act in a way that maximizes the achievement of its goals. Match The Following |Answer
  • 35. Dr S Raguvaran | CINTEL | SRM University 35 Which AI approach focuses on making decisions to achieve the best outcome or expected outcome? a. Acting humanly: The Turing Test approach b. Thinking humanly: The cognitive modeling approach c. Thinking rationally: The "laws of thought" approach d. Acting rationally: The rational agent approach Answer: d. Acting rationally: The rational agent approach
  • 36. Dr S Raguvaran | CINTEL | SRM University 36 THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE
  • 37. Dr S Raguvaran | CINTEL | SRM University 37 THE HISTORY OF ARTIFICIAL INTELLIGENCE
  • 38. Dr S Raguvaran | CINTEL | SRM University 38 1. Natural Language Processing (NLP) 2. Computer Vision 3. Machine Learning and Deep Learning 4. Autonomous Vehicles 5. Robotics 6. Healthcare Applications 7. Generative Adversarial Networks (GANs) 8. Quantum Computing 9. Explainable AI (XAI) 10. Edge Computing for AI The State of The Art : AI
  • 39. Dr S Raguvaran | CINTEL | SRM University 39 Advantages of Artificial Intelligence 1. More powerful and more useful computers 2. New and improved interfaces 3. Solving new problems 4. Better handling of information 5. Relieves information overload 6. Conversion of information into knowledge Disadvantages 1. Increased costs 2. Difficulty with software development - slow and expensive 3. Few experienced programmers 4. Few practical products have reached the market as yet.
  • 40. Dr S Raguvaran | CINTEL | SRM University 40 AI Techniques Challenges and Diversity in Artificial Intelligence: A Spectrum of Applications and Complexities 1.Diverse Applications: •AI is applied across various domains, such as medical and manufacturing. 2.Day-to-Day Problems: •AI addresses everyday challenges, contributing to daily life applications. 3.Identification and Authentication: •AI plays a vital role in solving security-related identification and authentication problems. 4.Classification Problems: •Decision-making systems often involve classification challenges tackled by AI algorithms. 5.Interdependent and Cross-Domain Issues: •AI deals with challenges that span multiple domains, including the complex nature of Cyber-Physical Systems. 6.Computational Complexity: •AI problems often require significant computational resources due to their complexity. 7.Explainability of AI Techniques: •The transparency and interpretability of AI techniques, especially in deep learning, pose challenges. 8.Ethical Considerations: •AI introduces ethical concerns, such as bias in algorithms and privacy implications.
  • 41. Dr S Raguvaran | CINTEL | SRM University 41 Introduction to AI Techniques Definition: AI techniques are methods that facilitate knowledge acquisition. The primary AI techniques include Search, Use of Knowledge, and Abstraction. Search Technique Definition: Search offers a problem-solving framework for situations lacking a direct approach. It explores various action sequences until a solution is found. Advantages: • Effective for problem-solving. • Requires coding of applicable operators. Disadvantages: • Impractical for large search spaces.
  • 42. Dr S Raguvaran | CINTEL | SRM University 42 Use of Knowledge Definition: Involves solving complex problems by manipulating object structures. Knowledge representation in AI techniques is crucial. Representation Guidelines: • Captures generalization. • Understandable to human preparers. • Easily adjustable and adaptable. • Aids in error correction and adapting to changes. • Supports diverse situations. Abstraction Technique Definition: Abstraction separates important features from unimportant ones, aiding in simplifying processes.
  • 43. Broad Categories of AI Problems 1. Structured 2. Unstructured 3. Linear 4. Non-Linear Dr S Raguvaran | CINTEL | SRM University 43
  • 44. Dr S Raguvaran | CINTEL | SRM University 44 Structured AI Problems: Scenario: Managing a database of employee records in a large organization. The goal is to create an AI system that can efficiently retrieve and update employee information based on structured data fields like name, employee ID, department, and salary. Unstructured AI Problems: Scenario: Analyzing and summarizing unstructured text data from customer reviews on social media platforms. The AI system needs to identify sentiments, key topics, and extract relevant information from unstructured textual data to provide insights for business improvement.
  • 45. Dr S Raguvaran | CINTEL | SRM University 45 Linear AI Problems: Scenario: Predicting the future sales of a retail store based on historical sales data. The goal is to build a linear regression model that correlates factors such as advertising spend, promotions, and seasonality to forecast the sales in a straightforward, linear fashion. Non-Linear AI Problems: Scenario: Recognizing and classifying objects in images for autonomous vehicles. This involves solving a non-linear problem where the relationships between pixels and object features are complex and may require advanced techniques such as neural networks to capture the intricate patterns in the data.
  • 46. Dr S Raguvaran | CINTEL | SRM University 46 https://www.google.com/fbx?fbx=tic_tac_toe Problem Solving with AI Let us Play A Game tic-tac-toe How can Artificial Intelligence (AI) be leveraged to improve and optimize the management of wet waste?
  • 47. Dr S Raguvaran | CINTEL | SRM University 47 Well-structured problems and ill-structured problems are two categories used to describe the nature of problems that can be addressed with artificial intelligence (AI) and other problem-solving approaches. Here's an explanation of each:
  • 48. Dr S Raguvaran | CINTEL | SRM University 48 Well-Structured Problems: •Definition: Well-structured problems have clearly defined goals, a finite set of possible solutions, and a well-understood set of rules or procedures to reach those solutions. •Characteristics: • The problem space is well-defined. • Clear criteria exist for determining when the problem is solved. • Solutions can be derived through a systematic and logical process. • Examples include mathematical equations, puzzles, and optimization problems. •AI Approach: Well-structured problems are often suitable for algorithmic solutions, and AI systems can be designed to follow predefined rules and procedures to find optimal or near-optimal solutions.
  • 49. Dr S Raguvaran | CINTEL | SRM University 49 Ill-Structured Problems: •Definition: Ill-structured problems lack clear goals, have a wide range of possible solutions, and may not have well-defined problem spaces or solution procedures. These problems often involve ambiguity and uncertainty. •Characteristics: • The problem space is not well-defined, and the problem may evolve over time. • Multiple possible solutions exist, and the criteria for a "good" solution may be subjective. • There may be uncertainty or incomplete information. • Examples include real-world issues like designing a new product, formulating business strategies, or addressing complex social problems. •AI Approach: Ill-structured problems are more challenging for traditional AI approaches, as they require handling ambiguity, adapting to changing conditions, and incorporating human judgment. AI methods for these problems often involve machine learning, natural language processing, and other techniques that can handle complexity and uncertainty.
  • 50. Dr S Raguvaran | CINTEL | SRM University 50 Scenario 1: You are tasked with solving a Sudoku puzzle where each row, column, and 3x3 grid must contain all of the digits from 1 to 9. What type of problem is this? A. Well-Structured B. Ill-Structured Answer: A. Well-Structured Scenario 2: A group of students is given the challenge of developing a marketing strategy for a new product launch. The team needs to consider target audiences, advertising channels, and budget allocation. What type of problem is this? A. Well-Structured B. Ill-Structured Answer: B. Ill-Structured
  • 51. Dr S Raguvaran | CINTEL | SRM University 51 Scenario 3: Your assignment is to address the challenge of improving public transportation in a growing city, taking into account factors like traffic patterns, environmental impact, and user satisfaction. What type of problem is this? A. Well- Structured B. Ill-Structured Answer: B. Ill-Structured
  • 52. Dr S Raguvaran | CINTEL | SRM University 52 Summary AI Perspectives Intelligence as Rational Action Historical Foundations Interdisciplinary Contributions Cycles of Progress and Challenges
  • 53. Dr S Raguvaran | CINTEL | SRM University 53 Concept | Mind Mapping 1. Start with a Central Idea 2. Create Main Branches 3. Add Subtopics and Details 4. Connect Ideas Visually 5. Enhance with Keywords and Colors
  • 54. Dr S Raguvaran | CINTEL | SRM University 54
  • 55. Dr S Raguvaran | CINTEL | SRM University 55 AI Models
  • 56. Dr S Raguvaran | CINTEL | SRM University 56
  • 57. Dr S Raguvaran | CINTEL | SRM University 57 1.Semiotic models refer to theoretical frameworks or systems that analyze and interpret signs and symbols within various communication processes. These models are based on semiotics, the study of signs and symbols and their meanings in different contexts. Semiotic models help understand how signs convey meaning, emphasizing the role of language, culture, and context in communication. AI Models
  • 58. Dr S Raguvaran | CINTEL | SRM University 58 1. Icon: Definition: An icon is a sign that bears a resemblance or similarity to the thing it represents. Example: A stylized drawing or diagram of an eye can be an icon representing the concept of vision. 2. Index: Definition: An index is a sign that has a direct connection or correlation with the object it signifies. The relationship is based on cause-and-effect or proximity. Example: Smoke is an indexical sign of fire because smoke is typically caused by the presence of fire. 3. Symbol: Definition: A symbol is a sign where the relationship between the signifier (the symbol itself) and the signified (the concept it represents) is based on convention or agreement. Example: Words, such as "tree" or "love," are symbols where the connection between the word and the concept is established through cultural or linguistic conventions.
  • 59. Dr S Raguvaran | CINTEL | SRM University 59 2. Statistical AI models, often referred to as statistical machine learning models, are a subset of artificial intelligence (AI) that relies on statistical methods and algorithms to learn from data and make predictions or decisions. These models use statistical techniques to analyze patterns, relationships, and probabilities within datasets. Here's an overview of key aspects: Common statistical AI models include linear regression, logistic regression, decision trees, support vector machines, and various types of neural networks. Each model has its strengths and is suitable for different types of tasks, such as regression, classification, or clustering.
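Of the statistical models listed above, simple linear regression is the easiest to show end to end. The sketch below fits a line by ordinary least squares using only the standard library; the toy data are illustrative assumptions.

```python
# Minimal sketch of one statistical AI model named above: simple linear
# regression fit by ordinary least squares (toy noiseless data, stdlib only).

def fit_line(xs, ys):
    """Return (slope, intercept) minimising the sum of squared errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Data generated from y = 2x + 1, so the fit should recover slope 2, intercept 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
slope, intercept = fit_line(xs, ys)
print(round(slope, 6), round(intercept, 6))  # 2.0 1.0
```

With noisy real data (advertising spend versus sales, as in the retail scenario later in this unit) the same formula gives the best-fitting line rather than an exact recovery.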
  • 60. Data acquisition and learning aspects in AI Various AI –related topics on data acquisition and machine learning • Knowledge discovery – Data mining and machine learning • Computational learning theory (COLT) • Neural and evolutionary computation • Intelligent agents and multi-agent systems • Multi-perspective integrated intelligence
  • 61. Dr S Raguvaran | CINTEL | SRM University 61 Data Acquisition: Definition: Data acquisition refers to the process of collecting and gathering raw information or data from various sources. In the context of AI, high- quality and relevant data is crucial for training machine learning models. The effectiveness of AI systems heavily depends on the quality, quantity, and diversity of the data used for training. Knowledge Discovery: Data Mining and Machine Learning Data Mining: Knowledge discovery process that involves extracting patterns and valuable insights from large datasets using various techniques such as clustering, association rule mining, and anomaly detection. Machine Learning: A subset of artificial intelligence focused on developing algorithms and models that enable systems to learn patterns from data, make predictions, and improve performance without explicit programming.
  • 62. Dr S Raguvaran | CINTEL | SRM University 62 Computational Learning Theory (COLT): COLT is a branch of theoretical computer science dedicated to studying the mathematical foundations of machine learning. It explores questions about the efficiency, feasibility, and limitations of learning algorithms, providing theoretical insights into their behavior. This field helps establish rigorous frameworks for understanding how machines can learn from data and generalize to new, unseen instances. Neural and Evolutionary Computation: Neural computation focuses on artificial neural networks for learning and information processing, while evolutionary computation utilizes principles of biological evolution, such as genetic algorithms, for optimization and problem- solving, combining to enhance adaptive and intelligent systems.
  • 63. Dr S Raguvaran | CINTEL | SRM University 63 Intelligent Agents and Multi-Agent Systems: Intelligent agents are autonomous entities capable of perceiving, reasoning, and acting to achieve goals, while multi-agent systems involve multiple interacting intelligent agents collaborating or competing to solve complex problems, simulating dynamic real-world scenarios. Multi-Perspective Integrated Intelligence: Multi-perspective integrated intelligence refers to a holistic approach that combines insights from diverse viewpoints and sources, fostering a comprehensive understanding to inform decision-making processes and enhance overall system intelligence. This concept aims to synergize varied perspectives, promoting a more nuanced and effective approach to problem-solving in complex environments.
  • 65. Dr S Raguvaran | CINTEL | SRM University 65 Problem Solving with AI The “Formulate, Search, Execute” design for an agent
  • 66. Dr S Raguvaran | CINTEL | SRM University 66 Touring in Arad, Romania: The agent in Arad, Romania, aims to reach Bucharest promptly, simplifying its decision problem due to a nonrefundable flight the next day. Goal formulation, driven by the current situation, guides the agent's decision-making in this complex touring scenario.
  • 67. • The initial state that the agent starts in (the starting state, which the agent knows). • Ex- The initial state for our agent in Romania might be described as In(Arad) • A description of the possible actions/operators available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s. • Ex- from the state In(Arad), the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}. • A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term successor to refer to any state reachable from a given state by a single action. • Ex- RESULT(In(Arad), Go(Zerind)) = In(Zerind) A problem can be defined formally by five components: Problem Solving with AI Components of a problem
  • 68. • Together, the initial state, actions, and transition model implicitly define the state space of the problem—the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions. A path in the state space is a sequence of states connected by a sequence of actions. • Ex- The map of Romania shown can be interpreted as a state-space graph if we view each road as standing for two driving actions, one in each direction. • The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. • Ex- The agent’s goal in Romania is the singleton set {In(Bucharest )} Problem Solving with AI Components of a problem
  • 69. • A path cost function that assigns a numeric cost to each path. The problem- solving agent chooses a cost function that reflects its own performance measure. • The step cost of taking action a to go from one state ‘s’ to reach state ‘y’ is denoted by c(s, a, y). Ex- For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometres. We assume that the cost of a path can be described as the sum of the costs of the individual actions along the path. The step costs for Romania are shown in Figure as route distances. We assume that step costs are nonnegative. • A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions. Problem Solving with AI Components of a problem
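The five components above can be written down directly in code. The sketch below formulates the Romania touring problem over a small subset of the map (distances in km from the standard AIMA map); the class and method names are illustrative choices, not a fixed API.

```python
# Sketch of the five-component problem formulation for the Romania example.
# Only a subset of the map is included; distances are the standard AIMA values.

ROADS = {  # state -> {neighbouring city: step cost in km}
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

class RomaniaProblem:
    initial_state = "Arad"                 # component 1: initial state

    def actions(self, s):                  # component 2: ACTIONS(s)
        return [f"Go({city})" for city in ROADS.get(s, {})]

    def result(self, s, a):                # component 3: transition model
        return a[3:-1]                     # "Go(Sibiu)" -> "Sibiu"

    def goal_test(self, s):                # component 4: goal test
        return s == "Bucharest"

    def step_cost(self, s, a, y):          # component 5: c(s, a, y)
        return ROADS[s][y]

p = RomaniaProblem()
print(p.actions("Arad"))               # ['Go(Sibiu)', 'Go(Timisoara)', 'Go(Zerind)']
print(p.result("Arad", "Go(Zerind)"))  # Zerind
```

Any uninformed or informed search algorithm can then be run against this object: it only ever needs these five components, never the map itself.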
  • 70. Formulating Problems • Problem Formulation : Choosing relevant set of states & feasible set of operators for moving from one state to another. • Search : Is a process of imagining sequences of operators(actions) applied to initial state and to see which state reaches goal state.
  • 71. Toy Problems vs Real-world Problems • A toy problem is intended to illustrate or exercise various problem solving methods. It can be given a concise, exact description. • A real world problem is one whose solutions people actually care about. Such problems tend not to have a single agreed-upon description, but we can give the general flavor of their formulations.
  • 72. Dr S Raguvaran | CINTEL | SRM University 72 Problem types and characteristics 1. Deterministic or observable (single-state) 2. Non-observable (multiple-state) 3. Non-deterministic or partially observable 4. Unknown state space
  • 73. Dr S Raguvaran | CINTEL | SRM University 73 1. Deterministic or observable(Single-state problems) • Each state is fully observable and it goes to one definite state after any action. • Here , the goal state is reachable in one single action or sequence of actions. • Deterministic environments ignore uncertainty. • Predictable Outcome: Deterministic problems have outcomes that can be precisely predicted given the initial conditions and a set of defined rules or equations. • No Randomness: These problems do not involve randomness or uncertainty in their solutions. Ex- Vacuum cleaner with sensor.
  • 74. Dr S Raguvaran | CINTEL | SRM University 74 2. Non-observable(Multiple-state problems) / conformant problems • The problem–solving agent does not have any information about the state. • Solution may or may not be reached. • The system may exist in multiple states, and some or all of these states may not be directly visible or measurable. • Observing the system may not provide complete information about its internal states. Conformant Problems: • Conformant problems refer to scenarios where the agent or system must act based on incomplete information about the current state.
  • 75. Dr S Raguvaran | CINTEL | SRM University 75 Example Problem: Autonomous vehicle navigation in an urban environment. Characteristics: Non-Observable: Unable to directly observe internal states of other entities. Multiple-State: Varied road conditions, traffic patterns, and pedestrian behavior. Conformant Problem: Decisions based on partial sensor information, conforming to traffic rules and safety standards. Challenge: Navigating safely through dynamic urban conditions with limited information. Solution: Develop an advanced autonomous driving system incorporating sensor data analysis for decision-making while adhering to safety regulations.
  • 76. Dr S Raguvaran | CINTEL | SRM University 76 3. Non-deterministic(partially observable) problem • Outcome influenced by inherent randomness or uncertainty. • System's internal states not entirely visible or measurable. • Decisions based on incomplete information about the system's state. • Inherent unpredictability introduces variability in outcomes. • Lack of full observability requires strategies that account for uncertainty. • Variability in potential outcomes adds complexity to the problem. • Decision-making becomes challenging due to uncertainties. • Examples include games of chance, stochastic processes, and scenarios with inherent randomness.
  • 77. Dr S Raguvaran | CINTEL | SRM University 77 •Scenario: Creating a lottery prediction agent. •Outcome Influence: Lottery numbers determined by random draw, introducing inherent uncertainty. •Visibility of States: Internal lottery draws states not observable or predictable in advance. •Incomplete Information: Limited details about drawn numbers available before predictions. •Unpredictability: Precise lottery numbers cannot be determined due to the random draw. •Potential Variability: Numerous possible combinations contribute to outcome variability. •Decision-Making Challenges: Developing an agent to adapt strategies for the unpredictable nature of lottery draws. •Example: Lottery prediction software using historical data or patterns to forecast winning numbers.
• 78. Dr S Raguvaran | CINTEL | SRM University 78 4. Unknown state space problems (typically exploration problems) • States and the impact of actions are not known in advance. • Undefined State Space: Complete set of possible states not explicitly known. • Lack of Enumeration: Challenge in listing or specifying all potential states. • Uncharted Territories: Existence of undiscovered or unexplored states or conditions. • Uncertain Boundaries: Boundaries or limits of the state space are unclear.
  • 79. Dr S Raguvaran | CINTEL | SRM University 79 Scenario: An autonomous exploration robot navigating an unknown planet's surface. Undefined State Space: The robot encounters diverse and unanticipated terrains, and the complete set of possible environmental states is not explicitly known. Lack of Enumeration: It's challenging to list or predict all potential conditions the robot might encounter during exploration. Uncharted Territories: Certain areas of the planet may be unexplored, introducing unforeseen environmental states. Uncertain Boundaries: The boundaries or limits of the planet's varied landscapes are unclear.
• 80. Dr S Raguvaran | CINTEL | SRM University 80 Problem Analysis and Representation 1. Compactness 2. Utility 3. Soundness 4. Completeness 5. Generality 6. Transparency
• 81. Dr S Raguvaran | CINTEL | SRM University 81 Initial State: Rod A: [3, 2, 1] (3 disks, largest at the bottom). Goal State: Rod B: [3, 2, 1] or Rod C: [3, 2, 1] (all 3 disks restacked in the same order, largest at the bottom). Operators: Move the top disk of one rod onto the top of another rod, never placing a larger disk on a smaller one. Tower of Hanoi
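The recursive decomposition behind these operators can be sketched in a few lines of Python. A hedged sketch, assuming rod labels 'A', 'B', 'C' and representing each move as a (disk, from_rod, to_rod) tuple; these encodings are choices of this example, not part of any standard formulation.

```python
def hanoi(n, source, target, spare, moves=None):
    """Return the list of (disk, from_rod, to_rod) moves that transfers
    n disks from source to target, obeying the disk-movement rules."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the way to disk n
    moves.append((n, source, target))            # move the largest free disk
    hanoi(n - 1, spare, target, source, moves)   # restack on top of it
    return moves

solution = hanoi(3, 'A', 'B', 'C')
print(len(solution))  # 2**3 - 1 = 7 moves
```

For 3 disks the solver emits the familiar 7-move sequence, illustrating that the operator set above fully determines the solution.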
  • 82. Dr S Raguvaran | CINTEL | SRM University 82 Water Jug Problem •Problem: Water Jug Puzzle in artificial intelligence. •Jugs: Two jugs with capacities 'x' and 'y' liters, and a water source. •Objective: Measure a specific 'z' liters of water without volume markings. •Initial State: Both jugs are empty. •Goal State: One jug contains exactly 'z' liters. •Operations: Filling, emptying, and pouring between jugs. •Challenge: Test of problem-solving and state space search skills. •Approach: Find an efficient sequence of steps to achieve the desired water measurement. •Example: Starting with empty jugs, reach a state where one jug holds 'z' liters.
• 83. •Step 1: Fill the 4-liter jug completely with water. (Current state: (4, 0)) •Step 2: Pour water from the 4-liter jug into the 3-liter jug until it is full, leaving 1 liter in the 4-liter jug. (Current state: (1, 3)) •Step 3: Empty the 3-liter jug. (Current state: (1, 0)) •Step 4: Pour the remaining 1 liter from the 4-liter jug into the 3-liter jug. The 4-liter jug is now empty, and 1 liter is in the 3-liter jug. (Current state: (0, 1)) •Step 5: Fill the 4-liter jug completely again. (Current state: (4, 1)) •Step 6: Pour water from the 4-liter jug into the 3-liter jug until it is full. This leaves 2 liters in the 4-liter jug, which is the required quantity. (Current state: (2, 3)) The sequence of steps achieves the goal of measuring 2 liters using the given 4-liter and 3-liter jugs.
• 84. Dr S Raguvaran | CINTEL | SRM University 84 Input: X = 3 (3-liter jug), Y = 5 (5-liter jug), Z = 4; states are written (X, Y). •Step 1: Fill the 5-liter jug to its maximum capacity. (Current state: (0, 5)) •Step 2: Transfer 3 liters from the 5-liter jug to the 3-liter jug. (Current state: (3, 2)) •Step 3: Empty the 3-liter jug. (Current state: (0, 2)) •Step 4: Transfer the remaining 2 liters from the 5-liter jug to the 3-liter jug. (Current state: (2, 0)) •Step 5: Fill the 5-liter jug to its maximum capacity. (Current state: (2, 5)) •Step 6: Pour water from the 5-liter jug into the 3-liter jug until it is full. This leaves 4 liters in the 5-liter jug, which is the required quantity. (Current state: (3, 4)) •The sequence of steps achieves the goal of obtaining 4 liters of water using the given 5-liter and 3-liter jugs.
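Both walkthroughs above are instances of state-space search, so a solver can recover such a sequence automatically. A minimal sketch using breadth-first search over (x, y) water-level states; the state encoding and the six operations (fill, empty, pour in each direction) mirror the slides, but the function name and path reconstruction are choices of this example.

```python
from collections import deque

def water_jug(x_cap, y_cap, target):
    """Breadth-first search for a shortest state sequence from (0, 0)
    to any state where one jug holds exactly `target` liters."""
    start = (0, 0)
    parent = {start: None}            # also serves as the visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            path, s = [], (a, b)      # reconstruct the state sequence
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_xy = min(a, y_cap - b)   # amount movable from x into y
        pour_yx = min(b, x_cap - a)   # amount movable from y into x
        successors = [
            (x_cap, b), (a, y_cap),                   # fill either jug
            (0, b), (a, 0),                           # empty either jug
            (a - pour_xy, b + pour_xy),               # pour x -> y
            (a + pour_yx, b - pour_yx),               # pour y -> x
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None                       # target unreachable

path = water_jug(3, 5, 4)
print(path)
```

Running it on the slide's input (X = 3, Y = 5, Z = 4) reproduces the 6-step solution above, ending at state (3, 4).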
  • 85. Dr S Raguvaran | CINTEL | SRM University 85 Problem Space Definition: •The problem space refers to the entire set of possible states, actions, and transitions that a problem-solving agent explores while trying to find a solution to a problem. •It encompasses all the possible configurations or arrangements of elements that the agent can encounter during the problem-solving process. Key Components: •States: Represent different configurations or situations within the problem. •Actions/Operators: Define the permissible moves or transformations between states. •Transitions: Specify how the system moves from one state to another based on actions.
  • 86. Dr S Raguvaran | CINTEL | SRM University 86 Example: •In the Tower of Hanoi problem, the problem space includes all possible arrangements of disks on rods, actions like moving a disk, and transitions between different states. Search: Definition: •Search, in the context of artificial intelligence, refers to the systematic exploration of the problem space in order to find a solution. •The goal of a search algorithm is to navigate through the problem space efficiently, moving from the initial state to the goal state by applying a sequence of actions. Key Components: 1.Start State: The initial configuration or situation from which the search begins. 2.Goal State: The desired configuration or situation that the agent aims to achieve. 3.Search Strategy: The method or algorithm used to explore the problem space. Common strategies include depth-first search, breadth-first search, and heuristic-based search. Example: •In the Water Jug Problem, the search involves exploring different states of water levels in the jugs, applying operations like filling, emptying, and pouring, and moving towards the goal state where a specific water measurement is achieved.
  • 87. Dr S Raguvaran | CINTEL | SRM University 87 Connection between Problem Space and Search: Problem space defines the scope of exploration: It outlines the set of states and actions available for consideration during the search process. Search algorithms navigate the problem space: They determine the sequence of actions that lead from the initial state to the goal state, efficiently exploring the problem space to find a solution. Overall Process: Initialization: Start with the initial state of the problem. Search: Use a search algorithm to explore the problem space, moving from state to state. Goal Test: Check if the current state matches the goal state. Solution: If a goal state is reached, the sequence of actions leading to it represents the solution.
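The Initialization / Search / Goal Test / Solution loop above can be written as a generic skeleton. A sketch only: `is_goal` and `successors` are caller-supplied placeholders, and the breadth-first frontier is one choice of search strategy among those the slides list.

```python
from collections import deque

def search(initial, is_goal, successors):
    """Generic state-space search: explore from `initial` until a state
    passes the goal test, returning the action sequence (the solution)."""
    frontier = deque([(initial, [])])        # (state, actions so far)
    explored = set()
    while frontier:
        state, actions = frontier.popleft()  # breadth-first order
        if is_goal(state):                   # goal test
            return actions                   # solution found
        explored.add(state)
        for action, nxt in successors(state):
            if nxt not in explored:
                frontier.append((nxt, actions + [action]))
    return None                              # problem space exhausted

# Tiny usage example: count from 0 up to 3 using a single '+1' operator.
print(search(0, lambda s: s == 3, lambda s: [('+1', s + 1)]))  # ['+1', '+1', '+1']
```

Swapping the deque's `popleft` for `pop` would turn the same skeleton into depth-first search, which is why the slides treat the search strategy as a pluggable component.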
  • 88. Dr S Raguvaran | CINTEL | SRM University 88 INTELLIGENT AGENTS Definition of an Agent: An agent is a system capable of perceiving its environment through sensors and acting upon that environment through actuators. Components of a Human Agent: Human agents have sensory organs like eyes and ears as sensors, and hands, legs, and vocal tract as actuators. Components of a Robotic Agent: Robotic agents may have sensors like cameras and infrared range finders, and actuators like various motors. Components of a Software Agent: Software agents receive sensory inputs such as keystrokes, file contents, and network packets. They act on the environment by displaying on the screen, writing files, and sending network packets. Percept: The term "percept" refers to the agent's perceptual inputs at any given moment.
  • 90. Dr S Raguvaran | CINTEL | SRM University 90 Tabulating Agent Function: The agent function, describing an agent's behavior, can be tabulated. However, for most agents, this table would be very large, potentially infinite unless a bound is placed on the length of percept sequences considered. Agent Function vs. Agent Program: The agent function, an abstract mathematical description, is distinct from the agent program, which is a concrete implementation running within a physical system. The agent function is externally characterized, while the agent program is the internal implementation.
  • 91. Dr S Raguvaran | CINTEL | SRM University 91 Illustration with Vacuum-Cleaner World: The concepts are illustrated with a simple example—the vacuum-cleaner world. This world has two locations, squares A and B, and a vacuum agent that perceives its location and dirt presence. The agent function can be defined abstractly, and an agent program provides a concrete implementation. The table in Figure 2.3 and the program in Figure 2.8 demonstrate this illustration.
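The distinction between the abstract agent function and a concrete program can be made tangible by tabulating the function directly. A sketch for the two-square vacuum world: the table maps whole percept sequences to actions, and the entries shown are illustrative assumptions, since the full table is enormous (infinite without a bound on sequence length), which is exactly the slide's point.

```python
# Partial tabulation of the vacuum agent function: keys are percept
# sequences (tuples of (location, status) percepts), values are actions.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    (('A', 'Clean'), ('B', 'Clean')): 'Left',
}

def agent_function(percept_sequence):
    """Look up the action for the entire percept sequence so far;
    returns None for sequences outside this partial table."""
    return table.get(tuple(percept_sequence))

print(agent_function([('A', 'Clean'), ('B', 'Dirty')]))  # Suck
```

An agent program implementing the same behavior needs none of this table, as the reflex-agent example later in the deck shows.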
• 93. Dr S Raguvaran | CINTEL | SRM University 93 Good Behavior: The Concept Of Rationality Rational Agent: A rational agent is one that does the right thing, where "right" is judged by the consequences of the agent's behavior, measured as the desirability of the resulting environment states. Performance Measure: Performance measures should be designed to reflect the outcomes we actually want in the environment rather than prescribing how the agent should behave; poorly chosen measures lead to practical pitfalls and raise philosophical questions.
  • 94. Dr S Raguvaran | CINTEL | SRM University 94 Rationality 1. Performance Measure: Successful and timely delivery of packages. 2. Agent's Prior Knowledge: Knowledge of the city layout, traffic patterns, and delivery locations. 3. Actions the Agent Can Perform: Navigating through city streets, avoiding obstacles, and delivering packages. 4. Agent's Percept Sequence: Real-time data from sensors about the current environment, traffic conditions, and package status. Consider an autonomous delivery robot tasked with delivering packages in a city. The rationality of the robot at any given time depends on the following factors:
  • 95. Dr S Raguvaran | CINTEL | SRM University 95 What defines the rationality of an agent at any given time? A. Agent's preferences B. Performance measure C. Random actions D. Current mood Answer: B. Performance measure Which factor influences the rational decisions of an agent based on its past experiences? A. Agent's current goals B. Actions performed C. Agent's percept sequence D. Environmental constraints Answer: C. Agent's percept sequence
  • 96. Dr S Raguvaran | CINTEL | SRM University 96 In the context of a rational agent, what is the significance of the agent's prior knowledge? A. Shapes the agent's preferences B. Guides rational decisions C. Determines random actions D. Defines the performance measure Answer: B. Guides rational decisions What distinguishes the agent function from the agent program in artificial intelligence? A. Both are abstract concepts B. Agent function is external, while agent program is internal C. Agent program is abstract, while agent function is concrete D. They are interchangeable terms Answer: B. Agent function is external, while agent program is internal
  • 97. Dr S Raguvaran | CINTEL | SRM University 97 When designing performance measures for agents, what is the recommended approach? A. Define measures based on agent's opinions B. Prescribe predefined behaviors C. Align measures with desired environmental outcomes D. Avoid considering philosophical implications Answer: C. Align measures with desired environmental outcomes
  • 98. Dr S Raguvaran | CINTEL | SRM University 98 Specifying the task environment PEAS Description for Automated Taxi Driver: The PEAS (Performance, Environment, Actuators, Sensors) description for an automated taxi driver involves specifying the task environment, including the performance measure, the environment itself, and the actuators and sensors of the agent. Complexity of the Taxi Driver Task: Unlike the simple vacuum world, the task environment for an automated taxi driver is highly complex and open-ended. The driving task involves a multitude of novel circumstances, making it an intricate problem for discussion and design.
  • 100. Dr S Raguvaran | CINTEL | SRM University 100 Fully Observable vs. Partially Observable: Fully observable environments provide complete state information relevant to the agent's actions, while partially observable environments lack certain aspects, often due to sensor limitations or inaccuracies. Single Agent vs. Multiagent: The distinction between single-agent and multiagent environments involves considering whether entities interact as agents based on their behavior, with chess being competitive and taxi-driving being partially cooperative and competitive.
  • 101. Dr S Raguvaran | CINTEL | SRM University 101 Deterministic vs. Stochastic: Deterministic environments have actions leading to completely determined outcomes, while stochastic environments introduce uncertainty, often due to factors like sensor noise or incomplete observability. Episodic vs. Sequential: Episodic environments involve independent atomic episodes, where each decision is based on the current situation, whereas sequential environments require considering long-term consequences of actions.
  • 102. Dr S Raguvaran | CINTEL | SRM University 102 Static vs. Dynamic: Static environments remain unchanged while the agent deliberates, simplifying decision-making, whereas dynamic environments continuously evolve, demanding continuous decision updates, as seen in taxi driving. Discrete vs. Continuous: The discrete/continuous distinction applies to the state, time, and actions of the environment, with chess being discrete, taxi driving being continuous, and the choice between them influencing problem complexity. Known vs. Unknown: The distinction between known and unknown environments refers to the agent's knowledge about the environment's laws, with known environments having predefined outcomes for all actions, and unknown environments requiring the agent to learn through experience.
  • 103. Dr S Raguvaran | CINTEL | SRM University 103 THE STRUCTURE OF AGENTS Agent Program Role: The agent program processes the current percept from sensors and determines the immediate action for actuators. Architecture Collaboration: The agent program's effectiveness relies on collaboration with the architecture, ensuring alignment with the system's physical capabilities and characteristics.
• 104. Dr S Raguvaran | CINTEL | SRM University 104 In the following part of this section, we present five fundamental types of agent programs that encapsulate the principles foundational to nearly all intelligent systems: 1.Simple reflex agents 2.Model-based reflex agents 3.Goal-based agents 4.Utility-based agents 5.Learning agents
• 105. Dr S Raguvaran | CINTEL | SRM University 105 1.Simple Reflex Agents: • Make decisions based solely on the current percept, ignoring percept history. • Example: The vacuum agent's decision depends only on the current location and dirt presence. Vacuum Agent as a Simple Reflex Agent: • Program is far smaller than the corresponding table. • Decision-making is specific to the current percept, reducing the number of relevant cases from 4^T (over percept histories of length T) to just 4. Condition–Action Rules: • Connections guiding decision-making in simple reflex agents. • Example: "if car-in-front-is-braking then initiate-braking."
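The reflex vacuum agent's condition–action rules fit in a few lines, following the standard textbook REFLEX-VACUUM-AGENT program: the decision depends only on the current (location, status) percept, never on history.

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules for the two-square vacuum world:
    suck if dirty, otherwise move to the other square."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # Suck
print(reflex_vacuum_agent(('B', 'Clean')))  # Left
```

Three rules cover all 4 possible percepts, which is the compression from 4^T table entries to 4 cases that the slide describes.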
  • 107. Dr S Raguvaran | CINTEL | SRM University 107 Human Reflexes and Learned Responses: • Humans exhibit similar condition–action connections. • Learned responses (e.g., driving) and innate reflexes (e.g., blinking) are part of human behavior. General-Purpose Interpreter: • Allows the creation of interpreters for condition–action rules. • Enhances flexibility by adapting to various task environments. Structure of General Program: • Schematic representation of a general program with condition– action rules. • Provides a framework for connecting percepts to actions in different environments.
  • 108. Dr S Raguvaran | CINTEL | SRM University 108 2. Model-Based Reflex Agents: • Handle partial observability by maintaining an internal state reflecting unobserved aspects. • Effective for tasks like driving, where the agent needs to account for unseen elements. Internal State in Model-Based Agents: • The agent's internal state depends on the percept history and captures unobserved parts of the current state. • Examples include keeping track of other cars, camera frames, and key locations.
  • 110. Dr S Raguvaran | CINTEL | SRM University 110 Updating Internal State: • Requires knowledge of how the world evolves independently and how the agent's actions impact the world. • Knowledge about "how the world works" is implemented as a model of the world. Model of the World: • The agent's knowledge about how the world functions, encoded in Boolean circuits or scientific theories. • Crucial for predicting outcomes and understanding the consequences of the agent's actions.
  • 111. Dr S Raguvaran | CINTEL | SRM University 111 Model-Based Agent Structure: • Depicts how the current percept is combined with the internal state to update the description of the current state. • Utilizes the agent's model of the world to make informed decisions. UPDATE-STATE Function: • Key function in the model-based agent program responsible for updating the internal state. • Integrates current percepts with the internal state based on the agent's model of the world.
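A skeleton mirroring the UPDATE-STATE idea above, sketched for the vacuum world. The "model" here is a deliberately minimal assumption of this example: the internal state just remembers the last known status of each square, which lets the agent act sensibly about the square it currently cannot see.

```python
class ModelBasedVacuumAgent:
    """Model-based reflex agent sketch: internal state captures the
    unobserved square, so behavior depends on percept history."""

    def __init__(self):
        self.state = {'A': 'Unknown', 'B': 'Unknown'}  # internal state

    def update_state(self, percept):
        location, status = percept
        self.state[location] = status      # fold the percept into the model

    def program(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        # Consult the model for the square we are NOT currently observing.
        other = 'B' if location == 'A' else 'A'
        if self.state[other] != 'Clean':
            return 'Right' if location == 'A' else 'Left'
        return 'NoOp'                      # both squares believed clean

agent = ModelBasedVacuumAgent()
print(agent.program(('A', 'Clean')))  # Right: B's status is still unknown
```

Unlike the simple reflex agent, two identical percepts can yield different actions here: ('A', 'Clean') produces 'Right' while B is unknown, but 'NoOp' once the model records B as clean.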
  • 112. Dr S Raguvaran | CINTEL | SRM University 112 3.Goal-Based Agents: Goal-based agents make decisions based on both current environmental states and predefined goals. This dual consideration allows for more flexible and forward-thinking decision-making compared to reflex agents. Example Scenario: At a road junction, a goal-based taxi must decide whether to turn left, right, or go straight based on its intended destination. This decision-making process demonstrates how goals shape actions in complex environments. Components of Goal-Based Agents: Goal-based agents consist of two main components: goal information, defining desirable situations, and a model representing knowledge about the environment. The integration of these elements guides action selection.
  • 114. Dr S Raguvaran | CINTEL | SRM University 114 Action Selection: Decision-making in goal-based agents involves combining current state, goal details, and the model, with action complexity varying from immediate satisfaction to intricate sequences. Consideration of the Future: Unlike reflex agents, goal-based agents explicitly consider the future, anticipating consequences and evaluating actions for alignment with the ultimate goal. Flexibility of Goal-Based Agents: Goal-based agents exhibit flexibility through explicit knowledge representation, allowing easy modifications influenced by environmental changes, such as rain. Adaptability to Changes: The adaptability of goal-based agents shines when facing changes, like altering destinations, contrasting with reflex agents that may require extensive rule rewriting for similar adaptations.
  • 115. Dr S Raguvaran | CINTEL | SRM University 115 Comparison with Reflex Agents: Goal-based agents fundamentally differ from reflex agents by integrating explicit knowledge and future considerations into decision-making, while reflex agents rely solely on condition-action rules. Illustration of Flexibility: An illustration of flexibility involves the agent adjusting behavior in response to rain, updating knowledge of brake effectiveness, highlighting the adaptability of goal-based systems.
  • 116. Dr S Raguvaran | CINTEL | SRM University 116 4. Introduction to Utility-Based Agents: Utility-based agents go beyond goals, incorporating a utility function to assess the desirability of different actions based on factors like speed, safety, reliability, or cost. Utility Function and Performance Measure: The utility function internalizes the performance measure, offering a quantitative measure (utility) rather than a binary one (happy/unhappy). Alignment with the external performance measure ensures rational decision-making. Handling Conflicting Goals: In situations with conflicting goals, the utility function helps specify appropriate trade-offs, allowing the agent to make rational decisions. Multiple Uncertain Goals: When facing multiple uncertain goals, utility provides a way to weigh the likelihood of success against the importance of each goal, enabling rational decision-making.
  • 118. Dr S Raguvaran | CINTEL | SRM University 118 Decision-Making under Uncertainty: Utility-based agents, dealing with partial observability and stochasticity, maximize expected utility by choosing actions that yield the highest average utility, given probabilities and utilities of outcomes. Rationality Constraint: Rational utility-based agents follow a local constraint by maximizing expected utility, turning the global definition of rationality into a program expressing rational-agent designs.
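The "maximize expected utility" rule above is a one-line computation once outcomes are quantified. A sketch with made-up actions, probabilities, and utilities (none of these numbers come from the slides): each action has a distribution over outcomes, and the rational choice is the action with the highest probability-weighted utility.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical taxi-route decision: fast-but-risky vs slow-but-certain.
actions = {
    'shortcut':  [(0.5, 120), (0.5, -20)],   # EU = 50.0
    'main_road': [(1.0, 40)],                # EU = 40.0
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # shortcut 50.0
```

Shifting the numbers shifts the decision, which is how a utility function encodes trade-offs (speed vs. safety) that a binary goal cannot express.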
  • 119. Dr S Raguvaran | CINTEL | SRM University 119 Scenario: Autonomous Vehicle Decision-Making in Traffic Context: Imagine an autonomous vehicle navigating through a busy urban environment. The vehicle's primary goal is to reach its destination safely and efficiently. However, it encounters a challenging scenario where it needs to make a split-second decision due to unexpected circumstances.
  • 120. Dr S Raguvaran | CINTEL | SRM University 120 Goal-Oriented Decision: •Situation: The vehicle is on a tight schedule, heading towards its destination with passengers. •Challenge: A traffic jam has caused a significant delay, jeopardizing the timely arrival at the destination. •Goal-Driven Decision: The autonomous vehicle decides to take a shortcut through a narrow side street, aiming to bypass the traffic and meet the time deadline. Utility-Oriented Decision: •Situation: While navigating through the side street, the autonomous vehicle encounters a complex intersection with pedestrians and cyclists. •Challenge: Choosing the optimal path poses a dilemma between maximizing utility (efficiency) and ensuring safety. •Utility-Driven Decision: The vehicle, prioritizing safety over efficiency, cautiously yields to pedestrians and cyclists, sacrificing some efficiency for a safer route.
• 121. Dr S Raguvaran | CINTEL | SRM University 121 5. Learning Agents Learning in AI: Learning is a preferred method for creating advanced AI systems, allowing them to adapt to unknown environments and become more competent over time. Components of a Learning Agent: A learning agent consists of four main components: Performance Element, Learning Element, Critic, and Problem Generator. Performance Element and Learning Element: The Performance Element selects external actions based on percepts, while the Learning Element makes improvements based on feedback from the Critic.
  • 122. Dr S Raguvaran | CINTEL | SRM University 122 Role of the Critic: The Critic provides feedback on how well the agent is performing with respect to a fixed performance standard, as percepts alone do not indicate the agent's success. Problem Generator's Role in Exploration: The Problem Generator suggests exploratory actions to the agent, allowing it to gather new and informative experiences. This exploration is crucial for discovering better long-term actions.
  • 123. Dr S Raguvaran | CINTEL | SRM University 123 What is the primary limitation of simple reflex agents? •A. Lack of flexibility. •B. Limited memory. •C. Inability to perceive the environment. •D. Dependence on goals. •A. Lack of flexibility. What is the decision-making process of a simple reflex agent based on? A. Goals and utility. B. Current percept only. C. Learning from past experiences. D. Future predictions. •B. Current percept only.
• 124. How does a simple reflex agent respond to changes in the environment? •A. By applying its fixed condition–action rules to the current percept. •B. By considering long-term goals. •C. By utilizing a utility function. •D. By learning from experience. A. By applying its fixed condition–action rules to the current percept. What distinguishes model-based reflex agents from simple reflex agents? •A. They lack a model. •B. They rely solely on goals. •C. They incorporate a model of the environment. •D. They prioritize utility. C. They incorporate a model of the environment.
  • 125. How does a model-based reflex agent handle changes in the environment? •A. It adapts its rules. •B. It learns from experience. •C. It consults its internal model. •D. It ignores environmental changes. C. It consults its internal model. What is the primary focus of a goal-based agent? •A. Reacting to percepts. •B. Learning from experience. •C. Achieving specific objectives. •D. Modeling the environment. C. Achieving specific objectives.
  • 126. How does a goal-based agent make decisions at a road junction? •A. Based on immediate percepts. •B. By learning from past experiences. •C. Considering the ultimate destination. •D. Relying on a utility function. C. Considering the ultimate destination. How does utility-based decision-making differ from goal-based decision-making? •A. Goals involve explicit representation. •B. Utility focuses on achieving objectives. •C. Utility quantifies desirability. •D. Goals consider immediate percepts. C. Utility quantifies desirability.
  • 127. In what situations might utility-based agents outperform goal-based agents? A. In highly dynamic environments. B. When there are conflicting goals. C. When explicit knowledge is crucial. D. In situations with clear, predefined goals. B. When there are conflicting goals. Which component of a learning agent is responsible for suggesting exploratory actions? •A. Performance element. •B. Learning element. •C. Critic. •D. Problem generator. D. Problem generator.
  • 128. CONSTRAINT SATISFACTION PROBLEMS Constraint Satisfaction Problems (CSPs) are a class of problems where the goal is to find a consistent assignment of values to a set of variables, each subject to constraints. Components of a CSP: •Variables: These represent the entities for which we need to find values. Each variable has a domain, which is the set of possible values it can take. •Domains: The domains are the allowed values for each variable. The goal is to find a combination of values that satisfies all constraints. •Constraints: These are restrictions on the possible combinations of values for the variables. Constraints define the relationships between variables and limit the valid assignments.
• 129. A constraint satisfaction problem consists of three components, X, D, and C: 1. X is a set of variables, {X1, . . . , Xn}. 2. D is a set of domains, {D1, . . . , Dn}, one for each variable. 3. C is a set of constraints that specify allowable combinations of values. For example, if X1 and X2 both have the domain {A, B}, then the constraint saying the two variables must have different values can be written explicitly as <(X1, X2), {(A, B), (B, A)}> or, more compactly, as <(X1, X2), X1 != X2>.
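The (X, D, C) formulation above translates directly into code. A sketch of the slide's own example: two variables over domain {A, B} with the constraint X1 != X2, expressed both explicitly (allowed-pairs set) and compactly (a predicate); the variable and container names are choices of this example.

```python
# X: variables; D: one domain per variable.
X = ['X1', 'X2']
D = {'X1': {'A', 'B'}, 'X2': {'A', 'B'}}

# The same constraint in its two notations from the slide:
allowed = {('A', 'B'), ('B', 'A')}            # explicit allowed pairs
constraint = lambda v1, v2: v1 != v2          # compact predicate form

# Enumerate all assignments satisfying the predicate.
solutions = [
    (v1, v2)
    for v1 in D['X1'] for v2 in D['X2']
    if constraint(v1, v2)
]
print(sorted(solutions) == sorted(allowed))  # True: both notations agree
```

Brute-force enumeration like this is only viable for toy domains; the consistency techniques on the following slides exist to prune it.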
• 130. 1.State Space and Solution: • State: Defined by an assignment of values to some or all of the variables, {Xi = vi, Xj = vj, ...}. • Complete Assignment: An assignment in which every variable is given a value. • Partial Assignment: An assignment that gives values to only some of the variables. 2.Consistent and Legal Assignment: • Consistent Assignment: An assignment that does not violate any constraints. • Legal Assignment: Another term for a consistent assignment. 3.Solution to CSP: • Solution: A consistent, complete assignment. • Partial Solution: A consistent, partial assignment. 4.CSP Solving Process: • Define the state space by considering all possible assignments. • Explore the space systematically, considering constraints. • A solution is a consistent, complete assignment that satisfies all constraints.
• 134. 6.Advantages of Formulating as CSP: • CSPs offer a natural representation for a wide range of problems. • Utilizing a CSP-solving system is often more convenient than designing custom solutions using other search techniques. 7.Constraint Propagation Efficiency: • CSP solvers can quickly eliminate large portions of the search space. • Constraint propagation, as seen in the Australia problem, efficiently reduces the number of assignments to be considered. 8.State-Space Search vs. CSPs: • In regular state-space search, the question is binary: Is this specific state a goal or not? • With CSPs, once a partial assignment is found to violate a constraint, every extension of that assignment can be discarded immediately, pruning large parts of the search space. 9.Advantages of CSPs in Search: • Because constraints identify why a partial assignment fails, CSPs support a more nuanced, constructive exploration of possibilities than binary goal-checking in state-space search.
  • 135. 1. Introduction to Node Consistency: •Node consistency is a property of variables in Constraint Satisfaction Problems (CSPs). •In a CSP, variables have domains (possible values they can take) and constraints that define allowable combinations of values. 2. Unary Constraints: •A unary constraint is a constraint on a single variable. •Node consistency focuses on ensuring that all values in a variable's domain satisfy its unary constraints. 3. Example - Australia Map-Coloring Problem: •Consider the Australia map-coloring problem. •Assume South Australians dislike the color green, and we have a variable SA with the initial domain {red, green, blue}.
  • 136. 4. Making a Variable Node-Consistent: •To make the variable SA node-consistent, we eliminate values from its domain that violate its unary constraint. •In this case, we eliminate 'green' because South Australians dislike it, resulting in the reduced domain {red, blue} for SA. 5. Node-Consistent Variable: •A variable is node-consistent if all values in its domain satisfy its unary constraints. •In the example, SA is now node-consistent with the domain {red, blue}. 6. Node-Consistent Network: •A network is node-consistent if every variable in the network is node-consistent. •Achieving node consistency involves iteratively enforcing unary constraints on each variable in the CSP.
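Enforcing node consistency is a direct domain filter. A sketch of the slide's own example: the unary constraint "South Australians dislike green" removes 'green' from SA's domain; the dict-of-predicates representation is an assumption of this example.

```python
# Domains and unary constraints for the Australia map-coloring example.
domains = {'SA': {'red', 'green', 'blue'}}
unary = {'SA': lambda v: v != 'green'}   # unary constraint on SA

def make_node_consistent(domains, unary):
    """Drop every domain value that violates its variable's unary
    constraint, making each constrained variable node-consistent."""
    for var, allowed in unary.items():
        domains[var] = {v for v in domains[var] if allowed(v)}
    return domains

make_node_consistent(domains, unary)
print(sorted(domains['SA']))  # ['blue', 'red']
```

After this pass SA is node-consistent with domain {red, blue}, matching the slide; a network is node-consistent once every variable has been filtered this way.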
• 137. 1. Introduction to Arc Consistency: •Arc consistency is another property of variables in Constraint Satisfaction Problems (CSPs). •In arc consistency, the focus is on ensuring that every value in a variable's domain satisfies its binary constraints with other variables. 2. Binary Constraints: •Binary constraints involve relationships between two variables. •A variable is arc-consistent if every value in its domain satisfies the variable's binary constraints. 3. Example - Constraint Y = X^2: •Consider the constraint Y = X^2 where the domain of both X and Y is the set of digits. •The allowed pairs of the binary constraint are {(0, 0), (1, 1), (2, 4), (3, 9)}. 4. Making a Variable Arc-Consistent: •To make X arc-consistent with respect to Y, reduce X's domain to the values for which some value in Y's domain satisfies the constraint; with digit domains, X's domain shrinks to {0, 1, 2, 3}. •Likewise, making Y arc-consistent with respect to X shrinks Y's domain to {0, 1, 4, 9}. 5. Arc-Consistent Network: •A network is arc-consistent if every variable is arc-consistent with every other variable. •Achieving arc consistency involves enforcing binary constraints between variables in the CSP.
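The two domain reductions in the Y = X^2 example can be computed with a pair of set comprehensions. A sketch assuming digit domains {0, ..., 9}; each step keeps only the values that have a supporting partner in the other variable's domain.

```python
digits = set(range(10))
dx, dy = set(digits), set(digits)

# Make X arc-consistent w.r.t. Y: keep x only if some y satisfies y == x*x.
dx = {x for x in dx if any(y == x * x for y in dy)}
# Make Y arc-consistent w.r.t. X: keep y only if some x satisfies y == x*x.
dy = {y for y in dy if any(y == x * x for x in dx)}

print(sorted(dx))  # [0, 1, 2, 3]
print(sorted(dy))  # [0, 1, 4, 9]
```

Note the direction matters: revising X shrinks only X's domain, and Y must be revised separately, which is exactly why the AC-3 algorithm on the next slides processes arcs (ordered pairs) rather than undirected edges.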
• 139. AC-3 Algorithm: 1. Initialization: •Begin with an initial CSP where each variable has an associated domain of possible values, and there are binary constraints between some pairs of variables. 2. Queue Initialization: •Create a queue containing all the arcs in the CSP initially. An arc is an ordered pair of variables connected by a constraint. 3. Processing Arcs: •While the queue is not empty, pop an arc (Xi, Xj) from the queue. •Revise Xi with respect to Xj (see step 4). If Xi's domain was reduced, add all arcs (Xk, Xi) to the queue, where Xk is a neighbor of Xi and Xk ≠ Xj, since those arcs' consistency may now be broken.
  • 140. 4. Arc Consistency: •For each value in the domain of Xi, check if there is at least one value in the domain of Xj that satisfies the binary constraint on (Xi, Xj). •If not, remove the inconsistent value from the domain of Xi. 5. Repeat until Queue is Empty: •Continue processing arcs until the queue becomes empty. 6. Result: •The AC-3 algorithm updates the domains of variables based on the binary constraints, ensuring that each variable becomes arc-consistent with respect to every other variable.
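The steps above can be sketched as a compact AC-3. One representational assumption: constraints are given as a dict mapping each ordered arc (Xi, Xj) to a predicate on (value-of-Xi, value-of-Xj), so both directions of a binary constraint appear as separate arcs.

```python
from collections import deque

def revise(domains, xi, xj, constraint):
    """Remove from Xi's domain every value with no supporting value in
    Xj's domain; return True if anything was removed."""
    removed = False
    for x in set(domains[xi]):
        if not any(constraint(x, y) for y in domains[xj]):
            domains[xi].discard(x)
            removed = True
    return removed

def ac3(domains, constraints):
    """constraints: dict mapping arc (Xi, Xj) -> predicate(vi, vj)."""
    queue = deque(constraints)                  # all arcs initially
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraints[(xi, xj)]):
            if not domains[xi]:
                return False                    # wiped-out domain: no solution
            # Re-examine arcs pointing at Xi, whose support may be gone.
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# Usage on the Y = X^2 example over digit domains:
domains = {'X': set(range(10)), 'Y': set(range(10))}
constraints = {
    ('X', 'Y'): lambda x, y: y == x * x,
    ('Y', 'X'): lambda y, x: y == x * x,
}
print(ac3(domains, constraints), sorted(domains['X']), sorted(domains['Y']))
# True [0, 1, 2, 3] [0, 1, 4, 9]
```

Returning False on an emptied domain is the early-exit mentioned implicitly in step 4: once any variable has no values left, the CSP has no solution.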
• 141. Example with Inconsistent Values: Consider a CSP over variables A, B, C, and D, each with initial domain {1, 2, 3}, and arcs {(A, B), (B, C), (C, D)}. Let's apply AC-3: 1.Initialization: 1. Arcs: {(A, B), (B, C), (C, D)}; domains {1, 2, 3} for A, B, C, and D. 2.Queue Initialization: 1. Queue: {(A, B), (B, C), (C, D)} 3.Processing Arcs: 1. Process (A, B): No changes. 2. Process (B, C): No changes. 3. Process (C, D): the constraint on (C, D) prunes some values (detailed on the next slide).
• 142. 1.Arc Consistency: 1. Check (A, B): No inconsistent values. 2. Check (B, C): No inconsistent values. 3. Check (C, D): Suppose the constraint on (C, D) allows only the pairs {(1, 3), (2, 3)}. 1.Domain of C = {1, 2, 3}, Domain of D = {1, 2, 3}. 2.C = 1 and C = 2 are supported by D = 3, but C = 3 has no matching value in D's domain. 3.Remove 3 from the domain of C, leaving C = {1, 2}. 4. Check the reverse arc (D, C): the allowed pairs are (3, 1) and (3, 2), so D = 1 and D = 2 have no support and are removed, leaving D = {3}. 2.Repeat until Queue is Empty: 1. Revising C re-queues the arc (B, C); it causes no further changes, and the queue empties. 3.Result: 1. The domains become {1, 2} for C and {3} for D; A and B keep {1, 2, 3}.
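The worked example above can be replayed with a compact AC-3 implementation (a sketch assuming constraints are given as explicit sets of allowed value pairs, one set per directed arc):

```python
from collections import deque

def ac3(domains, constraints):
    """constraints maps each directed arc (Xi, Xj) to its set of allowed
    (vi, vj) pairs. Returns False if some domain is wiped out."""
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        allowed = constraints[(xi, xj)]
        unsupported = {vi for vi in domains[xi]
                       if not any((vi, vj) in allowed for vj in domains[xj])}
        if unsupported:
            domains[xi] -= unsupported
            if not domains[xi]:
                return False
            # Revising Xi may break arc consistency of its other neighbors.
            queue.extend((xk, xl) for (xk, xl) in constraints
                         if xl == xi and xk != xj)
    return True

# The example: A, B, C, D with domains {1, 2, 3}; (A, B) and (B, C) allow
# every pair, while (C, D) allows only {(1, 3), (2, 3)}.
full = {(a, b) for a in (1, 2, 3) for b in (1, 2, 3)}
cd = {(1, 3), (2, 3)}
constraints = {
    ("A", "B"): full, ("B", "A"): full,
    ("B", "C"): full, ("C", "B"): full,
    ("C", "D"): cd,
    ("D", "C"): {(d, c) for (c, d) in cd},  # same constraint, reversed arc
}
domains = {v: {1, 2, 3} for v in "ABCD"}
ac3(domains, constraints)
print(domains["C"], domains["D"])  # {1, 2} {3}
```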
• 143. 1. Introduction to Path Consistency: •Path consistency is a stronger notion of consistency than arc consistency. •While arc consistency tightens domains using binary constraints on pairs of variables, path consistency looks at triples of variables to infer implicit constraints. 2. Motivation - Limitation of Arc Consistency: •In certain CSPs, arc consistency does not provide enough inference. For example, in the Australia map-coloring problem with only two colors allowed, every variable is already arc-consistent, yet the problem has no solution - arc consistency alone cannot detect this. 3. Path Consistency Definition: •A two-variable set {Xi, Xj} is path-consistent with respect to a third variable Xm if, for every assignment {Xi = a, Xj = b} consistent with the constraints on {Xi, Xj}, there is an assignment to Xm that satisfies the constraints on {Xi, Xm} and {Xm, Xj}.
• 144. 4. Path Consistency Example - Australia Map Coloring: •Consider the Australia map-coloring problem with only two colors allowed (red and blue). •Make the set {WA, SA} path-consistent with respect to NT. •Enumerate the consistent assignments: {WA = red, SA = blue} and {WA = blue, SA = red}. •Analyze the impact on NT: in both assignments, NT is adjacent to both WA and SA, so it can be neither red nor blue. •Both assignments are eliminated, leaving {WA, SA} with no consistent assignments - path consistency proves that the two-color problem has no solution.
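The elimination argument above is small enough to check by brute force (a sketch that encodes only the three regions WA, SA, NT and the pairwise inequality constraints between them):

```python
# Two-color Australia fragment: WA, SA, and NT are mutually adjacent.
colors = ["red", "blue"]

surviving = []
for wa in colors:
    for sa in colors:
        if wa == sa:                        # WA and SA share a border
            continue
        # Path consistency w.r.t. NT: some NT color must differ from both.
        if any(nt != wa and nt != sa for nt in colors):
            surviving.append((wa, sa))

print(surviving)  # [] -- both assignments to {WA, SA} are eliminated
```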
  • 145. Cryptarithmetic puzzles 1.Nature of the Problem: 1. Cryptarithmetic puzzles involve assigning digits to letters in a mathematical expression. 2. The puzzle typically consists of an arithmetic equation where digits are replaced by letters. 2.Constraint Satisfaction Problem (CSP): 1. Cryptarithmetic puzzles can be formulated as CSPs, where the goal is to find a valid assignment of digits to letters that satisfies specific constraints.
• 146. 1.Goal: 1. The objective is to find a consistent assignment of digits to letters that makes the arithmetic equation true. 2.Constraints: 1. Each letter represents a digit from 0 to 9, and the leading letter of each word cannot be 0. 2. No two letters can represent the same digit (Alldiff constraint). 3.Alldiff Constraint: 1. The Alldiff constraint ensures that all variables (letters) must have different values. 2. It prevents the repetition of digits within the set of variables.
  • 147. S E N D + M O R E --------- M O N E Y Example: •An example of a cryptarithmetic puzzle is:
  • 148. Global Constraint in CSPs: •A global constraint involves an arbitrary number of variables in a Constraint Satisfaction Problem (CSP). •The term "global" doesn't necessarily mean involving all variables but refers to constraints that go beyond unary or binary constraints. Alldiff Constraint: •The Alldiff constraint (All Different) is a common global constraint used in CSPs. •It ensures that all variables involved in the constraint must have different values. Application in Cryptarithmetic Puzzles: • In cryptarithmetic puzzles, the Alldiff constraint is applied to the set of variables representing the letters {S, E, N, D, M, O, R, Y}. • It ensures that each letter represents a different digit, and no two letters can have the same digit. Example Illustration: • The Alldiff constraint is illustrated with an example where values are assigned to variables: S=9, E=5, N=6, D=7, M=1, O=0, R=8, Y=2. • This assignment satisfies the Alldiff constraint as each digit appears only once.
• 150. How to Solve this Puzzle? Solving cryptarithmetic puzzles involves exploring digit assignments to the letters until the constraints are satisfied. The following is the solution for the example puzzle: 9 5 6 7 + 1 0 8 5 --------- 1 0 6 5 2
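A brute-force solver makes this search explicit (an illustrative sketch, not the only way to solve the puzzle; it tries digit permutations and checks the equation and the leading-zero rule, with Alldiff guaranteed by using permutations):

```python
from itertools import permutations

def solve_send_more_money():
    # Try digits in descending order so the S = 9 solution is found quickly.
    for s, e, n, d, m, o, r, y in permutations(range(9, -1, -1), 8):
        if s == 0 or m == 0:               # leading letters cannot be zero
            continue
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return send, more, money
    return None

print(solve_send_more_money())  # (9567, 1085, 10652)
```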
• 151. T W O + T W O --------- F O U R 7 3 4 + 7 3 4 --------- 1 4 6 8 T – 7 W – 3 O – 4 F – 1 U – 6 R – 8
• 152. C R O S S + R O A D S ------------- D A N G E R 9 6 2 3 3 + 6 2 5 1 3 --------------- 1 5 8 7 4 6 C – 9 R – 6 O – 2 S – 3 A – 5 D – 1 N – 8 G – 7 E – 4
  • 153. Representing CSP as a Search Problem Room Painting CSP Problem Statement You are tasked with painting the rooms in a house, and certain constraints must be adhered to in order to create a harmonious color scheme. Each room needs to be assigned one of three colors: Red, Green, or Blue. The goal is to find a valid assignment of colors to rooms while satisfying the following constraints: Adjacent Rooms Constraint: No two adjacent rooms can be painted with the same color. Master Bedroom Constraint: The color of the Master Bedroom must be different from the other bedrooms. Living Room and Guest Room Constraint: The Living Room and the Guest Room must be painted with distinct colors. Neutral Rooms Constraint: Washrooms, the Store Room, and the Study Room must be painted with neutral colors (e.g., White).
  • 154. Living Room --- Bedroom 1 | | | | Washroom 1 Bedroom 2 | | | | Study Room --- Master Bedroom | | | | Washroom 2 Guest Room | Store Room
  • 156. Step 4: Explore MasterBedroom colors: S4​={(LivingRoom,Red),(Bedroom1,Green),(Bedroom2,Red),(MasterBedroom,Blue ),…} Step 5: Explore other rooms based on constraints: S5​={(LivingRoom,Red),(Bedroom1,Green),(Bedroom2,Red),(MasterBedroom,Blue ),(Washroom1,?),…}
• 157. 1.Explore State Space: Begin by choosing a color for the first room and exploring possible color assignments for subsequent rooms. 2.Backtracking: If a constraint is violated, backtrack to the previous decision point and explore alternative color assignments. 3.Continue Exploration: Iterate through the state space, making choices and backtracking as needed, until a complete assignment satisfying all constraints is found or all possibilities are exhausted. 4.Complexity: The state space grows exponentially with the number of rooms, so heuristics for choosing which room and color to try next can greatly reduce the search.
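The loop described above can be sketched as a recursive backtracking search (a minimal version that keeps only the adjacent-rooms constraint; the room names and adjacency follow the floor plan on the earlier slide, and the remaining constraints could be added as extra checks in `consistent`):

```python
COLORS = ["Red", "Green", "Blue"]
ADJACENT = [
    ("LivingRoom", "Bedroom1"), ("LivingRoom", "Washroom1"),
    ("Bedroom1", "Bedroom2"), ("Washroom1", "StudyRoom"),
    ("Bedroom2", "MasterBedroom"), ("StudyRoom", "MasterBedroom"),
    ("StudyRoom", "Washroom2"), ("MasterBedroom", "GuestRoom"),
    ("Washroom2", "StoreRoom"),
]
ROOMS = sorted({r for pair in ADJACENT for r in pair})

def neighbors(room):
    return [b if a == room else a for a, b in ADJACENT if room in (a, b)]

def consistent(room, color, assignment):
    return all(assignment.get(n) != color for n in neighbors(room))

def backtrack(assignment):
    if len(assignment) == len(ROOMS):           # complete assignment found
        return dict(assignment)
    room = next(r for r in ROOMS if r not in assignment)
    for color in COLORS:
        if consistent(room, color, assignment):
            assignment[room] = color
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[room]                # undo the choice and backtrack
    return None                                 # every color failed here

solution = backtrack({})
print(solution is not None)  # True -- three colors suffice for this layout
```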
  • 158. BACKTRACKING SEARCH FOR CSPS 1. Backtracking Search: •Backtracking is a depth-first search algorithm used to solve CSPs. •States represent partial assignments, and actions involve adding a variable with a value to the assignment. •The naive approach has a high branching factor, resulting in an impractical search tree. 2. Commutativity in CSPs: •CSPs exhibit commutativity, meaning the order of applying actions does not affect the outcome. •Utilizing commutativity, backtracking search focuses on one variable at a time instead of exploring all variable assignments simultaneously. 3. Backtracking Algorithm: •Backtracking search chooses one variable at a time, backtracking when a variable has no legal values left. •It maintains a single representation of a state and alters that representation instead of creating new ones.
  • 159. 4. Improving Performance without Heuristics: •Unlike uninformed search algorithms, backtracking can efficiently solve CSPs without domain-specific heuristic functions. •Backtracking-Search works without the need for a domain-specific initial state, action function, transition model, or goal test. 5. Variable and Value Ordering: •Variable Selection (Select-Unassigned-Variable): • Commonly, Minimum Remaining Values (MRV) heuristic is used, choosing the variable with the fewest legal values. • Another approach is the Degree Heuristic, selecting the variable involved in the most constraints on other unassigned variables. •Value Ordering (Order-Domain-Values): • Least Constraining Value Heuristic can be effective, preferring the value that rules out the fewest choices for neighboring variables.
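The two ordering heuristics can be sketched as small functions (illustrative; `domains` maps each variable to its remaining legal values, `neighbors` maps each variable to its constraint neighbors, and the "rules out" count assumes simple equality conflicts as in map coloring):

```python
def select_unassigned_variable(domains, assignment):
    """Minimum Remaining Values: pick the unassigned variable whose domain
    has the fewest legal values left."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def order_domain_values(var, domains, neighbors):
    """Least Constraining Value: try first the value that would rule out
    the fewest choices in neighboring domains."""
    def ruled_out(value):
        return sum(value in domains[n] for n in neighbors.get(var, []))
    return sorted(domains[var], key=ruled_out)

domains = {"A": {1, 2, 3}, "B": {2}, "C": {2, 3}}
neighbors = {"C": ["B"]}
print(select_unassigned_variable(domains, {}))       # 'B' -- only one value left
print(order_domain_values("C", domains, neighbors))  # [3, 2] -- 3 conflicts with nothing
```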
  • 160. 6. Interleaving Search and Inference: •Forward Checking: • After assigning a variable, forward checking establishes arc consistency by deleting inconsistent values from neighboring variables. • Detects some inconsistencies during the search, leading to more efficient pruning of the search tree. •Maintaining Arc Consistency (MAC): • MAC is an algorithm that calls AC-3 after a variable assignment, making only the relevant arcs arc-consistent. • More powerful than forward checking as it can detect inconsistencies that forward checking may miss.
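Forward checking can be sketched as a single pruning function (an illustrative version assuming inequality constraints between neighbors, as in map coloring; returning the pruned values lets the caller restore them when it backtracks):

```python
def forward_check(var, value, domains, neighbors):
    """After assigning `value` to `var`, delete it from each neighbor's
    domain. Returns the list of pruned (variable, value) pairs (possibly
    empty), or None if some neighbor's domain is wiped out."""
    pruned = []
    for n in neighbors.get(var, []):
        if value in domains[n]:
            domains[n].discard(value)
            pruned.append((n, value))
            if not domains[n]:             # wipe-out: fail early
                for m, v in pruned:        # undo the pruning before failing
                    domains[m].add(v)
                return None
    return pruned

# If SA can only be red, assigning red to neighboring NT must fail.
domains = {"SA": {"red"}, "NT": {"red", "blue"}}
neighbors = {"NT": ["SA"]}
result = forward_check("NT", "red", domains, neighbors)
print(result)            # None -- SA's domain would be wiped out
print(domains["SA"])     # {'red'} (the pruning was undone)
```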
• 161. Intelligent Backtracking and Conflict-Directed Backjumping: 1. Chronological Backtracking: •Traditional backtracking revisits the most recent decision point when a branch fails. •In chronological backtracking, the search proceeds in a fixed variable order and always undoes the last assignment first. 2. Conflict Sets: •A conflict set is a set of assignments that together rule out every value for a variable. •In the Australia map-coloring example, {Q=red, NSW=green, V=blue} is a conflict set for SA. 3. Backjumping: •Backjumping backtracks to the most recent assignment in the conflict set when a failure occurs. •For example, if SA fails with that conflict set, backjump to the most recent assignment in the set (e.g., V) and try a new value.
  • 162. 4. Relation to Forward Checking: •Forward checking can naturally provide conflict sets during the search. •Whenever a value is deleted from a variable's domain, add the assignment to the conflict set. 5. Redundancy of Simple Backjumping: •Backjumping occurs when every value in a domain conflicts with the current assignment. •Forward checking detects and prevents such scenarios, making simple backjumping redundant. 6. Conflict-Directed Backjumping: •Despite redundancy, the idea of backjumping based on reasons for failure is valuable. •In conflict-directed backjumping, the conflict set for a variable is not only the immediate conflicting variables but also those that caused subsequent variables to have no consistent solution. •It goes beyond the simple conflict set and considers the deeper set of preceding variables that led to the failure of a branch.