Artificial Intelligence and
Machine Learning
Architecture of Agent
• Architecture is the machinery that the agent
executes on.
• An agent program is an implementation of an
agent function.
• An agent function is a map from the percept
sequence to an action.
Characteristics of an Agent
• Intelligent personal assistants
• Autonomous robots
• Gaming agents
• Fraud detection agents
• Traffic management agents, etc.
Types of Agents
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
• Multi-Agent Systems
• Hierarchical Agents
Simple Reflex Agents
• Act only on the basis of the current percept.
• If the condition is true, the action is taken;
otherwise it is not.
• This agent function succeeds only when the
environment is fully observable.
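A minimal sketch of such a condition-action mapping in Python, using the familiar two-square vacuum-cleaner world; the percept format and the rules below are illustrative assumptions, not part of the slides:

    # Simple reflex agent for a hypothetical two-square vacuum world.
    # The action depends only on the current percept (location, status).
    def simple_reflex_vacuum_agent(percept):
        location, status = percept
        if status == 'Dirty':      # rule: dirty square -> clean it
            return 'Suck'
        elif location == 'A':      # rule: clean and at A -> move right
            return 'Right'
        else:                      # rule: clean and at B -> move left
            return 'Left'

    print(simple_reflex_vacuum_agent(('A', 'Dirty')))   # Suck
    print(simple_reflex_vacuum_agent(('B', 'Clean')))   # Left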
Problems with simple reflex agents
• Very limited intelligence.
• No knowledge of the non-perceptual parts of
the state.
• The rule set is usually too big to generate and
store.
• If any change occurs in the environment, the
collection of rules needs to be updated.
Model-Based Reflex Agents
• A model-based agent can handle partially
observable environments.
• The agent keeps track of an internal state, which
is adjusted by each percept and depends on the
percept history.
• Updating the state requires information about:
• How the world evolves independently of the agent.
• How the agent’s actions affect the world.
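A sketch of this idea in Python; the state-update function and the (condition, action) rule format are assumptions made for illustration, not a fixed API:

    # Model-based reflex agent: keeps an internal state that summarises the
    # percept history, so it can act in a partially observable environment.
    class ModelBasedReflexAgent:
        def __init__(self, update_state, rules):
            self.state = {}                    # the agent's internal model of the world
            self.last_action = None
            self.update_state = update_state   # how the world evolves / how actions affect it
            self.rules = rules                 # list of (condition, action) pairs

        def act(self, percept):
            # 1. Fold the new percept and the last action into the internal state.
            self.state = self.update_state(self.state, self.last_action, percept)
            # 2. Pick the first rule whose condition matches the internal state.
            for condition, action in self.rules:
                if condition(self.state):
                    self.last_action = action
                    return action
            self.last_action = 'NoOp'
            return 'NoOp'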
Goal-Based Agents
• Goal-based agents take decisions based on how far
they currently are from their goal (a description of
a desirable situation).
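A toy sketch of this idea; the number-line world, result function and distance measure below are invented purely for illustration:

    # Goal-based agent on a number line: it simulates each available action
    # and chooses the one whose predicted result is closest to the goal.
    def goal_based_action(state, goal, actions, result):
        return min(actions, key=lambda a: abs(result(state, a) - goal))

    # Example: at position 2, the goal is position 5; 'right' adds 1, 'left' subtracts 1.
    step = lambda s, a: s - 1 if a == 'left' else s + 1
    print(goal_based_action(2, 5, ['left', 'right'], step))   # right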
Utility-Based Agents
• They choose actions based on a preference (utility) for
each state.
• Utility describes how “happy” the agent is.
• Sometimes achieving the desired goal is not enough;
we may look for a quicker, safer, cheaper trip to reach a
destination.
• Agent happiness should be taken into consideration.
• A utility function maps a state onto a real number which
describes the associated degree of happiness.
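A small sketch of such a utility function in Python; the trip attributes and the weights are made-up illustrations of trading off speed, safety and cost:

    # Utility-based choice among routes: the utility function maps each
    # candidate state (trip) to a real number, and the agent picks the maximum.
    def utility(trip):
        # Higher utility = shorter, safer, cheaper (the weights are assumptions).
        return -2.0 * trip['hours'] - 5.0 * trip['risk'] - 1.0 * trip['cost']

    trips = {
        'highway':   {'hours': 1.0, 'risk': 0.3, 'cost': 4.0},
        'back_road': {'hours': 1.5, 'risk': 0.1, 'cost': 1.0},
    }
    best = max(trips, key=lambda name: utility(trips[name]))
    print(best)   # back_road under these particular weights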
Learning Agent
• A learning agent can learn from its past experiences; it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt
automatically through learning.
• A learning agent has four main conceptual components:
• Learning element: responsible for making improvements by learning
from the environment.
• Critic: the learning element takes feedback from the critic, which describes
how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions
that will lead to new and informative experiences.
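A skeleton showing how the four components could fit together; every class and method name here is an illustrative assumption, not a standard interface:

    # Skeleton of a learning agent wiring its four conceptual components together.
    class LearningAgent:
        def __init__(self, performance_element, learning_element, critic, problem_generator):
            self.performance_element = performance_element  # selects external actions
            self.learning_element = learning_element        # improves the performance element
            self.critic = critic                            # rates behaviour vs. a fixed standard
            self.problem_generator = problem_generator      # proposes exploratory actions

        def step(self, percept):
            feedback = self.critic(percept)                             # how well are we doing?
            self.learning_element(self.performance_element, feedback)   # learn from the feedback
            exploratory = self.problem_generator(percept)               # maybe try something new
            return exploratory if exploratory else self.performance_element(percept)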
Multi-Agent Systems
• A multi-agent system (MAS) is a system
composed of multiple interacting agents that are
designed to work together to achieve a common
goal.
• These agents may be autonomous or semi-
autonomous and are capable of perceiving their
environment, making decisions, and taking action
to achieve the common objective.
• MAS can be used in a variety of applications,
including transportation systems, robotics, and
social networks.
Classification of MAS
• MAS can be classified into different types based on
their characteristics, such as whether the agents
have the same or different goals, whether the agents
are cooperative or competitive, and whether the
agents are homogeneous or heterogeneous.
• In a homogeneous MAS, all the agents have the
same capabilities, goals, and behaviors.
• In contrast, in a heterogeneous MAS, the agents have
different capabilities, goals, and behaviors.
Hierarchical Agents
• These agents are organized into a hierarchy, with
high-level agents overseeing the behavior of
lower-level agents.
• The high-level agents provide goals and
constraints, while the low-level agents carry out
specific tasks.
• Hierarchical agents are particularly useful in complex
environments with many tasks and sub-tasks that need
to be coordinated and prioritized.
Agent Environment
• An environment in artificial intelligence is the
surrounding of the agent.
– Fully Observable vs Partially Observable
– Deterministic vs Stochastic
– Competitive vs Collaborative
– Single-agent vs Multi-agent
– Static vs Dynamic
– Discrete vs Continuous
– Episodic vs Sequential
– Known vs Unknown
Fully Observable vs Partially
Observable
• When an agent’s sensors can access the complete state of
the environment at each point in time, the environment is said
to be fully observable; otherwise it is partially observable.
• An environment is called unobservable when the agent has
no sensors at all.
• Examples:
– Chess – the board is fully observable, and so are the opponent’s
moves.
– Driving – the environment is partially observable because
what’s around the corner is not known.
Deterministic vs Stochastic
• When the agent’s current state and selected action
completely determine the next state of the environment,
the environment is said to be deterministic.
• A stochastic environment is random in nature; the next
state is not unique and cannot be completely
determined by the agent.
• Examples:
– Chess – there are only a limited number of possible moves for a
piece in the current state, and the result of each move can be
determined.
– Self-driving cars – the outcomes of a self-driving car’s actions are
not unique; they vary from time to time.
Competitive vs Collaborative
• An agent is said to be in a competitive environment when
it competes against another agent to optimize the
output.
• The game of chess is competitive as the agents compete
with each other to win the game which is the output.
• An agent is said to be in a collaborative environment
when multiple agents cooperate to produce the desired
output.
• When multiple self-driving cars are found on the roads,
they cooperate with each other to avoid collisions and
reach their destination which is the output desired.
Single-agent vs Multi-agent
• An environment consisting of only one agent is
said to be a single-agent environment.
• A person left alone in a maze is an example of
the single-agent system.
• An environment involving more than one agent
is a multi-agent environment.
• The game of football is multi-agent as it involves
11 players in each team.
Dynamic vs Static
• An environment that keeps changing while the agent
is deciding on or performing an action is said to be
dynamic.
• A roller coaster ride is dynamic, as it is set in
motion and the environment keeps changing
every instant.
• An idle environment with no change in its state is
called a static environment.
• An empty house is static, as there is no change in
the surroundings when an agent enters.
Discrete vs Continuous
• If an environment consists of a finite number of actions
that can be performed in it to obtain the output, it is said
to be a discrete environment.
• The game of chess is discrete as it has only a finite
number of moves. The number of moves might vary with
every game, but it is still finite.
• An environment in which the possible actions cannot be
enumerated, i.e. is not discrete, is said to be continuous.
• Self-driving cars are an example of continuous
environments, as their actions (steering, accelerating,
braking) range over continuous values and cannot be
enumerated.
Episodic vs Sequential
• In an Episodic task environment, each of the agent’s actions
is divided into atomic incidents or episodes. There is no
dependency between current and previous incidents. In each
incident, an agent receives input from the environment and
then performs the corresponding action.
• Example: consider a pick-and-place robot used to detect
defective parts coming off a conveyor belt. Each time, the
robot (agent) makes its decision based only on the current
part; there is no dependency between current and previous
decisions.
• In a sequential environment, previous decisions can affect
all future decisions. The next action of the agent depends on
what action it has taken previously and what action it is
supposed to take in the future.
Known vs Unknown
• In a known environment, the output for all
probable actions is given.
• In an unknown environment, the agent has to gain
knowledge about how the environment works
before it can make decisions.
PEAS
• PEAS stands for Performance measure, Environment,
Actuators and Sensors.
• PEAS is used to categorize similar agents
together.
• Rational Agent: The rational agent considers all
possibilities and chooses to perform a highly
efficient action. For example, it chooses the
shortest path with the lowest cost for high efficiency.
PEAS
• Performance Measure: the unit used to define the success of an
agent. Performance varies between agents based on their different
percepts.
• Environment: Environment is the surrounding of an agent at every
instant. It keeps changing with time if the agent is set in motion.
There are 5 major types of environments:
– Fully Observable & Partially Observable
– Episodic & Sequential
– Static & Dynamic
– Discrete & Continuous
– Deterministic & Stochastic
• Actuator: an actuator is the part of the agent that delivers the output
of an action to the environment.
• Sensor: sensors are the receptive parts of an agent that take in
input for the agent.
PEAS examples
• Hospital Management System
– Performance measure: patient’s health, admission process, payment
– Environment: hospital, doctors, patients
– Actuators: prescription, diagnosis, scan report
– Sensors: symptoms, patient’s response
• Automated Car Driver
– Performance measure: comfortable trip, safety, maximum distance
– Environment: roads, traffic, vehicles
– Actuators: steering wheel, accelerator, brake, mirror
– Sensors: camera, GPS, odometer
• Subject Tutoring
– Performance measure: maximize scores, improvement in students
– Environment: classroom, desk, chair, board, staff, students
– Actuators: smart displays, corrections
– Sensors: eyes, ears, notebooks
• Part-picking robot
– Performance measure: percentage of parts in correct bins
– Environment: conveyor belt with parts; bins
– Actuators: jointed arm and hand
– Sensors: camera, joint angle sensors
• Satellite image analysis system
– Performance measure: correct image categorization
– Environment: downlink from orbiting satellite
– Actuators: display of scene categorization
– Sensors: color pixel arrays
Reasoning:
• Reasoning can be defined as the logical
process of drawing conclusions, making
predictions or constructing approaches
towards a particular thought with the help of
existing knowledge.
Deductive Reasoning:
• Deductive Reasoning is the strategic approach
that uses available facts, information or
knowledge to draw valid conclusions.
• Examples: People who are aged 20 or above
are active users of the internet.
• Out of the total number of students present in
the class, the proportion of boys is higher than
that of girls.
Inductive Reasoning:
• It starts from a set of specific facts or observations.
• Inductive reasoning is associated with the
hypothesis-generating approach rather than with
drawing any particular conclusion.
• Examples:
• All the students present in the classroom are
from London.
• Always the hottest temperature is recorded in
Death Valley.
Common Sense Reasoning:
• Common sense reasoning is the most common
type of reasoning in daily-life events.
• It is the type of reasoning that comes from
experience.
• When the agent later faces a similar type of
situation, it uses its previous experiences to draw
a conclusion.
Monotonic Reasoning:
• It uses facts, information and knowledge to
draw a conclusion about the problem; once a
conclusion is drawn, it remains valid even if
new information is added.
– The Sahara desert is one of the most
spectacular deserts in the world.
– One of the longest rivers in the world is the Nile
River.
Abductive Reasoning:
• It begins with an incomplete set of facts,
information and knowledge and then
proceeds to find the most plausible
explanation and conclusion.
• It draws conclusions based on the facts known
at present rather than on outdated facts and
information.
Logic
• Logic can be defined as the proof or
validation behind any reason provided
• Logic, as per the definition of the Oxford
dictionary, is "the reasoning conducted
or assessed according to strict
principles of validity".
Propositional Logic
A proposition is basically a declarative sentence that has a truth value.
1. The truth value can be either true or false, but the sentence must be assignable
one of the two values and must not be ambiguous.
2. The purpose of using propositional logic is to analyze a statement,
individually or compositely.
For example, the following statements:
• If x is real, then x² > 0
• What is your name?
• (a+b)² = 100
• This statement is false.
• This statement is true.
are not propositions because they do not have a definite truth value;
they are ambiguous.
Propositional Logic
But the following statements:
1. (a+b)² = a² + 2ab + b²
2. If x is real, then x² >= 0
3. If x is real, then x² < 0
4. The sun rises in the east.
5. The sun rises in the west.
are all propositions because each has a specific
truth value, true or false.
• The branch of logic that deals with propositions is
propositional logic.
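Because propositions are just true/false values, compound statements can be checked mechanically. A short Python sketch that tabulates (p AND q) -> p over all truth assignments; this particular compound statement is chosen only for illustration and happens to be a tautology:

    # Enumerate all truth assignments and evaluate the compound proposition
    # (p AND q) -> p; an implication p -> q is equivalent to (not p) or q.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    for p, q in product([True, False], repeat=2):
        print(p, q, implies(p and q, p))   # the last column is True in every row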
Predicate Logic
Predicates are properties, additional information used to better
express the subject of a sentence. When values are assigned to the
variables of a predicate (or the variables are quantified), the
predicate becomes a proposition.
For example :
• In P(x) : x>5, x is the subject or the variable and ‘>5’ is the
predicate.
• P(7) : 7>5 is a proposition where we are assigning values to
the variable x, and it has a truth value, i.e. True.
• The set of values that the variables of the predicate can
assume is called the Universe or Domain of Discourse or
Domain of Predicate.
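The P(x) : x > 5 example can be written directly in Python; assigning a value from the domain of discourse turns the predicate into a proposition, and quantifying over a finite domain gives "for all" / "there exists" statements (the domain {1, ..., 10} below is an illustrative choice):

    # The predicate P(x): x > 5 as a Boolean-valued function of x.
    P = lambda x: x > 5

    print(P(7))   # True  -- the proposition "7 > 5"
    print(P(3))   # False -- the proposition "3 > 5"

    # Quantifying over a finite domain of discourse {1, ..., 10}.
    domain = range(1, 11)
    print(all(P(x) for x in domain))   # "for all x, x > 5"       -> False
    print(any(P(x) for x in domain))   # "there exists x, x > 5"  -> True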
Forward Chaining
• Forward chaining is a method of reasoning in artificial intelligence in
which inference rules are applied to existing data to extract additional
data until an endpoint (goal) is achieved.
Forward Chaining Steps
• In the first step, the system is given one or more constraints (initial facts).
• The rules in the knowledge base are then searched for each constraint;
the rules whose condition (the IF part) is fulfilled are selected.
• Each selected rule produces new conditions from its conclusion;
as a result, the THEN part is added to the existing facts.
• The added conditions are processed again by repeating step 2. The
process ends when no new conditions can be derived.
Properties of forward chaining
• The process uses a bottom-up approach.
• It starts from an initial state and uses facts to reach a conclusion.
• This approach is data-driven.
• It is employed in expert systems and production rule systems.
Examples of forward chaining
• A simple example of forward chaining can be explained by the following
sequence:
• A
• A -> B
• B
• A is the starting point. A -> B represents a fact (rule). This fact is used to reach
the decision B.
• A practical example goes as follows:
• Tom is running (A)
• If a person is running, he will sweat (A -> B)
• Therefore, Tom is sweating. (B)
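A minimal sketch of these steps in Python, using the Tom example; the (premises, conclusion) rule format is an illustrative assumption:

    # Forward chaining: repeatedly fire rules whose IF part is satisfied by the
    # known facts, adding their THEN part, until nothing new can be derived.
    def forward_chaining(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)   # THEN part joins the existing facts
                    changed = True
        return facts

    rules = [({'Tom is running'}, 'Tom is sweating')]    # A -> B
    print(forward_chaining({'Tom is running'}, rules))   # {'Tom is running', 'Tom is sweating'}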
Backward Chaining
• Backward chaining is a concept in artificial intelligence that involves
backtracking from the endpoint or goal to the steps that led to the
endpoint.
• This type of chaining starts from the goal and moves backward to
comprehend the steps that were taken to attain this goal.
Backward Chaining Steps
• First, the goal state and the rules are selected, where the
goal state resides in the THEN part as the conclusion.
• From the IF part of the selected rule, the sub-goals that must
be satisfied for the goal state to be true are derived.
• Set the initial conditions required to satisfy all the sub-goals.
• Verify whether the provided initial state matches the
established states. If it fulfils the condition, the goal is
the solution; otherwise another goal state is selected.
Properties of backward chaining
• The process uses a top-down approach.
• It is a goal-driven method of reasoning.
• The endpoint (goal) is subdivided into sub-goals to prove the truth of
facts.
• A backward chaining algorithm is employed in inference engines,
game theory, and complex database systems.
• The modus ponens inference rule is used as the basis for the backward
chaining process. This rule states that if both the conditional statement
(p -> q) and the antecedent (p) are true, then we can infer the
consequent (q).
Example of backward chaining
• The information provided in the previous example (forward chaining) can be used
to give a simple explanation of backward chaining, with the following sequence:
• B
• A -> B
• A
• B is the goal or endpoint, which is used as the starting point for backward tracking.
A is the initial state. A -> B is a fact (rule) that must hold to arrive at the endpoint B.
• A practical example of backward chaining goes as follows:
• Tom is sweating (B).
• If a person is running, he will sweat (A -> B).
• Tom is running (A).
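The same rule format can drive a minimal backward-chaining sketch: start from the goal and recursively try to establish the premises that lead to it (this simple version assumes the rule set has no cycles):

    # Backward chaining: a goal holds if it is a known fact, or if some rule
    # concludes it and all of that rule's premises can themselves be proved.
    def backward_chaining(goal, facts, rules):
        if goal in facts:
            return True
        return any(
            conclusion == goal and all(backward_chaining(p, facts, rules) for p in premises)
            for premises, conclusion in rules
        )

    rules = [({'Tom is running'}, 'Tom is sweating')]                       # A -> B
    print(backward_chaining('Tom is sweating', {'Tom is running'}, rules))  # True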