Artificial Intelligence
UNIT-1
Syllabus:
• Concept of AI
• History
• Current Status
• Scope
• Intelligent Agents
• Environments
• Problem Formulations
• Review of Tree and Graph Structures
• State Space Representation, Search Graph and Search Tree
Outline
– What is AI?
– Brief History of AI
– AI Problems and Applications
• Artificial Intelligence pursues creating computers or machines that are as intelligent as human beings.
• What is AI?
• According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”.
• Artificial intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think.
• AI is accomplished by studying how the human brain thinks, and how humans learn, decide and work (TLDW) while trying to solve a problem.
What is AI?
What is Artificial Intelligence (AI)?
• “the exciting new effort to make computers think …
machines with minds, in the full and literal sense.”
(Haugeland, 1985)
• “the automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning…” (Bellman, 1978)
• Systems that think like humans.
“the study of mental faculties through the use of computational models.” (Charniak and McDermott, 1985)
“the study of the computations that make it
possible to perceive, reason and act.” (Winston,
1984)
• Systems that think rationally
“the art of creating machines that perform
functions that require intelligence when
performed by people.” (Kurzweil, 1990)
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
• Systems that act like humans.
“a field of study that seeks to explain and emulate intelligent behavior in terms of computational processes.” (Schalkoff, 1990)
“the branch of computer science that is
concerned with the automation of intelligent
behaviour.” (Luger and Stubblefield, 1993)
• Systems that act rationally
The overall behaviour of the system should be human-like; this could be assessed by observation.
The foundations of AI
One or multiple areas can contribute to building an intelligent system.
What is AI Technique?
In the real world, knowledge has some unwelcome properties:
• Its volume is huge, next to unimaginable.
• It is not well-organized or well-formatted.
• It keeps changing constantly.
An AI technique is a manner to organize and use the knowledge efficiently in such a way that:
• It should be perceivable by the people who provide it.
• It should be easily modifiable to correct errors.
• It should be useful in many situations even though it is incomplete or inaccurate.
Applications of AI
• Gaming (G)
• Natural Language Processing (NLP)
• Expert Systems (ES)
• Vision Systems (VS)
• Speech Recognition (SR)
• Handwriting Recognition (HWR)
• Intelligent Robots (IR)
• Gaming:
• AI plays a crucial role in strategic games such as Chess, Poker, Tic-Tac-Toe, etc., where the machine can think of a large number of possible positions based on heuristic knowledge.
• NLP:
• It is possible to interact with a computer that understands the natural language spoken by humans.
• Expert Systems:
• These are applications which integrate machine, software, and special information to impart reasoning and advice.
• An expert system is a computer program that is designed to solve complex problems and to provide decision-making ability like a human expert. It performs this by extracting knowledge from its knowledge base using reasoning and inference rules according to the user's queries.
• Vision Systems:
• These systems understand, interpret, and comprehend visual input on the computer.
• Examples:
• A spying airplane takes photographs which are used to figure out spatial information or a map of the area.
• Doctors use a clinical expert system to diagnose patients.
• Speech Recognition:
• Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meaning while a human talks to them.
• Handwriting Recognition:
• Handwriting recognition software reads text written on paper by a pen or on a screen by a stylus. It can recognize the shapes of the letters and convert them into editable text.
• Intelligent Robots:
• Robots are able to perform the tasks given by humans.
• They are capable of learning from their mistakes, and they adapt to new environments.
History of AI
The birth of AI (1952-1956)
Scope of AI
AI in Science and Research
AI in Cyber Security
AI in Data Analysis
AI in Transport
AI in Home
AI in Health Care
1. AI in Science and Research
• AI is making lots of progress in the scientific sector. Artificial Intelligence can handle large quantities of data and process it more quickly than human minds.
• AI is already making breakthroughs in this field. A great example is ‘Eve’, an AI-based robot that discovered that an ingredient of toothpaste may be effective against a dangerous disease like malaria.
• Biotechnology is another field where researchers are using AI to design microorganisms for industrial applications.
2. AI in Cyber Security
• Cybersecurity is another field that’s benefitting from AI. As organizations transfer their data to IT networks and the cloud, the threat of hackers is becoming more significant.
• Cognitive AI is an excellent example in this field. It detects and analyses threats, while also providing insights to analysts for making better-informed decisions.
• Another area is fraud detection. AI can help in detecting fraud and help organizations and people avoid scams. (Ex: RCNN)
3. AI in Data Analysis
• The scope of AI in data analytics is rising
rapidly.
• Another example of AI applications in this
sector is predicting outcomes from data.
Such systems use the analytics data to
predict results and the appropriate course
of action to achieve those results.
• EX: Helixa.ai
4. AI in Transport
• Airplanes have been using autopilot to steer them in the air since 1912. An autopilot system controls the trajectory of a plane, ship, or spacecraft.
• Another broad future scope of AI is driverless cars.
• Experts believe self-driving cars will bring many long-term and short-term benefits, including lower emissions and enhanced road safety. For example, self-driving cars will be free from human errors, which account for about 90% of traffic accidents. (Ex: Tesla, Uber)
5. AI in Home
• Amazon Echo and Google Home are
popular smart home devices that let you
perform various tasks with just voice
commands.
• Smart assistants are also present in
mobile phones. Apple’s Siri and Google
Assistant
• Microsoft also has a smart assistant,
which is called Cortana.
6. AI in Healthcare
• For example, the Knight Cancer Institute and Intel have built a collaborative cancer cloud. This cloud takes data from the medical history of cancer (and similar) patients to help doctors make a better diagnosis.
• Many major organizations, including IBM and
Microsoft, are collaborating with medical
institutions to solve the various problems present
in the healthcare sector.
INTELLIGENT AGENTS AND
ENVIRONMENTS
• An agent is anything that can be viewed
as perceiving its environment through
sensors and acting upon that environment
through actuators.
• Three types of Agents
• Human
• Robot
• Software
• A human agent has eyes, ears, and other
organs for sensors and hands, legs, vocal tract,
and so on for actuators.
• A robotic agent might have cameras and
infrared range finders for sensors and various
motors for actuators.
• A software agent receives keystrokes, file
contents, and network packets as sensory inputs
and acts on the environment by displaying on
the screen, writing files, and sending network
packets.
• More abstractly, a software agent's percepts and actions may simply be encoded bit strings processed by its program.
• Sensor: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. An agent observes its environment through sensors.
• Actuators: Actuators are the components of a machine that convert energy into motion. The actuators are responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
• Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and a display screen.
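To make the sensor/actuator vocabulary concrete, the following minimal Python sketch wires a perceive-decide-act cycle together. The class and method names (Environment, Agent, percept, apply) are illustrative assumptions, not part of the notes.

```python
# Minimal sketch of the agent cycle: sense -> decide -> act.
# All names here are illustrative, not from the notes.

class Environment:
    def __init__(self, state):
        self.state = state

    def percept(self):
        # What the agent's sensors can observe (here: the whole state).
        return self.state

    def apply(self, action):
        # The actuators change the environment.
        if action == "toggle":
            self.state = "clean" if self.state == "dirty" else self.state


class Agent:
    def program(self, percept):
        # The agent program maps a percept to an action.
        return "toggle" if percept == "dirty" else "noop"


env = Environment("dirty")
agent = Agent()
action = agent.program(env.percept())  # sense + decide
env.apply(action)                      # act
print(env.state)                       # -> clean
```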
Terminology
Omniscience, Learning, Autonomy
Omniscience:
Omniscience is the ability to know everything that's known and could be known.
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.
Rationality maximizes expected performance, while perfection maximizes actual performance.
• LEARNING: Rationality requires a rational agent not only to gather information but also to learn as much as possible from what it perceives. The agent’s initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented.
• AUTONOMY: A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge.
Intelligent Agents:
• An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals. An intelligent agent may learn from the environment to achieve its goals.
• A thermostat is an example of an intelligent agent.
• Following are the main four rules for an AI agent:
• Rule 1: An AI agent must have the ability to perceive the environment.
• Rule 2: The observations must be used to make decisions.
• Rule 3: The decision should result in an action.
• Rule 4: The action taken by an AI agent must be a rational action.
The Concept of Rationality:
• Rationality is nothing but the status of being reasonable, sensible, and having a good sense of judgment.
• Rationality is concerned with expected actions and results depending upon what the agent has perceived.
• Performing actions with the aim of obtaining useful information is an important part of rationality.
What is rational at any given time
depends on four things:
• The performance measure that defines the
criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
This leads to a definition of a rational agent:
Rational Agent
• A rational agent is one that does the right
thing—conceptually speaking, every entry
in the table for the agent function is filled
out correctly.
• Obviously, doing the right thing is better
than doing the wrong thing, but what does
it mean to do the right thing?
The Nature of Environments
• PEAS is a type of model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:
• P: Performance measure
• E: Environment
• A: Actuators
• S: Sensors
• Here performance measure is the objective for
the success of an agent's behavior
PEAS example (automated taxi driver):
• Performance: Safety, time, legal drive, comfort
• Environment: Roads, other vehicles, road signs, pedestrians
• Actuators: Steering, accelerator, brake, signal, horn
• Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar
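As a small illustration, the PEAS description of the automated taxi above can be recorded in a data structure. The PEAS class below is a hypothetical convenience, not a standard API; the field values are taken from the slide.

```python
from dataclasses import dataclass, field

@dataclass
class PEAS:
    """PEAS description of a task environment (illustrative helper)."""
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(taxi.performance)   # -> ['safety', 'time', 'legal drive', 'comfort']
```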
Types of Environments
Environment Properties:
• Fully observable vs. partially observable
• Deterministic vs. stochastic / strategic
• Episodic vs. sequential
• Static vs. dynamic
• Discrete vs. continuous
• Single agent vs. multiagent
• Known vs. unknown
• Complete vs Incomplete
Fully Observable vs. Partially Observable
• Fully observable environments: the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure.
• A fully observable AI environment has access to all the information required to complete the target task.
Ex: Image recognition operates in fully observable domains.
• An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
• For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.
Deterministic vs. Stochastic
• Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent.
• Stochastic: the next state has some uncertainty associated with it. In other words, deterministic environments ignore uncertainty.
Most real-world AI environments are not deterministic. Instead, they can be classified as stochastic. Self-driving vehicles are a classic example of stochastic AI processes.
• Uncertainty could come from randomness, lack of a good environment model, or lack of complete sensor coverage.
• Strategic: the environment is deterministic except for the actions of other agents.
• Examples:
• Non-deterministic environment: the physical world, e.g. a robot on Mars
• Deterministic environment: Tic-Tac-Toe game
Episodic vs. sequential:
• In an episodic task environment, the agent’s experience
is divided into atomic episodes. In each episode the
agent receives a percept and then performs a single
action.
• Crucially, the next episode does not depend on the
actions taken in previous episodes.
• Many classification tasks are episodic.
• For example, an agent that has to spot defective parts
on an assembly line bases each decision on the current
part, regardless of previous decisions; moreover, the
current decision doesn’t affect whether the next part is
defective.
• Episodic environment: mail sorting system
sequential
• In sequential environments, on the other hand,
the current decision could affect all future
decisions.
• Chess and taxi driving are sequential: in both
cases, short-term actions can have long-term
consequences.
• Episodic environments are much simpler than
sequential environments because the agent
does not need to think ahead.
Static vs. Dynamic
• If the environment can change while an
agent is deliberating, then we say the
environment is dynamic for that agent;
• EX: Taxi driving is clearly dynamic
• Static environments are easy to deal with
because the agent need not keep looking
at the world while it is deciding on an
action, nor need it worry about the
passage of time.
• Ex: Crossword puzzles are static.
• If the environment itself does not change with the passage of time but the agent’s performance score does, then we say the environment is semidynamic.
• Ex: Chess, when played with a clock, is semidynamic.
Discrete vs. continuous
Discrete = time moves in fixed steps, usually
with one measurement per step (and
perhaps one action, but could be no action).
E.g. the chess environment has a finite
number of distinct states (excluding the
clock).
Continuous = Signals constantly coming into
sensors, actions continually changing.
E.g. driving a car
Single agent vs. multi agent:
• An agent operating by itself in an
environment is single agent!
• For example, an agent solving a
crossword puzzle by itself is clearly in a
single-agent environment.
• Multi agent is when other agents are
present!
• E.g Chess (Two players)
• A strict definition of another agent is anything that changes from step to step.
• A stronger definition is that it must sense and act.
• Competitive or co-operative Multi-agent environments
• Human users are an example of another agent in a system
• E.g. Other players in a football team (or opposing team), wind and
waves in a sailing agent, other cars in a taxi drive.
Known vs. Unknown
• Known environment: the outcomes (or outcome probabilities if the environment is stochastic) for all actions are given.
• E.g.: solitaire card games
• If the environment is unknown, the agent will have to learn how it works in order to make good decisions.
• E.g.: a new video game
Complete vs. Incomplete
• Complete AI environments are those in which, at any given time, we have enough information to complete a branch of the problem.
• E.g. Chess
• In incomplete environments, AI strategies can’t anticipate many moves in advance and, instead, focus on finding a good “equilibrium” at any given time.
• E.g. Poker
Structure of an AI Agent
• The task of AI is to design an agent
program which implements the agent
function. The structure of an intelligent
agent is a combination of architecture and
agent program. It can be viewed as:
• Agent = Architecture + Agent program
• Following are the main three terms involved in the structure of an AI agent:
• Architecture: Architecture is the machinery that the AI agent executes on.
• Agent Function: The agent function maps a percept sequence to an action:
f: P* → A
• Agent Program: The agent program is an implementation of the agent function. An agent program executes on the physical architecture to produce the function f.
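A minimal sketch of the distinction between the agent function f: P* → A (a table over percept sequences) and the agent program that computes it. The vacuum-style percepts and the particular rule are illustrative assumptions.

```python
# Sketch: agent function as a table over percept sequences vs. an agent
# program that computes the same mapping. Names are illustrative.

# Agent function f: P* -> A written out explicitly as a (partial) table.
agent_function_table = {
    (("A", "dirty"),): "Suck",
    (("A", "clean"),): "Right",
    (("A", "clean"), ("B", "dirty")): "Suck",
}

# Agent program: a compact piece of code producing the same behaviour
# without storing the table.
def agent_program(percept_sequence):
    location, status = percept_sequence[-1]
    if status == "dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

seq = (("A", "clean"), ("B", "dirty"))
assert agent_function_table[seq] == agent_program(seq)  # same mapping
```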
Types of Agents
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents
• Learning Agents
Simple Reflex Agents
The simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
Simple Reflex agents:
• These agents only succeed in a fully observable environment.
• The simple reflex agent does not consider any part of the percept history during its decision and action process.
• The simple reflex agent works on the condition-action rule, which means it maps the current state directly to an action. For example, a room-cleaner agent works only if there is dirt in the room.
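A minimal sketch of the condition-action idea for the room-cleaner example; the rule table and the percept format are assumptions made for illustration.

```python
# Sketch of a simple reflex agent (room cleaner) driven by
# condition-action rules; it ignores the percept history entirely.

RULES = {
    "dirty": "Suck",    # condition -> action
    "clean": "NoOp",
}

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via the rules."""
    _location, status = percept
    return RULES[status]

print(simple_reflex_agent(("A", "dirty")))  # -> Suck
print(simple_reflex_agent(("B", "clean")))  # -> NoOp
```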
Problems with the simple reflex agent design approach:
• They have very limited intelligence.
• They do not have knowledge of non-perceptual parts of the current state.
• The rule tables are mostly too big to generate and to store.
• They are not adaptive to changes in the environment.
Model Based Reflex Agents
The model-based agent can work in a partially observable environment and track the situation.
A model-based agent has two important factors:
Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
Internal State: a representation of the current state based on the percept history.
• These agents have a model, which is "knowledge of the world", and based on the model they perform actions.
• Updating the agent state requires information about:
• How the world evolves
• How the agent's actions affect the world.
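The following sketch, with a deliberately trivial world model, shows how an internal state updated from the percept history can let an agent act under partial observability. The two-square world, names, and rules are illustrative assumptions.

```python
# Sketch of a model-based reflex agent: it keeps an internal state that is
# updated from the percept history using a (trivial) model of how the
# world evolves and how the agent's actions affect it.

internal_state = {"A": "unknown", "B": "unknown"}   # belief about each square

def update_state(state, percept, last_action):
    location, status = percept
    state[location] = status            # what the sensor just reported
    if last_action == "Suck":
        state[location] = "clean"       # model: sucking cleans a square
    return state

def model_based_agent(percept, last_action=None):
    global internal_state
    internal_state = update_state(internal_state, percept, last_action)
    location, status = percept
    if status == "dirty":
        return "Suck"
    other = "B" if location == "A" else "A"
    # Use the internal state to decide whether dirt might remain elsewhere.
    if internal_state[other] == "clean":
        return "NoOp"
    return "Right" if location == "A" else "Left"

print(model_based_agent(("A", "dirty")))            # -> Suck
print(model_based_agent(("A", "clean"), "Suck"))    # -> Right (B still unknown)
```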
Goal Based Agents
• The knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
• The agent needs to know its goal, which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
• They choose an action so that they can achieve the goal.
• These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
Utility-Based Agents
• These agents are similar to the goal-based agent but provide an extra component of utility measurement, which makes them different by providing a measure of success at a given state.
• A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
• The utility-based agent is useful when there are multiple possible alternatives, and the agent has to choose the best action to perform.
• The utility function maps each state to a real number to check how efficiently each action achieves the goals.
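A minimal sketch of a utility function mapping states to real numbers and an agent choosing the action whose successor state has the highest utility. The states, actions, and numbers are made up for illustration.

```python
# Sketch: a utility function maps each state to a real number; the
# utility-based agent then picks the action whose successor state has
# the highest utility.

def utility(state):
    """Toy utility: prefer states with less dirt and less energy used."""
    return -2 * state["dirt"] - state["energy_used"]

def result(state, action):
    """Toy transition model: predict the successor state of an action."""
    s = dict(state)
    if action == "Suck":
        s["dirt"] = max(0, s["dirt"] - 1)
    s["energy_used"] += 1
    return s

def best_action(state, actions):
    """Choose the action leading to the highest-utility successor state."""
    return max(actions, key=lambda a: utility(result(state, a)))

state = {"dirt": 2, "energy_used": 0}
print(best_action(state, ["Suck", "NoOp"]))   # -> Suck
```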
Learning Agents
• A learning agent in AI is a type of agent which can learn from its past experiences, i.e. it has learning capabilities.
• It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
• A learning agent has mainly four conceptual components:
• Learning element: responsible for making improvements by learning from the environment.
• Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
• Performance element: responsible for selecting external actions.
• Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
• Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve that performance.
PROBLEM FORMULATION
• We have seen that reflex agents, whose actions are a direct mapping from the states of the environment, consume a large amount of space to store the mapping table and are inflexible.
• Goal-based agents consider long-term actions and the desirability of the outcome; they are easier to train and adaptable to a changing environment.
• There are two kinds of goal-based agents:
• problem-solving agents
• planning agents
• Problem-solving agents consider each state of the world as indivisible, with no internal structure of the states visible to the problem-solving algorithms (atomic representation).
• E.g., when finding a driving route, each state is a city.
• AI algorithms: search, games, Markov decision processes, hidden Markov models, etc.
• Planning agents split up each state into variables and establish relationships between them.
• Factored and structured representations:
• Factored representation: splits up each state into a fixed set of variables or attributes, each of which has a value.
• E.g., GPS location, amount of gas in the tank.
• AI algorithms: constraint satisfaction and Bayesian networks.
Structured representation:
Relationships between the objects of a state can be explicitly expressed.
AI algorithms: first-order logic, knowledge-based learning, natural language understanding.
2 Steps performed by Problem-solving
agent
Goal Formulation:
It is the first and simplest step in problem-solving.
It organizes the steps/sequence required to
formulate one goal out of multiple goals as well as
actions to achieve that goal. Goal formulation is
based on the current situation and the agent’s
performance measure.
Problem Formulation: It is the most important
step of problem-solving which decides what
actions should be taken to achieve the formulated
goal.
Road map: part of Romania
Problem definition and formulation
• Before we jump on to finding the algorithm
for evaluating the problem and searching
for the solution,
• we first need to define and formulate the
problem.
• Problem formulation involves deciding
what actions and states to consider, given
the goal.
• The initial state, the actions and the transition model
together define the state space of the problem — the set
of all states reachable by any sequence of actions.
• A search graph is the graphical representation of the state space of the travelling problem.
• A path in the state space is a sequence of states
connected by a sequence of actions.
• The solution to the given problem is defined as the
sequence of actions from the initial state to the goal
states. The quality of the solution is measured by the
cost function of the path, and an optimal solution has
the lowest path cost among all the solutions.
Example problems
• Toy problems
– those intended to illustrate or exercise various
problem-solving methods
– E.g., puzzle, chess, etc.
• Real-world problems
– tend to be more difficult and whose solutions
people actually care about
– E.g., Design, planning, etc.
Toy problems
• Example: vacuum world
– Number of states: 8
– Initial state: any
– Number of actions: 4 (Left, Right, Suck, NoOp)
– Goal: clean up all dirt
– Goal states: {7, 8}
– Path cost: each step costs 1
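As an illustration (not part of the slides), the vacuum-world state space can be written down explicitly. The state numbering below is an assumption, chosen so that states 7 and 8 are the two all-clean goal states; NoOp is omitted for brevity.

```python
# Sketch of the vacuum-world state space as an explicit graph: 8 states,
# actions Left / Right / Suck, goal states {7, 8}, each step costing 1.
# state = (agent location, dirt in A, dirt in B); numbering is assumed.

STATES = {
    1: ("A", True,  True),  2: ("B", True,  True),
    3: ("A", False, True),  4: ("B", False, True),
    5: ("A", True,  False), 6: ("B", True,  False),
    7: ("A", False, False), 8: ("B", False, False),
}

def successors(s):
    loc, dirt_a, dirt_b = STATES[s]
    def find(state):   # map a state tuple back to its number
        return next(k for k, v in STATES.items() if v == state)
    yield "Left",  find(("A", dirt_a, dirt_b)), 1
    yield "Right", find(("B", dirt_a, dirt_b)), 1
    cleaned = (loc, False, dirt_b) if loc == "A" else (loc, dirt_a, False)
    yield "Suck",  find(cleaned), 1

GOAL_STATES = {7, 8}
print(list(successors(1)))   # -> [('Left', 1, 1), ('Right', 2, 1), ('Suck', 3, 1)]
```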
The 8-puzzle
• States:
– a state description specifies the location of each of
the eight tiles and blank in one of the nine squares
• Initial State:
– Any state in state space
• Successor function:
– the blank moves Left, Right, Up, or Down
• Goal test:
– current state matches the goal configuration
• Path cost:
– each step costs 1, so the path cost is just the length
of the path
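A minimal Python sketch of this formulation; the particular goal configuration and the tuple encoding of the board are assumptions made for illustration.

```python
# Sketch of the 8-puzzle formulation: a state is a tuple of 9 entries
# (0 = blank), the successor function moves the blank Left/Right/Up/Down,
# and each step costs 1.

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)          # assumed goal configuration

MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # Forbid moves that would wrap around the edge of the board.
        if action == "Left" and col == 0:   continue
        if action == "Right" and col == 2:  continue
        if action == "Up" and row == 0:     continue
        if action == "Down" and row == 2:   continue
        target = blank + delta
        new = list(state)
        new[blank], new[target] = new[target], new[blank]
        yield action, tuple(new)

def goal_test(state):
    return state == GOAL

start = (1, 0, 2, 3, 4, 5, 6, 7, 8)
print([a for a, _ in successors(start)])   # -> ['Left', 'Right', 'Down']
print(goal_test(GOAL))                     # -> True
```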
The 8-queens
• There are two ways to formulate the problem.
• Both of them have the following in common:
– Goal test: 8 queens on the board, not attacking each other
– Path cost: zero
The 8-queens
• (1) Incremental formulation
– involves operators that augment the state
description starting from an empty state
– Each action adds a queen to the state
– States:
• any arrangement of 0 to 8 queens on board
– Successor function:
• add a queen to any empty square
The 8-queens
• (2) Complete-state formulation
– starts with all 8 queens on the board
– move the queens individually around
– States:
• any arrangement of 8 queens, one per column in
the leftmost columns
– Operators: move an attacked queen to a row,
not attacked by any other
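For illustration, a sketch of the incremental formulation in Python. It uses the tighter variant in which queens are added one column at a time to non-attacked rows, rather than to any empty square as in the looser version above.

```python
# Sketch of the incremental 8-queens formulation: a state is a tuple of
# queen rows, one entry per already-filled column (left to right); the
# successor function adds a queen to the next column in a non-attacked row.

N = 8

def attacks(rows, new_row):
    new_col = len(rows)
    for col, row in enumerate(rows):
        if row == new_row:                            # same row
            return True
        if abs(row - new_row) == abs(col - new_col):  # same diagonal
            return True
    return False

def successors(rows):
    for row in range(N):
        if not attacks(rows, row):
            yield rows + (row,)

def goal_test(rows):
    return len(rows) == N       # 8 queens placed, none attacking another

print(list(successors(())))     # 8 choices for the first column
print(list(successors((0,))))   # rows not attacked by the queen in column 0
```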
Conclusion
The right formulation makes a big difference to the size of the search
space
Terminology
• Search: It identifies all the best possible sequences of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as its output.
• Solution: It finds the best algorithm out of various algorithms, which may be proven as the best optimal solution.
• Execution: It executes the best optimal solution from the searching algorithms to reach the goal state from the current state.
Searching for solutions
State Space Search:
• Define the problem precisely (initial and goal states).
• Analyse the problem (choose a technique).
• Isolate and represent the task knowledge that is necessary to solve the problem.
• Choose the best problem-solving technique and apply it to solve the particular problem.
Searching for solutions
• Finding out a solution is done by
–searching through the state space
• All problems are transformed
–as a search tree
–generated by the initial state and
successor function
Tree Search Example
• Initial state
– The root of the search tree is a search node
• Expanding
– applying the successor function to the current state
– thereby generating a new set of states
• Leaf nodes
– the states having no successors
The set of all leaf nodes available for expansion at a given point is called the frontier (open list).
Search tree
• State space
– has unique states {A, B}
– while a search tree may have cyclic paths: A-B-A-B-A-B-…
– such a path contains a repeated state, generated by a loopy path.
– This means that the search tree for Romania is infinite, even though the state space is limited.
– These loopy paths make some of the algorithms fail, making the problem seem unsolvable.
– In fact, a loopy path is a special case of redundant paths.
• A good search strategy should avoid such paths.
Note: The way to avoid exploring redundant paths is to remember where one has been. To do this we augment the TREE-SEARCH algorithm with a data structure called the explored set (closed list), which remembers every expanded node.
General Graph Search Algorithm
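The algorithm was presumably shown as a figure on the original slide; below is a minimal Python sketch of the scheme described in the note above: a frontier (open list) of paths to expand and an explored set (closed list) of already-expanded states, so redundant and loopy paths are skipped. The tiny demo graph with states A, B, C is a made-up example.

```python
# Minimal sketch of graph search with a frontier (open list) and an
# explored set (closed list). FIFO frontier -> breadth-first order.
from collections import deque

def graph_search(start, goal_test, successors):
    frontier = deque([[start]])          # open list: paths, as lists of states
    explored = set()                     # closed list: expanded states
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path                  # solution: states from start to goal
        if state in explored:
            continue                     # skip redundant/loopy paths
        explored.add(state)
        for next_state in successors(state):
            if next_state not in explored:
                frontier.append(path + [next_state])
    return None                          # no solution found

# Tiny worked example (made-up graph).
graph = {"A": ["B"], "B": ["A", "C"], "C": []}
print(graph_search("A", lambda s: s == "C", lambda s: graph[s]))  # ['A', 'B', 'C']
```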
Search tree
• A node has five components:
– n.STATE: the state it represents in the state space
– n.PARENT-NODE: the node from which it was generated
– n.ACTION: the action applied to its parent node to generate it
– n.PATH-COST: the cost, g(n), from the initial state to the node n itself
– n.DEPTH: the number of steps along the path from the initial state
Data Structure /infrastructure for
searching algorithm
Notice how the PARENT pointers string the nodes together into a tree
structure. These pointers also allow the solution path to be extracted
when a goal node is found; we use the SOLUTION function to return the
sequence of actions obtained by following parent pointers back to the
root.
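To make the node structure and the SOLUTION function concrete, here is a minimal Python sketch with the five components listed above; the Romania step costs used in the demo (Arad to Sibiu 140, Sibiu to Fagaras 99) are standard textbook values included only for illustration.

```python
# Sketch of the search-tree node with its five components, plus a
# SOLUTION-style helper that follows PARENT pointers back to the root.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object
    parent: Optional["Node"] = None   # n.PARENT-NODE
    action: Optional[str] = None      # n.ACTION that generated this node
    path_cost: float = 0.0            # n.PATH-COST, g(n)
    depth: int = 0                    # n.DEPTH

def child_node(parent, action, state, step_cost=1):
    return Node(state, parent, action, parent.path_cost + step_cost, parent.depth + 1)

def solution(node):
    """Follow parent pointers back to the root and return the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
n1 = child_node(root, "GoTo(Sibiu)", "Sibiu", 140)
n2 = child_node(n1, "GoTo(Fagaras)", "Fagaras", 99)
print(solution(n2), n2.path_cost, n2.depth)   # ['GoTo(Sibiu)', 'GoTo(Fagaras)'] 239 2
```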
Performance Measure of Problem-solving Algorithms
We can evaluate an algorithm’s performance with these metrics:
Completeness: Is the algorithm guaranteed to find a solution if one exists?
Optimality: Does the algorithm find the optimal solution?
Time complexity: How long does it take for the algorithm to find a solution?
Space complexity: How much memory is consumed in finding the solution?
Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory.
Thank you

More Related Content

Similar to UNIT1-AI final.pptx

EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
DaliaMagdy12
 
Ai software in everyday life
Ai software in everyday lifeAi software in everyday life
Ai software in everyday life
Saleem Almaqashi
 

Similar to UNIT1-AI final.pptx (20)

Mis module v Artificial Intelligence
Mis module v Artificial IntelligenceMis module v Artificial Intelligence
Mis module v Artificial Intelligence
 
AI KIMSRAD.pptx
AI KIMSRAD.pptxAI KIMSRAD.pptx
AI KIMSRAD.pptx
 
Artificial intelligence introduction
Artificial intelligence introductionArtificial intelligence introduction
Artificial intelligence introduction
 
Unit1_AI&ML (2).pptx
Unit1_AI&ML (2).pptxUnit1_AI&ML (2).pptx
Unit1_AI&ML (2).pptx
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
Artificial-Intelligence-in-psychology-1.pptx
Artificial-Intelligence-in-psychology-1.pptxArtificial-Intelligence-in-psychology-1.pptx
Artificial-Intelligence-in-psychology-1.pptx
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
 
APPDEV 1.pptx
APPDEV 1.pptxAPPDEV 1.pptx
APPDEV 1.pptx
 
Ch~3.pdf
Ch~3.pdfCh~3.pdf
Ch~3.pdf
 
Artificial Intteligence-unit 1.pptx
Artificial Intteligence-unit 1.pptxArtificial Intteligence-unit 1.pptx
Artificial Intteligence-unit 1.pptx
 
Ai
AiAi
Ai
 
Artificial Intelligence (A.I.).pptx
Artificial Intelligence (A.I.).pptxArtificial Intelligence (A.I.).pptx
Artificial Intelligence (A.I.).pptx
 
Artificial intelligence_ class 12 KATHIR.pptx
Artificial intelligence_ class 12  KATHIR.pptxArtificial intelligence_ class 12  KATHIR.pptx
Artificial intelligence_ class 12 KATHIR.pptx
 
Artificial Intellegence
Artificial IntellegenceArtificial Intellegence
Artificial Intellegence
 
ARTIFICIAL INTELLIGENCE AND ROBOTICS
ARTIFICIAL INTELLIGENCE AND ROBOTICS ARTIFICIAL INTELLIGENCE AND ROBOTICS
ARTIFICIAL INTELLIGENCE AND ROBOTICS
 
Artificial Intelligence presentation,
Artificial Intelligence  presentation,Artificial Intelligence  presentation,
Artificial Intelligence presentation,
 
Artificial Intelligence power point presentation
Artificial Intelligence power point presentationArtificial Intelligence power point presentation
Artificial Intelligence power point presentation
 
Artificial intelligence
Artificial intelligence Artificial intelligence
Artificial intelligence
 
Ai software in everyday life
Ai software in everyday lifeAi software in everyday life
Ai software in everyday life
 

More from CS50Bootcamp (6)

AI unit-2 lecture notes.docx
AI unit-2 lecture notes.docxAI unit-2 lecture notes.docx
AI unit-2 lecture notes.docx
 
Coding and decoding.pptx
Coding and decoding.pptxCoding and decoding.pptx
Coding and decoding.pptx
 
NETWORKING COMMANDS.pptx
NETWORKING COMMANDS.pptxNETWORKING COMMANDS.pptx
NETWORKING COMMANDS.pptx
 
Ethical Hacking Workshop.pptx
Ethical Hacking Workshop.pptxEthical Hacking Workshop.pptx
Ethical Hacking Workshop.pptx
 
SHELL SORT-2.pptx
SHELL SORT-2.pptxSHELL SORT-2.pptx
SHELL SORT-2.pptx
 
PERMUTATIONS.pptx
PERMUTATIONS.pptxPERMUTATIONS.pptx
PERMUTATIONS.pptx
 

Recently uploaded

Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Kandungan 087776558899
 
Query optimization and processing for advanced database systems
Query optimization and processing for advanced database systemsQuery optimization and processing for advanced database systems
Query optimization and processing for advanced database systems
meharikiros2
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
Epec Engineered Technologies
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
Kamal Acharya
 
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
HenryBriggs2
 
Digital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxDigital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptx
pritamlangde
 
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments""Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
mphochane1998
 

Recently uploaded (20)

Introduction to Geographic Information Systems
Introduction to Geographic Information SystemsIntroduction to Geographic Information Systems
Introduction to Geographic Information Systems
 
Introduction to Artificial Intelligence ( AI)
Introduction to Artificial Intelligence ( AI)Introduction to Artificial Intelligence ( AI)
Introduction to Artificial Intelligence ( AI)
 
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak HamilCara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
Cara Menggugurkan Sperma Yang Masuk Rahim Biyar Tidak Hamil
 
Convergence of Robotics and Gen AI offers excellent opportunities for Entrepr...
Convergence of Robotics and Gen AI offers excellent opportunities for Entrepr...Convergence of Robotics and Gen AI offers excellent opportunities for Entrepr...
Convergence of Robotics and Gen AI offers excellent opportunities for Entrepr...
 
Query optimization and processing for advanced database systems
Query optimization and processing for advanced database systemsQuery optimization and processing for advanced database systems
Query optimization and processing for advanced database systems
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
 
UNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptxUNIT 4 PTRP final Convergence in probability.pptx
UNIT 4 PTRP final Convergence in probability.pptx
 
Computer Graphics Introduction To Curves
Computer Graphics Introduction To CurvesComputer Graphics Introduction To Curves
Computer Graphics Introduction To Curves
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
Theory of Time 2024 (Universal Theory for Everything)
Theory of Time 2024 (Universal Theory for Everything)Theory of Time 2024 (Universal Theory for Everything)
Theory of Time 2024 (Universal Theory for Everything)
 
Hospital management system project report.pdf
Hospital management system project report.pdfHospital management system project report.pdf
Hospital management system project report.pdf
 
Design For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the startDesign For Accessibility: Getting it right from the start
Design For Accessibility: Getting it right from the start
 
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
 
fitting shop and tools used in fitting shop .ppt
fitting shop and tools used in fitting shop .pptfitting shop and tools used in fitting shop .ppt
fitting shop and tools used in fitting shop .ppt
 
Ground Improvement Technique: Earth Reinforcement
Ground Improvement Technique: Earth ReinforcementGround Improvement Technique: Earth Reinforcement
Ground Improvement Technique: Earth Reinforcement
 
Digital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxDigital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptx
 
PE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and propertiesPE 459 LECTURE 2- natural gas basic concepts and properties
PE 459 LECTURE 2- natural gas basic concepts and properties
 
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments""Lesotho Leaps Forward: A Chronicle of Transformative Developments"
"Lesotho Leaps Forward: A Chronicle of Transformative Developments"
 
Signal Processing and Linear System Analysis
Signal Processing and Linear System AnalysisSignal Processing and Linear System Analysis
Signal Processing and Linear System Analysis
 
Path loss model, OKUMURA Model, Hata Model
Path loss model, OKUMURA Model, Hata ModelPath loss model, OKUMURA Model, Hata Model
Path loss model, OKUMURA Model, Hata Model
 

UNIT1-AI final.pptx

  • 2. UNIT-1 Syllabus: • Concept of AI, • History, • Current Status, • Scope, • Intelligent Agents, • Environments, • Problem Formulations, • Review of Tree and Graph Structures, • State Space Representation, Search Graph and • Search Tree.
  • 3. Outline – What is AI? – Brief History of AI – AI Problems and Applications
  • 4. • Artificial Intelligence Pursures creating the computers or machines as inteligent as human beings • What is AI? • According to the father of Artificial Intelligence,JohnMcCarthy,it is “The science and engineering of making intelligent machines ,espectially intelligent computer programs”. • Artificial intelligence is a way of making a computer-controlled robot, or software think intelligently,in the similar manner the intelligent humans think.
  • 5. • AI is Accomplishedby studying how human brain thinks, and how humans learn,decide and Work(TLDW) while trying to solve a problem.
  • 7.
  • 8. What is Artificial Intelligence (AI)? • “the exciting new effort to make computers think … machines with minds, in the full and literal sense.” (Haugeland, 1985) • “the automation of activities that we associates with human thinking, activities such as decision making, problem solving, learning…” (Bellman, 1978) • Systems that think like humans.
  • 9. “the study of mental faculties through the use of computational model.” (Charniak and McDermett, 1985) “the study of the computations that make it possible to perceive, reason and act.” (Winston, 1984) • Systems that think rationally
  • 10. “the art of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990) “The study of how to make computers thinks at which, at the movement, people are better.” (Rich and Knight, 1984) • Systems that act like humans.
  • 11. “a field of study that seeks to explain and emulate intelligent behavior in terms of computational process.” (Schalkoff, 1990) “the branch of computer science that is concerned with the automation of intelligent behaviour.” (Luger and Stubblefield, 1993) • Systems that act rationally
  • 12. The overall behaviour of the system should be human like. it could be achieved by observation.
  • 13.
  • 14.
  • 15.
  • 16.
  • 17.
  • 18.
  • 19.
  • 20.
  • 21.
  • 22.
  • 23.
  • 24.
  • 25.
  • 26.
  • 27.
  • 28.
  • 29.
  • 30.
  • 31.
  • 32. The foundations of AI one or multiple areas can contribute to build an Intelligent system.
  • 33.
  • 34.
  • 35.
  • 36.
  • 37.
  • 38.
  • 39.
  • 40.
  • 41.
  • 42.
  • 43. What is AI Technique? In the real world,the knowledge has some unwelcomed properties - • its volume is huge,next to unimaginable • It is not well-organized or well -formatted • it keeps changing constantly AI technique is a manner to organize and use the knowledge efficiently in such a way that - • It should be preceivable by the people who provide it • it should be easily modifiable to correct errors. • It should be useful in many situations through it is incomplete and inaccurate.
  • 44. Applications of AI • Gaming (G) • Natural Language Processing(NLP) • Expert Systems(ES) • Vision Systems(VS) • Speech Recognition(SR) • Hand Writing Recogition(HWR) • Intelligent Robots(IR)
  • 45. • Gaming: • AI plays Crucial role in strategic games such as Chess,poker,Tic-Toc-Toe etc. where machine can think of largenumber of possible positions based on Heuristic knowledge. • NLP: it is possible to interact with the computer that understands natural language spoken by humans.
  • 46. • Expert Systems: • these are some applications which integrate machine, software,and special information to impart. • An expert system is a computer program that is designed to solve complex problems and to provide decision-making ability like a human expert. It performs this by extracting knowledge from its knowledge base using the reasoning and inference rules according to the user queries.
  • 47. • Vision Systems: • These systems understand,interpret,and comprehend visual input on the computer. • ex: • Spying aeroplane-it takes the photographs which are used to figure out spatial information or map of the areas. • Doctors use clinical expert system to diagnose the patient.
  • 48. • Speech Recognition : • some intelligent systems are capable of hearing and comprehending the language in terms of sentences and their meaning while a human talks to it. • HandWriting Recognition: • it reads text written on paper by pen or screen by a stylus. it can recognize the shapes of letters and convert into editable text.
  • 49. • Intelligent Robots: • robots are able to perform the task given by human • they are capable of learning from their mistakes and they adapt to the new environment.
  • 51.
  • 52.
  • 53.
  • 54.
  • 55.
  • 56. The birth of AI(1952-1956)
  • 57.
  • 58.
  • 59.
  • 60.
  • 61.
  • 62.
  • 63.
  • 64.
  • 65.
  • 66.
  • 67. Scope of AI AI in Science and Research AI in Cyber Security AI in Data Analysis AI in Transport AI in Home AI in Health Care
  • 68. 1. AI in Science and Research • AI is making lots of progress in the scientific sector. Artificial Intelligence can handle large quantities of data and processes it quicker than human minds. • AI is already making breakthroughs in this field. A great example is ‘Eve,’ which is an AI-based robot. It discovered an ingredient of toothpaste that can cure a dangerous disease like Malaria. • Biotechnology is another field where researchers are using AI to design microorganisms for industrial applications.
  • 69. 2. AI in Cyber Security • Cybersecurity is another field that’s benefitting from AI. As organizations are transferring their data to IT networks and cloud, the threat of hackers is becoming more significant • Cognitive AI is an excellent example of this field. It detects and analyses threats, while also providing insights to the analysts for making better-informed decisions • Another field is fraud detection. AI can help in detecting frauds and help organizations and people in avoiding scams.(Ex: RCNN)
  • 70. 3. AI in Data Analysis • The scope of AI in data analytics is rising rapidly. • Another example of AI applications in this sector is predicting outcomes from data. Such systems use the analytics data to predict results and the appropriate course of action to achieve those results. • EX: Helixa.ai
  • 71. 4. AI in Transport • Airplanes have been using autopilot to steer them in the air since 1912. An autopilot system controls the trajectory of a plane,ships and space craft. • future scope of AI is quite broad is driverless cars • Experts believe self-driving cars will bring many long-term and short-term benefits, including lower emissions and enhanced road safety. For example, self-driving cars will be free from human errors, which account for 90% of traffic accidents.(Tesla,Uber)
  • 72. 5. AI in Home • Amazon Echo and Google Home are popular smart home devices that let you perform various tasks with just voice commands. • Smart assistants are also present in mobile phones. Apple’s Siri and Google Assistant • Microsoft also has a smart assistant, which is called Cortana.
  • 73. 6. AI in Healthcare • For example, the Knight Career Institute and Intel have made a collaborative cancer cloud. This cloud takes data from the medical history of cancer (and similar) patients to help doctors in making a better diagnosis. • Many major organizations, including IBM and Microsoft, are collaborating with medical institutions to solve the various problems present in the healthcare sector.
  • 75. • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. • Three types of Agents • Human • Robot • Software
  • 76. • A human agent has eyes, ears, and other organs for sensors and hands, legs, vocal tract, and so on for actuators. • A robotic agent might have cameras and infrared range finders for sensors and various motors for actuators. • A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying on the screen, writing files, and sending network packets. • Encoded bit strings and its programs and actions
  • 77. • Sensor: Sensor is a device which detects the change in the environment and sends the information to other electronic devices. An agent observes its environment through sensors. • Actuators: Actuators are the component of machines that converts energy into motion. The actuators are only responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
  • 78. • Effectors: Effectors are the devices which affect the environment. Effectors can be legs, wheels, arms, fingers, wings, fins, and display screen
  • 80.
  • 81.
  • 82. OmniScience, Learning,Autonomy An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality. OmniScience: Omniscience is the ability to know everything that's known and could be known. Rationality maximizes expected performance, while perfection maximizes actual performance
  • 83. • LEARNING: a rational agent not only to gather information but also to learn as much as possible from what it perceives. The agent’s initial configuration could reflect some prior knowledge of the environment, but as the agent gains experience this may be modified and augmented. • A rational agent should be autonomous—it should learn what it can to compensate for partial or incorrect prior knowledge
  • 84. Intelligent Agents: • An intelligent agent is an autonomous entity which act upon an environment using sensors and actuators for achieving goals. An intelligent agent may learn from the environment to achieve their goals. • A thermostat is an example of an intelligent agent. • Following are the main four rules for an AI agent: • Rule 1: An AI agent must have the ability to perceive the environment. • Rule 2: The observation must be used to make decisions. • Rule 3: Decision should result in an action. • Rule 4: The action taken by an AI agent must be a rational action.
  • 85. The Concept of Rationality: • Rationally is nothing but status of being resonable ,sensible and having good sense of judgment. • Rationally is concerned with expected actions and results depending upon what the agent has perceived. • Performing actions with the aim of obtaining useful information is an important of rationality.
  • 86. What is rational at any given time depends on four things: • The performance measure that defines the criterion of success. • The agent’s prior knowledge of the environment. • The actions that the agent can perform. • The sequence of percepts This leads to a definition of a rational agent:
  • 87. Rational Agent • A rational agent is one that does the right thing—conceptually speaking, every entry in the table for the agent function is filled out correctly. • Obviously, doing the right thing is better than doing the wrong thing, but what does it mean to do the right thing?
  • 88. A Nature of environment • PEAS is a type of model on which an AI agent works upon. When we define an AI agent or rational agent, then we can group its properties under PEAS representation model. It is made up of four words: • P: Performance measure • E: Environment • A: Actuators • S: Sensors • Here performance measure is the objective for the success of an agent's behavior
  • 89. • Performance: Safety, time, legal drive, comfort • Environment: Roads, other vehicles, road signs, pedestrian • Actuators: Steering, accelerator, brake, signal, horn • Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
  • 90.
  • 91. Types of Environments Environment Properties: • Fully observable vs. partially observable • Deterministic vs. stochastic / strategic • Episodic vs. sequential • Static vs. dynamic • Discrete vs. continuous • Single agent vs. multiagent • Known vs. unknown • Complete vs Incomplete
  • 92. Fully Observable vs. Partially Observable • Fully observable environments: if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. • A fully observable AI environment has access to all required information to complete target task. Ex: Image recognition operates in fully observable domains. • An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data • for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are
  • 93. Deterministic vs. Stochastic • Deterministic:if the next state of the environment is completely determined by the current state and the action executed by the agent • Stochastic = the next state has some uncertainty associated with it. In other words, deterministic environments ignore uncertainty. Most real world AI environments are not deterministic. Instead, they can be classified as stochastic. Self-driving vehicles are a classic example of stochastic AI processes. • Uncertainty could come from randomness, lack of a good environment model, or lack of complete sensor coverage · Strategic environment if the environment is deterministic except for the actions of other agents • Examples: • Non-deterministic environment: physical world: Robot on Mars • Deterministic environment: Tic Tac Toe game
  • 94. Episodic vs. sequential: • In an episodic task environment, the agent’s experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. • Crucially, the next episode does not depend on the actions taken in previous episodes. • Many classification tasks are episodic. • For example, an agent that has to spot defective parts on an assembly line bases each decision on the current part, regardless of previous decisions; moreover, the current decision doesn’t affect whether the next part is defective. • Episodic environment: mail sorting system
  • 95. sequential • In sequential environments, on the other hand, the current decision could affect all future decisions. • Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences. • Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
  • 96. Static vs. Dynamic • If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; • EX: Taxi driving is clearly dynamic • Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. • Ex: Crossword puzzles are static.
  • 97. • If the environment itself does not change with the passage of time but the agent’s performance score does, then we say the environment is semidynamic. • Ex:Chess, when played with a clock,
  • 98. Discrete vs. continuous Discrete = time moves in fixed steps, usually with one measurement per step (and perhaps one action, but could be no action). E.g. the chess environment has a finite number of distinct states (excluding the clock). Continuous = Signals constantly coming into sensors, actions continually changing. E.g. driving a car
  • 99. Single agent vs. multi agent: • An agent operating by itself in an environment is single agent! • For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment. • Multi agent is when other agents are present! • E.g Chess (Two players)
  • 100. • A strict definition of an other agent is anything that changes from step to step. • A stronger definition is that it must sense and act • Competitive or co-operative Multi-agent environments • Human users are an example of another agent in a system • E.g. Other players in a football team (or opposing team), wind and waves in a sailing agent, other cars in a taxi drive.
  • 101. Known Vs. Unknown • Known environment :the outcomes (or outcome probabilities if the environment is stochastic) for all actions are given. • E.g: In solitaire card games • if the environment is unknown, the agent will have to learn how it works in order to make good decisions. • E.g New video game
  • 102. Complete vs. Incomplete • Complete AI environments are those on which, at any give time, we have enough information to complete a branch of the problem. • E.g Chess • Incomplete environments as AI strategies can’t anticipate many moves in advance and, instead, they focus on finding a good ‘equilibrium” at any given time. • E.g Poker
  • 103.
  • 104. Structure of an AI Agent • The task of AI is to design an agent program which implements the agent function. The structure of an intelligent agent is a combination of architecture and agent program. It can be viewed as: • Agent = Architecture + Agent program
  • 105. • Following are the main three terms involved in the structure of an AI agent: • Architecture: Architecture is machinery that an AI agent executes on. • Agent Function: Agent function is used to map a percept to an action. f:P* → A Agent program: Agent program is an implementation of agent function. An agent program executes on the physical architecture to produce function f.
  • 106. Types of Agents • Simple reflex agents • Model-based reflex agents • Goal-based agents • Utility-based agents • Learning Agents
  • 107. Simple -reflex Agents The Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percepts and ignore the rest of the percept history.
  • 108. Simple Reflex agents: • These agents only succeed in the fully observable environment. • The Simple reflex agent does not consider any part of percepts history during their decision and action process • The Simple reflex agent works on Condition-action rule, which means it maps the current state to action. Such as a Room Cleaner agent, it works only if there is dirt in the room.
  • 109. Problems for the simple reflex agent design approach: • They have very limited intelligence • They do not have knowledge of non-perceptual parts of the current state • Mostly too big to generate and to store. • Not adaptive to changes in the environment.
  • 110. Model Based Reflex Agents The Model-based agent can work in a partially observable environment, and track the situation. A model-based agent has two important factors: Model: It is knowledge about "how things happen in the world," so it is called a Model-based agent. Internal State: It is a representation of the current state based on percept histsry.
  • 111. • These agents have the model, "which is knowledge of the world" and based on the model they perform actions. • Updating the agent state requires information about: • How the world evolves • How the agent's action affects the world.
  • 113. • Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do. • The agent needs to know its goal, which describes desirable situations. • Goal-based agents expand the capabilities of the model-based agent by having this "goal" information. • They choose actions so as to achieve the goal. • These agents may have to consider a long sequence of possible actions before deciding whether the goal can be achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
  • 115. • These agents are similar to goal-based agents but add an extra component of utility measurement, which distinguishes them by providing a measure of success in a given state. • A utility-based agent acts based not only on goals but also on the best way to achieve them. • Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the best action. • The utility function maps each state to a real number that measures how well the agent's goals are achieved in that state.
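As a rough illustration, the sketch below assumes a tiny grid world with a hypothetical utility function and a deterministic transition model, and picks the action whose resulting state has the highest utility.

```python
def utility(state):
    """Maps each state to a real number (higher is better): closeness to a goal cell."""
    goal = (2, 2)
    return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))

def result(state, action):
    """Assumed deterministic transition model on a grid."""
    moves = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}
    dx, dy = moves[action]
    return (state[0] + dx, state[1] + dy)

def utility_based_agent(state, actions=("Up", "Down", "Left", "Right")):
    """Chooses the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility(result(state, a)))

print(utility_based_agent((0, 0)))  # 'Up' (ties with 'Right'; both move toward the goal)
```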
  • 117. • A learning agent in AI is an agent that can learn from its past experiences, i.e., it has learning capabilities. • It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
  • 118. • A learning agent has four main conceptual components: • Learning element: responsible for making improvements by learning from the environment. • Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard. • Performance element: responsible for selecting external actions. • Problem generator: responsible for suggesting actions that will lead to new and informative experiences. • Hence, learning agents are able to learn, analyse their performance, and look for new ways to improve it.
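A purely illustrative skeleton showing how the four components might fit together in code; the method names, rule format and placeholder performance standard are assumptions.

```python
class LearningAgent:
    """Skeleton with the four conceptual components listed above."""

    def __init__(self, rules):
        self.rules = rules                      # knowledge used by the performance element

    def performance_element(self, percept):
        """Selects the external action for the current percept."""
        return self.rules.get(percept, "NoOp")

    def critic(self, percept, action):
        """Feedback: how well did the action do against a fixed standard?"""
        return 1.0 if action != "NoOp" else -1.0   # placeholder performance standard

    def learning_element(self, percept, action, feedback):
        """Improves the rules using the critic's feedback."""
        if feedback < 0:
            self.rules[percept] = self.problem_generator()

    def problem_generator(self):
        """Suggests an action that leads to a new, informative experience."""
        return "Explore"
```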
  • 119. PROBLEM FORMULATION • We have seen that reflex agents, whose actions are a direct mapping from the states of the environment, consume a large amount of space to store the mapping table and are inflexible. • Goal-based agents consider long-term actions and the desirability of the outcomes, which makes them easier to train and adaptable to a changing environment.
  • 120. • There are two kinds of goal-based agents: • problem-solving agents, and • planning agents. • Problem-solving agents consider each state of the world as indivisible, with no internal structure of the states visible to the problem-solving algorithms (an atomic representation). • E.g., when finding a driving route, each state is a city. • AI algorithms: search, games, Markov decision processes, hidden Markov models, etc.
  • 121. • Planning agents split up each state into variables and establish relationships between them. • Factored and structured representations: • Factored representation: splits up each state into a fixed set of variables or attributes, each of which has a value. • E.g., GPS location, amount of gas in the tank. • AI algorithms: constraint satisfaction and Bayesian networks.
  • 122. Structured representation: relationships between the objects of a state can be expressed explicitly. • AI algorithms: first-order logic, knowledge-based learning, natural language understanding.
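A small illustration of the difference between the two representations (the variable and relation names are assumptions): a factored state is a fixed set of attribute values, while a structured state also records explicit relations between objects.

```python
# Factored representation: a fixed set of variables, each with a value.
factored_state = {"gps_location": "Arad", "gas_level": 0.75, "time": "10:30"}

# Structured representation: objects plus explicit relations between them,
# written here as (relation, object1, object2) tuples.
structured_state = {
    "objects": ["truck1", "package1", "Arad", "Bucharest"],
    "relations": [("at", "truck1", "Arad"),
                  ("in", "package1", "truck1"),
                  ("road", "Arad", "Bucharest")],
}
```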
  • 123. Two steps performed by a problem-solving agent • Goal formulation: the first and simplest step in problem-solving. It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure. • Problem formulation: the most important step of problem-solving, which decides what actions should be taken to achieve the formulated goal.
  • 126. Road map: part of Romania
  • 128. Problem definition and formulation • Before we jump to finding an algorithm for evaluating the problem and searching for the solution, we first need to define and formulate the problem. • Problem formulation involves deciding what actions and states to consider, given the goal.
  • 129. • The initial state, the actions and the transition model together define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. • The road map shown earlier is a graphical representation of the state space of the travelling problem. • A path in the state space is a sequence of states connected by a sequence of actions. • A solution to the given problem is a sequence of actions from the initial state to a goal state. The quality of a solution is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
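A minimal sketch of such a problem formulation in Python, using a small fragment of the Romania road map; the class and method names are assumptions, and the distances follow the standard textbook map.

```python
class RouteProblem:
    """Initial state, actions, transition model, goal test and path cost."""

    # fragment of the Romania road map: city -> {neighbouring city: distance in km}
    road_map = {
        "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
        "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
        "Fagaras": {"Sibiu": 99, "Bucharest": 211},
        "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
        "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
        "Bucharest": {"Fagaras": 211, "Pitesti": 101},
        "Timisoara": {"Arad": 118},
        "Zerind": {"Arad": 75},
    }

    def __init__(self, initial="Arad", goal="Bucharest"):
        self.initial, self.goal = initial, goal

    def actions(self, state):
        return list(self.road_map[state])     # drive to any neighbouring city

    def result(self, state, action):
        return action                          # transition model: we end up in that city

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action):
        return self.road_map[state][action]    # road distance in km
```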
  • 130. Example problems • Toy problems – intended to illustrate or exercise various problem-solving methods – e.g., puzzles, chess, etc. • Real-world problems – tend to be more difficult, and their solutions are ones people actually care about – e.g., design, planning, etc.
  • 131. Toy problems • Example: the vacuum world • Number of states: 8 • Initial state: any • Number of actions: 4 (left, right, suck, noOp) • Goal: clean up all dirt (goal states: {7, 8}) • Path cost: each step costs 1
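A possible encoding of the vacuum world, assuming a state is written as (agent location, dirt in A, dirt in B), which gives the 2 × 2 × 2 = 8 states mentioned above; the function names are illustrative.

```python
def vacuum_result(state, action):
    """Transition model for Left, Right and Suck (anything else leaves the state unchanged)."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def vacuum_goal_test(state):
    """Goal: no dirt left in either square."""
    return not state[1] and not state[2]

print(vacuum_result(("A", True, True), "Suck"))   # ('A', False, True)
```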
  • 135. The 8-puzzle • States: – a state description specifies the location of each of the eight tiles and the blank in one of the nine squares • Initial state: – any state in the state space • Successor function: – the blank moves Left, Right, Up, or Down • Goal test: – the current state matches the goal configuration • Path cost: – each step costs 1, so the path cost is just the length of the path
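A sketch of the successor function for the 8-puzzle, assuming a state is a 9-tuple read row by row with 0 standing for the blank; this representation is an assumption made for illustration.

```python
def successors(state):
    """Yields (action, new_state) pairs obtained by sliding the blank."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
    for action, delta in moves.items():
        if action == "Up" and row == 0: continue
        if action == "Down" and row == 2: continue
        if action == "Left" and col == 0: continue
        if action == "Right" and col == 2: continue
        j = i + delta
        new = list(state)
        new[i], new[j] = new[j], new[i]        # swap the blank with the neighbouring tile
        yield action, tuple(new)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)            # blank in the centre
print([a for a, _ in successors(start)])       # ['Up', 'Down', 'Left', 'Right']
```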
  • 136. The 8-queens problem • There are two ways to formulate the problem. • Both share the following: – Goal test: 8 queens on the board, none attacking another – Path cost: zero
  • 137. The 8-queens problem • (1) Incremental formulation – involves operators that augment the state description, starting from an empty board – each action adds a queen to the state – States: • any arrangement of 0 to 8 queens on the board – Successor function: • add a queen to any empty square
  • 138. The 8-queens problem • (2) Complete-state formulation – starts with all 8 queens on the board and moves the queens individually around – States: • any arrangement of 8 queens, one per column – Operators: move an attacked queen to a row not attacked by any other queen
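A sketch of the shared goal test, assuming the complete-state representation in which state[c] gives the row of the queen in column c (so two queens can only clash on a row or a diagonal); the function name is illustrative.

```python
def not_attacking(state):
    """Goal test: no two queens share a row or a diagonal."""
    for c1 in range(len(state)):
        for c2 in range(c1 + 1, len(state)):
            same_row = state[c1] == state[c2]
            same_diag = abs(state[c1] - state[c2]) == c2 - c1
            if same_row or same_diag:
                return False
    return True

print(not_attacking((0, 4, 7, 5, 2, 6, 1, 3)))  # True: a valid 8-queens placement
print(not_attacking((0, 1, 2, 3, 4, 5, 6, 7)))  # False: all queens on one diagonal
```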
  • 139. Conclusion • The right formulation makes a big difference to the size of the search space.
  • 140. Terminology • Search: identifies the best possible sequence of actions to reach the goal state from the current state. It takes a problem as input and returns a solution as output. • Solution: the action sequence, chosen from the various possibilities, that leads from the initial state to the goal state; it may be proven to be the optimal solution. • Execution: executes the solution found by the search algorithm to reach the goal state from the current state.
  • 141. Searching for solutions • State space search: • Define the problem precisely (initial and goal states). • Analyse the problem (choose a technique). • Isolate and represent the task knowledge necessary to solve the problem. • Choose the best problem-solving technique and apply it to solve the particular problem.
  • 142. Searching for solutions • Finding a solution is done by searching through the state space. • All problems are transformed into a search tree generated by the initial state and the successor function.
  • 143. Tree search example • Initial state – the root of the search tree is a search node. • Expanding – applying the successor function to the current state, thereby generating a new set of states. • Leaf nodes – states having no successors. • The set of all leaf nodes available for expansion at a given point is called the frontier (open list).
  • 146. Search tree • The state space has unique states {A, B}, while a search tree may have cyclic paths: A-B-A-B-A-B-… • Such a path contains repeated states, generated by a loopy path. • This means that the search tree for Romania is infinite, even though the state space is limited. • These loopy paths cause some algorithms to fail, making the problem seem unsolvable. • In fact, a loopy path is a special case of a redundant path. • A good search strategy should avoid such paths.
  • 147. Note: the way to avoid exploring redundant paths is to remember where one has been. To do this, we augment the TREE-SEARCH algorithm with a data structure called the explored set (closed list), which remembers every expanded node.
  • 148. General Graph Search Algorithm
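A compact sketch of the graph-search idea (a frontier/open list plus an explored/closed list), here expanding nodes in breadth-first order and reusing the hypothetical RouteProblem class sketched earlier.

```python
from collections import deque

def graph_search(problem):
    """Breadth-first graph search: expand from the frontier, remember the explored set."""
    frontier = deque([(problem.initial, [])])   # open list of (state, path of actions so far)
    explored = set()                            # closed list of already-expanded states
    while frontier:
        state, path = frontier.popleft()
        if problem.goal_test(state):
            return path                         # sequence of actions from initial to goal state
        if state in explored:
            continue                            # skip redundant (e.g. loopy) paths
        explored.add(state)
        for action in problem.actions(state):
            frontier.append((problem.result(state, action), path + [action]))
    return None

print(graph_search(RouteProblem()))  # e.g. ['Sibiu', 'Fagaras', 'Bucharest']
```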
  • 149. Search tree • A node has five components: – n.STATE: the state in the state space that the node corresponds to – n.PARENT-NODE: the node from which it was generated – n.ACTION: the action applied to the parent node to generate it – n.PATH-COST: the cost, g(n), of the path from the initial state to node n – n.DEPTH: the number of steps along the path from the initial state
  • 150. Data structures/infrastructure for the searching algorithm
  • 151. Notice how the PARENT pointers string the nodes together into a tree structure. These pointers also allow the solution path to be extracted when a goal node is found; we use the SOLUTION function to return the sequence of actions obtained by following parent pointers back to the root.
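A minimal sketch of the node data structure and the SOLUTION function described above; the Python class layout is an assumption.

```python
class Node:
    """Search-tree node with the five components listed above."""

    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent            # node from which this one was generated
        self.action = action            # action applied to the parent to generate it
        self.path_cost = path_cost      # g(n): cost of the path from the initial state to n
        self.depth = 0 if parent is None else parent.depth + 1

def solution(node):
    """Follows PARENT pointers back to the root and returns the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

root = Node("Arad")
child = Node("Sibiu", parent=root, action="GoSibiu", path_cost=140)
print(solution(child))   # ['GoSibiu']
```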
  • 152. Performance measures for problem-solving algorithms • We can evaluate an algorithm's performance with these metrics: • Completeness: is the algorithm guaranteed to find a solution if one exists? • Optimality: does the algorithm find the optimal solution? • Time complexity: how long does it take for the algorithm to find a solution? • Space complexity: how much memory is consumed in finding the solution?
  • 153. Time is often measured in terms of the number of nodes generated during the search, and space in terms of the maximum number of nodes stored in memory.

Editor's Notes

  1. Heuristics: knowledge based on trial and error, evaluation, and experimentation.