UNIT I
Concept of AI
History
Current Status
Scope
Agents, Environments
Problem formulation
Review of Tree and Graph Structures
State Space Representation
Search Graph and Search Tree
Textbook
Artificial Intelligence: A Modern Approach (AIMA)
(Second Edition) by Stuart Russell and Peter Norvig
Concept of AI
What is Artificial Intelligence?
 Artificial Intelligence:
 build and understand intelligent entities
 Intelligence:
 “the capacity to learn and solve problems”
 the ability to act rationally
Two main dimensions:
 Thought processes vs behavior
 Human-like vs rational-like
Views of AI fall into four categories/approaches:
Thinking humanly Thinking rationally
Acting humanly Acting rationally
Acting Humanly: Turing Test
(Can Machine think? A. M. Turing, 1950)
AI system passes if interrogator cannot tell
which one is the machine.
Acting humanly: Turing Test
To pass the test, the computer needs to possess
 Natural Language Processing – to communicate successfully with the human interrogator;
 Knowledge Representation – to store and manipulate information;
 Automated Reasoning – to use the stored information to answer questions and draw new conclusions;
 Machine Learning – to adapt to new circumstances and to detect and extrapolate patterns.
The Turing test thus identified key research areas in AI.
Total Turing Test:
 To pass the Total Turing Test, the computer also needs
 Computer vision – to perceive objects
 Robotics – to manipulate objects and move about.
Thinking humanly: cognitive modeling
Requires scientific theories of internal activities
of the brain; How to validate?
1) Cognitive Science (top-down) →
Predicting and testing behavior of human
subjects
– computer models + experimental
techniques from psychology
2) Cognitive Neuroscience (bottom-up) →
Direct identification from neurological data
Thinking rationally: "laws of thought"
Proposed by Aristotle;
Given the correct premises, it yields the correct
conclusion
Socrates is a man
All men are mortal
--------------------------
Therefore Socrates is mortal
Logic → making the right inferences!
Acting rationally: rational agent
An agent is anything that can be viewed
as perceiving its environment through
sensors and acting upon that
environment through actuators.
Rational behavior: doing the right thing;
that which is expected to maximize goal
achievement, given the available
information;
Foundations of AI
 Philosophy logic, methods of reasoning, mind vs. matter,
foundations of learning and knowledge
 Mathematics logic, probability, computation
 Economics utility, decision theory
 Neuroscience biological basis of intelligence (how does the brain process information?)
 Psychology computational models of human intelligence (how humans and animals think and act)
 Computer engineering how to build efficient computers?
 Linguistics rules of language, language acquisition (how does language relate to thought?)
 Control theory design of dynamical systems that use a controller to achieve desired behavior
History
 1943 McCulloch & Pitts “Boolean circuit model of brain”
 1950 Turing’s “Computing Machinery and Intelligence”
 1951 Minsky and Edmonds
• Built a neural net computer SNARC
• Used 3000 vacuum tubes and 40 neurons
The Birthplace of
“Artificial Intelligence”, 1956
 1956 Dartmouth meeting: “Artificial
Intelligence” adopted
 1956 Newell and Simon's Logic Theorist (LT) – proved theorems.
Early enthusiasm, great
expectations (1952-1969)
 GPS – Newell and Simon – thinks like humans (1957)
 Samuel's checkers program that learns (1952)
 McCarthy – Lisp (1958)
 Geometry Theorem Prover – Gelernter (1959)
 Robinson's resolution (1965)
 Slagle's SAINT solves calculus problems (1963)
 Daniel Bobrow's STUDENT program solved algebra story problems (1964)
 1968 – Tom Evans's ANALOGY program solved geometric analogy problems that appear in IQ tests
 1966-1974 a dose of reality
 Problems with computation
 1969 :Minsky and Papert Published the book Perceptrons,
demonstrating the limitations of neural networks.
 1969-1979 Knowledge-based systems
 1969:Dendral:Inferring molecular structures
Mycin: diagnosing blood infections
Prolog Language PLANNER became popular
Minsky developed frames as a representation and reasoning
language.
 1980-present: AI becomes an industry
 The Japanese government announced the Fifth Generation project to build intelligent computers
 AI Winter – companies failed to deliver on extravagant promises
 1986-present: return of neural networks
Much research on neural networks was done by psychologists
 1987-present: AI becomes a science
 HMMs, planning, belief networks
Emergence of intelligent agents (1995-present)
o The agent architecture SOAR was developed
o The agents' environment is the Internet
o Web-based applications, search engines, recommender systems, websites
Current Status
Scope
Agents, Environments
Intelligent Agents
 Agents and environments
 Rationality
 Nature of Environments
 Structure of Agents
Agents
 An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators
 Human agent:
sensors- eyes, ears, and other organs
actuators- hands, legs, mouth, and
other body parts
 Robotic agent:
Sensors - cameras and infrared range
finders
Actuators - motors
 an agent perceives its environment through
sensors
 the complete set of inputs at a given time is
called a percept
the current percept, or a sequence of
percepts, may influence the actions of an
agent – the percept sequence
 The agent function maps from percept histories to
actions: f: P* → A. The agent function is an
abstract mathematical description.
 The agent function is implemented by an agent
program. The agent program is a concrete
implementation running on the agent
architecture.
Vacuum-cleaner world
 Percepts:
Location and status,
e.g., [A,Dirty]
 Actions:
Left, Right, Suck, NoOp
Example vacuum agent program:
function Vacuum-Agent([location,status]) returns an action
 if status = Dirty then return Suck
 else if location = A then return Right
 else if location = B then return Left
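As a concrete illustration, here is a minimal runnable Python rendering of this agent program (the location names 'A' and 'B' and the percept format follow the slide; the function name is mine):

def vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ('A', 'Dirty')
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(vacuum_agent(('A', 'Dirty')))  # -> Suck
print(vacuum_agent(('A', 'Clean')))  # -> Right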
Rationality
 A rational agent is one that does the right
thing. Every entry in the table for the agent
function is filled out correctly.
 It is based on
 performance measure
 percept sequence
 background knowledge
 feasible actions
Omniscience, Learning and
Autonomy
 an omniscient agent deals with the actual
outcome of its actions
 a rational agent deals with the expected
outcome of actions
 a rational agent not only gathers information
but also learns as much as possible from the
percepts it receives.
 a rational agent should be autonomous – it
should learn what it can to compensate
for partial or incorrect prior knowledge.
Nature of Environments
Specifying the task environment
 Before we design an intelligent agent, we must specify its “task
environment”:
 Problem specification: Performance measure, Environment,
Actuators, Sensors (PEAS)
Example of Agent Types and their PEAS description:
 Example: automated taxi driver
 Performance measure
• Safe, fast, legal, comfortable trip, maximize profits
 Environment
• Roads, other traffic, pedestrians, customers
 Actuators
• Steering wheel, accelerator, brake, signal, horn
 Sensors
• Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard
 Example: Agent = Medical diagnosis system
Performance measure: Healthy patient, minimize costs,
lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (questions, tests, diagnoses,
treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings,
patient's answers)
 Example: Agent = Part-picking robot
Performance measure: Percentage of parts in correct
bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
 Example: Agent = Interactive English tutor
Performance measure: Maximize student's score on test
Environment: Set of students
Actuators: Screen display (exercises, suggestions, corrections)
Sensors: Keyboard
Example: Agent = Satellite image system
Performance measure: Correct image categorization
Environment: Downlink from satellite
Actuators: Display categorization of scene
Sensors: Color pixel array
Properties of Task Environment
 Fully observable (vs. partially observable): The agent's sensors
give it access to the complete state of the environment at each point
in time
e.g. an automated taxi does not have sensors to see what other drivers are
doing or thinking.
 Deterministic (vs. stochastic): The next state of the environment is
completely determined by the current state and the agent’s action
 Strategic: the environment is deterministic except for the actions
of other agents
e.g. the vacuum world is deterministic, while taxi driving is stochastic –
one cannot exactly predict the behaviour of traffic
 Episodic (vs. sequential): The agent's experience is divided into
atomic “episodes,” and the choice of action in each episode depends
only on the episode itself
 E.g. an agent sorting defective parts in an assembly line is episodic
while a taxi driving agent or a chess playing agent are sequential ….
 Static (vs. dynamic): The environment is unchanged while an agent is
deliberating
 Semidynamic: the environment does not change with the passage
of time, but the agent's performance score does
e.g. taxi driving is dynamic, a crossword puzzle solver is static, chess
played with a clock is semidynamic
 Discrete (vs. continuous): The environment provides a fixed number of
distinct percepts, actions, and environment states
e.g. chess game has finite number of states
• Taxi Driving is continuous-state and continuous-time problem …
 Single agent (vs. multi-agent): An agent operating by itself in an
environment
e.g. An agent solving a crossword puzzle is in a single agent
environment
• Agent in chess playing is in two-agent environment
task environment          observable  determ./stochastic  episodic/sequential  static/dynamic  discrete/continuous  agents
crossword puzzle          fully       determ.             sequential           static          discrete             single
chess with clock          fully       strategic           sequential           semi            discrete             multi
poker                     partial     stochastic          sequential           static          discrete             multi
backgammon                fully       stochastic          sequential           static          discrete             multi
taxi driving              partial     stochastic          sequential           dynamic         continuous           multi
medical diagnosis         partial     stochastic          sequential           dynamic         continuous           single
image analysis            fully       determ.             episodic             semi            continuous           single
part-picking robot        partial     stochastic          episodic             dynamic         continuous           single
refinery controller       partial     stochastic          sequential           dynamic         continuous           single
interactive English tutor partial     stochastic          sequential           dynamic         discrete             multi
Structure of Agents
 An agent is completely specified by the agent function
mapping percept sequences to actions.
 The agent program implements the agent function,
mapping percept sequences to actions.
Agent = architecture + program.
Architecture = some sort of computing device with physical
sensors and actuators.
 The aim of AI is to design the agent program
Table-Driven agent
Function Table-Driven-Agent(percept)
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <- Lookup(percepts, table)
  return action
The table agent program is invoked for each new percept and returns an action
each time. It keeps track of percept sequences using its own private data structure.
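A sketch of the same table-driven scheme in Python; the closure keeps the percept history as its private data structure, and the table contents below are invented purely for illustration:

def make_table_driven_agent(table):
    percepts = []  # private percept history, initially empty
    def agent(percept):
        percepts.append(percept)
        # look up the action indexed by the entire percept sequence
        return table.get(tuple(percepts))
    return agent

# Illustrative (partial) table for the two-square vacuum world
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
}
agent = make_table_driven_agent(table)
print(agent(('A', 'Clean')))  # -> Right
print(agent(('B', 'Dirty')))  # -> Suck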
Table-lookup agent
 Drawbacks:
 Huge table
 Take a long time to build the table
 No autonomy
 Even with learning, need a long time to learn
the table entries.
 Example: let P be the set of possible percepts and T be the
lifetime of the agent (the total number of percepts it will receive);
then the lookup table will contain on the order of |P|^T entries.
 The table for the vacuum agent (VA) would contain more than 4^T
entries (the VA has 4 possible percepts).
 Four basic kinds of agent program are
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
All of these can be turned into learning agents
Simple reflex agents
 Single current percept: the agent selects an action on the
basis of the current percept, ignoring the rest of the percept history.
 Example: the vacuum agent (VA) is a simple reflex agent,
because its decision is based only on the current location and
on whether it contains dirt.
 Rules relate
 “State” based on percept
 “action” for agent to perform
 “Condition-action” rule:
If a then b: e.g.
vacuum agent (VA) : if in(A) and dirty(A), then vacuum
taxi driving agent (TA): if car-in-front-is-braking then initiate-braking.
Agent program for a simple reflex agent
The vacuum agent program is very small compared to the corresponding
table: it cuts down the number of possibilities from 4^T to 4. This reduction
comes from ignoring the percept history.
Simple reflex agent Program
Function Simple-Reflex-Agent(percept)
  static: rules, a set of condition-action rules
  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A simple reflex agent. It acts according to the rule whose condition matches the
current state, as defined by the percept.
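The same structure in Python, as a hedged sketch: rules are (condition, action) pairs, and Interpret-Input degenerates to the identity because a vacuum percept already describes the state:

def interpret_input(percept):
    return percept  # vacuum percepts already describe the state

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    for condition, action in rules:  # first rule whose condition matches wins
        if condition(state):
            return action
    return 'NoOp'

rules = [
    (lambda s: s[1] == 'Dirty', 'Suck'),
    (lambda s: s[0] == 'A', 'Right'),
    (lambda s: s[0] == 'B', 'Left'),
]
print(simple_reflex_agent(('B', 'Dirty'), rules))  # -> Suck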
Schematic diagram of a simple reflex agent (example: the vacuum-cleaner world),
showing the current state of the decision process from percept to condition-action
rule. Such an agent has limited intelligence and fails if the environment is only
partially observable.
Simple reflex agents
 Simple but very limited intelligence.
 Action does not depend on percept history, only on the current percept.
 Therefore no memory requirements.
 Infinite loops
 Suppose the vacuum cleaner does not observe its location. What should it
do given the percept [Clean]? Moving Left is useless in square A, and moving
Right is useless in square B → infinite loop.
 Possible solution: randomize actions.
Model-based reflex agents
 Solution to partial observability problems
 Maintain state
• Keep track of the parts of the world it can't see now
• Maintain internal state that depends on the percept history
 Update the previous state based on
• Knowledge of how the world changes, e.g. TA: an overtaking car
generally will be closer behind than it was a moment ago.
• Knowledge of the effects of its own actions, e.g. TA: when the agent
turns the steering wheel clockwise, the car turns to the right.
• => This model, called the "model of the world", implements the
knowledge about how the world works.
Schematic diagram of a model-based reflex agent. It models the world by
modeling how the world changes and how its actions change the world, and it
maintains a description of the current world state. Even so, it is sometimes
unclear what to do without a clear goal.
Model-based reflex agents
Function Model-Based-Reflex-Agent(percept)
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- Update-State(state, action, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A model-based reflex agent. It keeps track of the current state of the world using
an internal model. It then chooses an action in the same way as the reflex agent.
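One possible Python rendering, as a sketch; the update_state argument stands in for whatever "model of the world" the designer supplies:

class ModelBasedReflexAgent:
    def __init__(self, update_state, rules):
        self.state = None                 # description of the current world state
        self.action = None                # most recent action, initially none
        self.update_state = update_state  # the model of the world
        self.rules = rules                # condition-action rules

    def __call__(self, percept):
        # fold the last action and the new percept into the internal state
        self.state = self.update_state(self.state, self.action, percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.action = action
                return action
        self.action = 'NoOp'
        return self.action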
Goal-based agents
• Knowing the state and environment – is that enough?
– The taxi can go left, right, or straight
• Have a goal
 A destination to get to
 Uses knowledge about a goal to guide its actions
 E.g., search, planning
 Goal-based agents are much more flexible in
responding to a changing environment and in
accepting different goals.
Goal-based agents
Goals provide a reason to prefer one action over another.
We need to predict the future: we need to plan & search
• A reflex agent brakes when it sees brake lights. A goal-based agent reasons:
– brake light → car in front is stopping → I should stop → I should apply the brake
Utility-based agents
 Goals are not always enough
 Many action sequences get the taxi to its destination
 Consider other things: how fast, how safe, ...
 A utility function maps a state onto a real
number which describes the associated degree
of “happiness”, “goodness”, “success”.
 Where does the utility measure come from?
 Economics: money.
 Biology: number of offspring.
 Your life?
Utility-based agents
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
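As a sketch of the idea, a utility-based chooser ranks the predicted outcomes of candidate actions by a real-valued function; the state fields and weights below are invented purely for illustration:

def taxi_utility(state):
    # toy utility: trade progress off against risk and fuel (weights invented)
    return 2.0 * state['progress'] - 5.0 * state['risk'] - 1.0 * state['fuel']

def best_action(state, actions, predict):
    # predict(state, action) returns the expected resulting state
    return max(actions, key=lambda a: taxi_utility(predict(state, a)))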
Learning agents
How does an agent improve over time?
By monitoring its performance and suggesting
better modeling, new action rules, etc.
Schematic: the critic evaluates the current world state; the learning element
changes the action rules and suggests explorations; the performance element
(the "old agent") models the world and decides on the actions to be taken.
Learning agents can be divided into 4 conceptual
components:
1. The learning element is responsible for improvements.
2. The performance element is responsible for selecting
external actions (using previous knowledge).
3. The critic tells the learning element how well the
agent is doing with respect to a fixed performance
standard.
4. The problem generator is responsible for suggesting
actions that will lead to new and informative
experiences.
Example: automated taxi driving
•The performance element consists of whatever collection of knowledge and
procedures the TA has for selecting its driving actions.
•The critic observes the world and passes information along to the learning
element. For example, after the taxi makes a quick left turn across three lanes,
the critic observes the shocking language used by other drivers. From this
experience the learning element is able to formulate a rule saying this was a
bad action, and the performance element is modified by installing the new rule.
•The problem generator may identify certain areas of behavior in need of
improvement and suggest experiments, such as testing the brakes on different
road surfaces under different conditions.
•The learning element can make changes to any of the knowledge components of
the previous agent types: how the world evolves (observations between two states)
and what its actions do (observations of the results of actions) – e.g., learning
what happens when the brake is applied hard on a wet road.
Problem Formulation
Problem Solving agents
Example problems
Searching for solutions
Problem-solving agents:
1. Goal formulation: a set of one or more (desirable) world
states.
2. Problem formulation: what actions and states to
consider, given a goal and an initial state.
3. Search for a solution: given the problem, search for a
solution – a sequence of actions to achieve the goal
starting from the initial state.
4. Execution of the solution.
Example: Path Finding problem
 Formulate goal:
 be in Bucharest
(Romania)
 Formulate problem:
 action: drive between a pair of connected cities (direct road)
 state: be in a city
(20 world states)
 Find solution:
 sequence of cities
leading from start to
goal state, e.g., Arad,
Sibiu, Fagaras,
Bucharest
 Execution
 drive from Arad to
Bucharest according
to the solution
Initial state: Arad. Goal state: Bucharest.
Environment: fully observable (map),
deterministic, and the agent knows the effects
of each action.
Well-defined problems and
solutions
A problem can be defined by 4 components:
1. Initial state: the starting point from which the agent sets out.
2. Operators: descriptions of the available actions.
State space: all states reachable from the initial state by any
sequence of actions.
Path: a sequence of actions leading from one state to another.
3. Goal test: determines whether a given state is the goal state.
4. Path cost function: assigns a numeric cost to each path.
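These four components map naturally onto a small data structure; a minimal sketch (the field names are mine, not from the text):

class Problem:
    def __init__(self, initial_state, successors, goal_test, path_cost):
        self.initial_state = initial_state
        self.successors = successors  # operators: state -> iterable of (action, next_state, step_cost)
        self.goal_test = goal_test    # state -> bool
        self.path_cost = path_cost    # sequence of actions -> numeric cost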
Example Problems
 Toy problems
 Illustrate/test various problem-solving methods
 Concise, exact description
 Can be used to compare performance
 Examples: 8-puzzle, 8-queens problem, Cryptarithmetic,
Vacuum world, Missionaries and cannibals.
 Real-world problem
 More difficult
 No single, agreed-upon specification (state, successor function,
edge cost)
 Examples: Route finding, VLSI layout, Robot navigation,
Assembly sequencing
Toy problems:
Simple Vacuum World
 states
 two locations
 dirty, clean
 initial state
 any legitimate state
 successor function (operators)
 left, right, suck
 goal test
 all squares clean
 path cost
 one unit per action
Properties: discrete locations, discrete dirt (binary), deterministic
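A sketch of this formulation in Python, encoding each of the 8 states as (location, dirt_in_A, dirt_in_B); every action costs one unit, as above:

def vacuum_successors(state):
    loc, dirt_a, dirt_b = state
    yield ('Left',  ('A', dirt_a, dirt_b))  # Left always ends in square A
    yield ('Right', ('B', dirt_a, dirt_b))  # Right always ends in square B
    if loc == 'A':
        yield ('Suck', ('A', False, dirt_b))
    else:
        yield ('Suck', ('B', dirt_a, False))

def vacuum_goal_test(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b  # all squares clean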
The 8-puzzle
[Note: finding an optimal solution for the n-puzzle family is NP-hard]
8-Puzzle
 states
 location of tiles (including blank tile)
 initial state
 any legitimate configuration
 successor function (operators)
 move tile
 alternatively: move blank
 goal test
 state matches the given goal configuration
 path cost
 one unit per move
Properties: abstraction leads to discrete configurations, discrete moves, deterministic
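A sketch of the "move blank" successor function, representing the board as a 9-tuple read row by row with 0 for the blank:

def successors_8puzzle(state):
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    for name, delta in (('Up', -3), ('Down', 3), ('Left', -1), ('Right', 1)):
        if name == 'Left' and col == 0:
            continue                   # blank already in the leftmost column
        if name == 'Right' and col == 2:
            continue                   # blank already in the rightmost column
        j = i + delta
        if j < 0 or j > 8:
            continue                   # would move off the top or bottom edge
        board = list(state)
        board[i], board[j] = board[j], board[i]  # slide the tile into the blank
        yield (name, tuple(board))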
8-Queens
 incremental formulation
 states
• arrangement of up to 8 queens
on the board
 initial state
• empty board
 successor function (operators)
• add a queen to any square
 goal test
• all queens on board
• no queen attacked
 path cost
• irrelevant (all solutions equally
valid)
 complete-state formulation
 states
 arrangement of 8 queens on the board
 initial state
 all 8 queens on board
 successor function (operators)
 move a queen to a different square
 goal test
 no queen attacked
 path cost
 irrelevant (all solutions equally valid)
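The "no queen attacked" test is shared by both formulations; a sketch in Python, representing a state as a tuple giving each placed queen's column, one entry per row (so queens never share a row):

def no_queen_attacked(columns):
    # columns[r] is the column of the queen in row r
    for r1 in range(len(columns)):
        for r2 in range(r1 + 1, len(columns)):
            if columns[r1] == columns[r2]:                 # same column
                return False
            if abs(columns[r1] - columns[r2]) == r2 - r1:  # same diagonal
                return False
    return True

def goal_test_8queens(columns):
    return len(columns) == 8 and no_queen_attacked(columns)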
Real-world problems
 Route finding
 Defined in terms of locations and transitions along links between
them
 Applications: routing in computer networks, automated travel
advisory systems, airline travel planning systems
 states
 locations
 initial state
 starting point
 successor function (operators)
 move from one location to another
 goal test
 arrive at a certain location
 path cost
 may be quite complex
• money, time, travel comfort, scenery, ...
 Touring and traveling salesperson problems
 “Visit every city on the map at least once”
 Needs information about the visited cities
 Goal: Find the shortest tour that visits all cities
 NP-hard, but a lot of effort has been spent on improving the
capabilities of TSP algorithms
 Applications: planning movements of automatic circuit board drills
 VLSI layout
 positioning millions of components and connections on a chip to
minimize area, circuit delays, etc.
 Place cells on a chip so they don’t overlap and there is room for
connecting wires to be placed between the cells
 Robot navigation
 Generalization of the route finding problem
• No discrete set of routes
• Robot can move in a continuous space
• Infinite set of possible actions and states
 Assembly sequencing
 Automatic assembly of complex objects
 The problem is to find an order in which to assemble
the parts of some object
 Protein design
 find a sequence of amino acids that will fold into a
three-dimensional protein with the right properties to cure some
disease.
Searching for Solutions/
Search Graph & Search Tree
Search through the state space.
We will consider search techniques that use an
explicit search tree that is generated by the initial state and
successor function.
Search tree example: a node is selected for
expansion, its successors are added to the tree,
and the process repeats.
Note: Arad is added (again) to the tree, since it
is reachable from Sibiu. This is not necessarily a
problem, but in Graph-Search we will avoid it by
maintaining an "explored" list.
An informal description of the
general tree search algorithm:
initialize (initial node)
loop
  choose a node for expansion according to the strategy
  goal node? → done
  expand the node with the successor function
states vs. nodes
 A state is a (representation of) a physical configuration.
 A node is a data structure with 5 components: state, parent node, action,
path cost, depth.
General Tree Search Algorithm
function TREE-SEARCH(problem, fringe) returns solution
fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
loop do
if EMPTY?(fringe) then return failure
node := REMOVE-FIRST(fringe)
if GOAL-TEST[problem] applied to STATE[node] succeeds
then return SOLUTION(node)
fringe := INSERT-ALL(EXPAND(node, problem), fringe)
 generate the node from the initial state of the problem
 repeat
 return failure if there are no more nodes in the fringe
 examine the current node; if it’s a goal, return the solution
 expand the current node, and add the new nodes to the fringe
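A runnable Python sketch of TREE-SEARCH, assuming the fringe is a FIFO queue (so this particular instance behaves breadth-first) and that the problem object exposes initial_state, goal_test, and a successors(state) iterator of (action, next_state, step_cost) triples, as in the Problem sketch above. Each node carries the five components listed earlier:

from collections import deque

def tree_search(problem):
    # node = (state, parent_node, action, path_cost, depth)
    fringe = deque([(problem.initial_state, None, None, 0, 0)])
    while fringe:
        node = fringe.popleft()        # REMOVE-FIRST (FIFO here)
        state, parent, action, cost, depth = node
        if problem.goal_test(state):
            return solution(node)
        for act, nxt, step in problem.successors(state):   # EXPAND
            fringe.append((nxt, node, act, cost + step, depth + 1))
    return None                        # failure

def solution(node):
    actions = []
    while node[1] is not None:         # walk parent links back to the root
        actions.append(node[2])
        node = node[1]
    return list(reversed(actions))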
Measuring problem-solving performance
An algorithm's performance can be evaluated in 4 ways:
1. Completeness: does it always find a solution if one exists?
2. Time complexity: how long does it take to find a solution?
3. Space complexity: how much memory does it need to perform the
search?
4. Optimality: does the strategy find the optimal solution?
 Time and space complexity are measured in terms of
 b: branching factor (maximum number of successors of any node) of the
search tree
 d: depth of the shallowest goal node
 m: maximum length of any path in the state space (may be ∞)
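A worked example of what these measures imply: with branching factor b = 10 and the shallowest goal at depth d = 5, a search that generates every node down to the goal depth produces on the order of b + b^2 + b^3 + b^4 + b^5 = 10 + 100 + 1,000 + 10,000 + 100,000 = 111,110 nodes, i.e. time (and, for strategies that keep the whole fringe in memory, space) grows as O(b^d).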

More Related Content

Similar to artificial Intelligence unit1 ppt (1).ppt

Lect 1_Introduction to AI and ML.pdf
Lect 1_Introduction to AI and ML.pdfLect 1_Introduction to AI and ML.pdf
Lect 1_Introduction to AI and ML.pdfgadissaassefa
 
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptDaliaMagdy12
 
Robotics and agents
Robotics and agentsRobotics and agents
Robotics and agentsritahani
 
Artificial intelligence_ class 12 KATHIR.pptx
Artificial intelligence_ class 12  KATHIR.pptxArtificial intelligence_ class 12  KATHIR.pptx
Artificial intelligence_ class 12 KATHIR.pptxvaradharajjayakumarv
 
Artificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.pptArtificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.pptsagarvbrvbr
 
Artificial intelligence introduction
Artificial intelligence  introductionArtificial intelligence  introduction
Artificial intelligence introductionS&P Capital IQ
 
Artificial intelligence Information and Introduction
Artificial intelligence Information and IntroductionArtificial intelligence Information and Introduction
Artificial intelligence Information and IntroductionDipen Vasoya
 
Artificial intelligence introduction
Artificial intelligence  introductionArtificial intelligence  introduction
Artificial intelligence introductionParneet Kaur
 
AI_01_introduction.pptx
AI_01_introduction.pptxAI_01_introduction.pptx
AI_01_introduction.pptxYousef Aburawi
 
Artificial Intelligence Introduction
Artificial Intelligence Introduction Artificial Intelligence Introduction
Artificial Intelligence Introduction Kaushlendra Rajput
 
UNIT1-AI final.pptx
UNIT1-AI final.pptxUNIT1-AI final.pptx
UNIT1-AI final.pptxCS50Bootcamp
 
Artificial intelligence introduction
Artificial intelligence  introduction Artificial intelligence  introduction
Artificial intelligence introduction San1705
 
Artificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.pptArtificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.pptSaurabhUpadhyay874937
 

Similar to artificial Intelligence unit1 ppt (1).ppt (20)

Ai u1
Ai u1Ai u1
Ai u1
 
AIES Unit I(2022).pptx
AIES Unit I(2022).pptxAIES Unit I(2022).pptx
AIES Unit I(2022).pptx
 
unit 1.pptx
unit 1.pptxunit 1.pptx
unit 1.pptx
 
Lect 1_Introduction to AI and ML.pdf
Lect 1_Introduction to AI and ML.pdfLect 1_Introduction to AI and ML.pdf
Lect 1_Introduction to AI and ML.pdf
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.pptEELU AI  lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
EELU AI lecture 1- fall 2022-2023 - Chapter 01- Introduction.ppt
 
Robotics and agents
Robotics and agentsRobotics and agents
Robotics and agents
 
Artificial intelligence_ class 12 KATHIR.pptx
Artificial intelligence_ class 12  KATHIR.pptxArtificial intelligence_ class 12  KATHIR.pptx
Artificial intelligence_ class 12 KATHIR.pptx
 
Artificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.pptArtificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.ppt
 
Artificial intelligence introduction
Artificial intelligence  introductionArtificial intelligence  introduction
Artificial intelligence introduction
 
Artificial intelligence Information and Introduction
Artificial intelligence Information and IntroductionArtificial intelligence Information and Introduction
Artificial intelligence Information and Introduction
 
Artificial intelligence introduction
Artificial intelligence  introductionArtificial intelligence  introduction
Artificial intelligence introduction
 
AI_01_introduction.pptx
AI_01_introduction.pptxAI_01_introduction.pptx
AI_01_introduction.pptx
 
Artificial Intelligence Introduction
Artificial Intelligence Introduction Artificial Intelligence Introduction
Artificial Intelligence Introduction
 
chapter2.ppt
chapter2.pptchapter2.ppt
chapter2.ppt
 
chapter2.ppt
chapter2.pptchapter2.ppt
chapter2.ppt
 
UNIT1-AI final.pptx
UNIT1-AI final.pptxUNIT1-AI final.pptx
UNIT1-AI final.pptx
 
Artificial intelligence introduction
Artificial intelligence  introduction Artificial intelligence  introduction
Artificial intelligence introduction
 
Artificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.pptArtificial Intelligence- Introduction.ppt
Artificial Intelligence- Introduction.ppt
 
Ai applications study
Ai applications  studyAi applications  study
Ai applications study
 

More from Ramya Nellutla

Deep network notes.pdf
Deep network notes.pdfDeep network notes.pdf
Deep network notes.pdfRamya Nellutla
 
pentration testing.pdf
pentration testing.pdfpentration testing.pdf
pentration testing.pdfRamya Nellutla
 
- Social Engineering Unit- II Part- I.pdf
- Social Engineering Unit- II Part- I.pdf- Social Engineering Unit- II Part- I.pdf
- Social Engineering Unit- II Part- I.pdfRamya Nellutla
 
Unit-3-Part-1 [Autosaved].ppt
Unit-3-Part-1 [Autosaved].pptUnit-3-Part-1 [Autosaved].ppt
Unit-3-Part-1 [Autosaved].pptRamya Nellutla
 
E5-roughsets unit-V.pdf
E5-roughsets unit-V.pdfE5-roughsets unit-V.pdf
E5-roughsets unit-V.pdfRamya Nellutla
 
Unit-II -Soft Computing.pdf
Unit-II -Soft Computing.pdfUnit-II -Soft Computing.pdf
Unit-II -Soft Computing.pdfRamya Nellutla
 
SC01_IntroductionSC-Unit-I.ppt
SC01_IntroductionSC-Unit-I.pptSC01_IntroductionSC-Unit-I.ppt
SC01_IntroductionSC-Unit-I.pptRamya Nellutla
 
- Fuzzy Systems -II.pptx
- Fuzzy Systems -II.pptx- Fuzzy Systems -II.pptx
- Fuzzy Systems -II.pptxRamya Nellutla
 

More from Ramya Nellutla (12)

Deep network notes.pdf
Deep network notes.pdfDeep network notes.pdf
Deep network notes.pdf
 
pentration testing.pdf
pentration testing.pdfpentration testing.pdf
pentration testing.pdf
 
Deep Learning.pptx
Deep Learning.pptxDeep Learning.pptx
Deep Learning.pptx
 
Unit-I PPT.pdf
Unit-I PPT.pdfUnit-I PPT.pdf
Unit-I PPT.pdf
 
- Social Engineering Unit- II Part- I.pdf
- Social Engineering Unit- II Part- I.pdf- Social Engineering Unit- II Part- I.pdf
- Social Engineering Unit- II Part- I.pdf
 
Datamodels.pptx
Datamodels.pptxDatamodels.pptx
Datamodels.pptx
 
Unit-3-Part-1 [Autosaved].ppt
Unit-3-Part-1 [Autosaved].pptUnit-3-Part-1 [Autosaved].ppt
Unit-3-Part-1 [Autosaved].ppt
 
E5-roughsets unit-V.pdf
E5-roughsets unit-V.pdfE5-roughsets unit-V.pdf
E5-roughsets unit-V.pdf
 
Unit-3.pptx
Unit-3.pptxUnit-3.pptx
Unit-3.pptx
 
Unit-II -Soft Computing.pdf
Unit-II -Soft Computing.pdfUnit-II -Soft Computing.pdf
Unit-II -Soft Computing.pdf
 
SC01_IntroductionSC-Unit-I.ppt
SC01_IntroductionSC-Unit-I.pptSC01_IntroductionSC-Unit-I.ppt
SC01_IntroductionSC-Unit-I.ppt
 
- Fuzzy Systems -II.pptx
- Fuzzy Systems -II.pptx- Fuzzy Systems -II.pptx
- Fuzzy Systems -II.pptx
 

Recently uploaded

Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024Mark Billinghurst
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxbritheesh05
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineeringmalavadedarshan25
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learningmisbanausheenparvam
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 

Recently uploaded (20)

Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024IVE Industry Focused Event - Defence Sector 2024
IVE Industry Focused Event - Defence Sector 2024
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 

artificial Intelligence unit1 ppt (1).ppt

  • 1. UNIT I Concept of AI History Current Status Scope Agents,Environments Problem formulation Review of Tree and Graph Structures State Space Representation Search Graph and Search Tree
  • 2. Textbook Artificial Intelligence: A Modern Approach (AIMA) (Second Edition) by Stuart Russell and Peter Norvig
  • 3. Concept of AI What is Artificial Intelligence?  Artificial Intelligence:  build and understand intelligent entities  Intelligence:  “the capacity to learn and solve problems”  the ability to act rationally Two main dimensions:  Thought processes vs behavior  Human-like vs rational-like
  • 4. Views of AI fall into four categories/approaches: Thinking humanly Thinking rationally Acting humanly Acting rationally
  • 5. Acting Humanly:Turing Test (Can Machine think? A. M. Turing, 1950) AI system passes if interrogator cannot tell which one is the machine.
  • 6. Acting humanly: Turing Test To pass the test, computer need to possess  Natural Language Processing – to communicate with the machine;  Knowledge Representation – to store and manipulate information;  Automated reasoning – to use the stored information to answer questions and draw new conclusions;  Machine Learning – to adapt to new circumstances and to detect and extrapolate patterns. Turing test  identified key research areas in AI:
  • 7. Total Turing Test:  To pass the Total Turing Test, the computer needs,  Computer vision –to perceive objects  Robotics-manipulate objects and move about.
  • 8. Thinking humanly: cognitive modeling Requires scientific theories of internal activities of the brain; How to validate? 1) Cognitive Science (top-down)  Predicting and testing behavior of human subjects – computer models + experimental techniques from psychology 2) Cognitive Neuroscience (bottom-up)  Direct identification from neurological data
  • 9. Thinking rationally: "laws of thought“ Proposed by Aristotle; Given the correct premises, it yields the correct conclusion Socrates is a man All men are mortal -------------------------- Therefore Socrates is mortal Logic  Making the right inferences!
  • 10. Acting rationally: rational agent An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. Rational behavior: doing the right thing; that which is expected to maximize goal achievement, given the available information;
  • 11. Foundations of AI  Philosophy logic, methods of reasoning, mind vs. matter, foundations of learning and knowledge  Mathematics logic, probability, computation  Economics utility, decision theory  Neuroscience biological basis of intelligence(how brain process information?)  Psychology computational models of human intelligence(how humans and animals think and act)  Computer engineering how to build efficient computers?  Linguistics rules of language, language acquisition(how does language relate to thought)  Control theory design of dynamical systems that use controller to achieve desired behavior
  • 12. History  1943 McCulloch & Pitts “Boolean circuit model of brain”  1950 Turing’s “Computing Machinery and Intelligence”  1951 Minsky and Edmonds • Built a neural net computer SNARC • Used 3000 vacuum tubes and 40 neurons
  • 13. The Birthplace of “Artificial Intelligence”, 1956  1956 Dartmouth meeting: “Artificial Intelligence” adopted  1956 Newell and Simon Logic theorist (LT)- proves theorem.
  • 14. Early enthusiasm,great expectations (1952-1969)  GPS- Newell and Simon-thinks like humans(1952)  Samuel Checkers that learns (1952)  McCarthy - Lisp (1958),  Geometry theorem prover - Gelernter (1959)  Robinson’s resolution(1963)  Slagles – SAINT solves calculus problems(1963).  Daniel Bobrows Student program solved algebra story problems(1964).  1968- TomEvans Analogy program solved geometric analogy problems that appear in IQ test.
  • 15. 271- Fall 2008  1966-1974 a dose of reality  Problems with computation  1969 :Minsky and Papert Published the book Perceptrons, demonstrating the limitations of neural networks.  1969-1979 Knowledge-based systems  1969:Dendral:Inferring molecular structures Mycin: diagnosing blood infections Prolog Language PLANNER became popular Minsky developed frames as a representation and reasoning language.  1980-present: AI becomes an industry  Japanese Government announced Fifth generation project to build intelligent computers  AI Winter –companies failed to deliver on extra vagant promises  1986-present: return of neural networks Many research were done by psychologists on Neural networks
  • 16.  1987-present: AI becomes a Science  HMMs, planning, belief network Emergence of Intelligent agents(1995-present) o The agent architecture SOAR developed o The agents environment is internet. o Web based applications, search engines, recommender systems, websites
  • 19.
  • 21. Agents,Environments Intelligent Agents  Agents and environments  Rationality  Nature of Environments  Structure of Agents
  • 22. Agents  An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators  Human agent: sensors- eyes, ears, and other organs actuators- hands, legs, mouth, and other body parts  Robotic agent: Sensors - cameras and infrared range finders Actuators - motors
  • 23.  an agent perceives its environment through sensors  the complete set of inputs at a given time is called a percept  the current percept, or a sequence of percepts may influence the actions of an agent –percept sequence
  • 24.  The agent function maps from percept histories to actions:[f: P*  A].The agent function is an abstract mathematical description.  The agent function will be implemented by an agent program.The agent program is a concrete implementation running on the agent architecture .
  • 25. Vacuum-cleaner world  Percepts: Location and status, e.g., [A,Dirty]  Actions: Left, Right, Suck, NoOp Example vacuum agent program: function Vacuum-Agent([location,status]) returns an action  if status = Dirty then return Suck  else if location = A then return Right  else if location = B then return Left
  • 26.
  • 27. Rationality  A rational agent is one that does the right thing. Every entry in the table for the agent function is filled out correctly.  It is based on  performance measure  percept sequence  background knowledge  feasible actions
  • 28. Omniscience, Learning and Autonomy  an omniscient agent deals with the actual outcome of its actions  a rational agent deals with the expected outcome of actions  a rational agent not only gather information but also learns as much as possible from the percepts it receives.  a rational agent should be autonomous –it should learn what it can do to compensate for partial or incorrect prior knowledge.
  • 29. Nature of Environments Specifying the task environment  Before we design an intelligent agent, we must specify its “task environment”:  Problem specification: Performance measure, Environment, Actuators, Sensors (PEAS) Example of Agent Types and their PEAS description:  Example: automated taxi driver  Performance measure • Safe, fast, legal, comfortable trip, maximize profits  Environment • Roads, other traffic, pedestrians, customers  Actuators • Steering wheel, accelerator, brake, signal, horn  Sensors • Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard
  • 30.  Example: Agent = Medical diagnosis system Performance measure: Healthy patient, minimize costs, lawsuits Environment: Patient, hospital, staff Actuators: Screen display (questions, tests, diagnoses, treatments, referrals) Sensors: Keyboard (entry of symptoms, findings, patient's answers)
  • 31.  Example: Agent = Part-picking robot Performance measure: Percentage of parts in correct bins Environment: Conveyor belt with parts, bins Actuators: Jointed arm and hand Sensors: Camera, joint angle sensors
  • 32. Artificial Intelligence a modern approach  Example: Agent = Interactive English tutor Performance measure: Maximize student's score on test Environment: Set of students Actuators: Screen display (exercises, suggestions, corrections) Sensors: Keyboard
  • 33. Example: Agent = Satellite image system Performance measure: Correct image categorization Environment: Downlink from satellite Actuators: Display categorization of scene Sensors: Color pixel array
  • 34. Properties of Task Environment  Fully observable (vs. partially observable): The agent's sensors give it access to the complete state of the environment at each point in time e.g an automated taxi doesn’t has sensor to see what other drivers are doing/thinking.  Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the agent’s action  Strategic: the environment is deterministic except for the actions of other agents e.g Vacuum world is Deterministic while Taxi Driving is Stochastic – as one can exactly predict the behaviour of traffic  Episodic (vs. sequential): The agent's experience is divided into atomic “episodes,” and the choice of action in each episode depends only on the episode itself  E.g. an agent sorting defective parts in an assembly line is episodic while a taxi driving agent or a chess playing agent are sequential ….
  • 35.  Static (vs. dynamic): The environment is unchanged while an agent is deliberating  Semidynamic: the environment does not change with the passage of time, but the agent's performance score does e.g.Taxi Driving is Dynamic, Crossword Puzzle solver is static,chess played with a clock is semidynamic  Discrete (vs. continuous): The environment provides a fixed number of distinct percepts, actions, and environment states e.g. chess game has finite number of states • Taxi Driving is continuous-state and continuous-time problem …  Single agent (vs. multi-agent): An agent operating by itself in an environment e.g. An agent solving a crossword puzzle is in a single agent environment • Agent in chess playing is in two-agent environment
  • 36. task environm. observable determ./ stochastic episodic/ sequential static/ dynamic discrete/ continuous agents crossword puzzle fully determ. sequential static discrete single chess with clock fully strategic sequential semi discrete multi poker partial stochastic sequential static discrete multi back gammon fully stochastic sequential static discrete multi taxi driving partial stochastic sequential dynamic continuous multi medical diagnosis partial stochastic sequential dynamic continuous single image analysis fully determ. episodic semi continuous single partpicking robot partial stochastic episodic dynamic continuous single refinery controller partial stochastic sequential dynamic continuous single interact. Eng. tutor partial stochastic sequential dynamic discrete multi
  • 37. Structure of Agents  An agent is completely specified by the agent function mapping percept sequences to actions.  The agent program implements the agent function mapping percepts sequences to actions Agent=architecture + program. Architecture= sort of computing device with physical sensors and actuators.  Aim of AI is to design the agent program
  • 38. Table-Driven agent Function Table-Driven-Agent(percept) Static: percepts, a sequence, initially empty table, a table of actions, indexed by percept sequences, initially fully specified append percept to the end of percepts action <- Lookup(percepts,table) Return action The table agent program is invoked for each new percept and returns an action each time. It keeps track of percept sequences using its own private data structure.
  • 39. Table-lookup agent  Drawbacks:  Huge table  Take a long time to build the table  No autonomy  Even with learning, need a long time to learn the table entries.  Example : let P be the set of possible percepts and T be the lifetime of the agent (the total number of percepts it will receive) then the lookup table will contain PT entries.  The table of the vacuum agent (VA) will contain more than 4T entries (VA has 4 possible percepts).
  • 40.  Four basic kinds of agent program are  Simple reflex agents  Model-based reflex agents  Goal-based agents  Utility-based agents All of these can be turned into learning agents
  • 41. Simple reflex agents  Single current percept : the agent select an action on the basis current percept, ignoring the rest of percept history.  Example : The vacuum agent (VA) is a simple reflex agent, because its decision is based only on the current location and on whether that contains dirt.  Rules relate  “State” based on percept  “action” for agent to perform  “Condition-action” rule: If a then b: e.g. vacuum agent (VA) : if in(A) and dirty(A), then vacuum taxi driving agent (TA): if car-in-front-is-braking then initiate- braking.
  • 42. Agent program for a simple reflex agent The vacuum agent program is very small compared to the corresponding table : it cuts down the number of possibilities from 4T to 4. This reduction comes from the ignoring of the history percepts.
  • 43. Simple reflex agent Program Function Simple-Reflex-Agent(percept) Static: rules, set of condition-actions rules; state <- Interpret-Input(percept) Rule <- Rule-Match(state, rules) action <- Rule-Action[Rule] return action A simple reflex agent. It acts according to rule whose condition matches the current state, as defined by the percept.
  • 44. Schematic diagram of a Simple reflex agent example: vacuum cleaner world Limited Intelligence Fails if environment is partially observable current state of decision process
  • 45. Simple reflex agents Artificial Intelligence a modern approach 45  Simple but very limited intelligence.  Action does not depend on percept history, only on current percept.  Therefore no memory requirements.  Infinite loops  Suppose vacuum cleaner does not observe location. What do you do given location = clean? Left of A or right on B -> infinite loop.  Possible Solution: Randomize action.
  • 46. Model-based reflex agents  Solution to partial observability problems  Maintain state • Keep track of parts of the world can't see now • Maintain internal state that depends on the percept history  Update previous state based on • Knowledge of how world changes, e.g. TA : an overtaking car generally will be closer behind than it was a moment ago. • Knowledge of effects of own actions, e.g. TA: When the agent turns the steering wheel clockwise the car turns to the right. • => Model called “Model of the world” implements the knowledge about how the world work. 46
  • 47. Schematic diagram of a Model-based reflex agents Models the world by: modeling how the world changes how it’s actions change the world description of current world state sometimes it is unclear what to do without a clear goal
• 48. Model-based reflex agents
function Model-Based-Reflex-Agent(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- Update-State(state, action, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A model-based reflex agent. It keeps track of the current state of the world using an internal model. It then chooses an action in the same way as the reflex agent.
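A minimal Python sketch of a model-based reflex agent for the vacuum world. The internal model here is deliberately trivial (it only remembers which squares are believed clean); the class layout and names are assumptions, not AIMA code.

class ModelBasedReflexVacuumAgent:
    """Keeps internal state: which squares are believed to be clean."""

    def __init__(self):
        self.believed_clean = {"A": False, "B": False}  # internal state
        self.last_action = None

    def update_state(self, percept):
        """Fold the new percept into the internal model of the world."""
        location, status = percept
        self.believed_clean[location] = (status == "Clean")

    def __call__(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "Dirty":
            action = "Suck"
        elif all(self.believed_clean.values()):
            action = "NoOp"   # the model says everything is already clean
        else:
            action = "Right" if location == "A" else "Left"
        self.last_action = action
        return action

agent = ModelBasedReflexVacuumAgent()
print(agent(("A", "Dirty")))  # -> Suck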
• 49. Goal-based agents • Is knowing the current state and environment enough? – The taxi can go left, right, or straight • Have a goal  A destination to get to  Uses knowledge about a goal to guide its actions  E.g., search, planning  Goal-based agents are much more flexible in responding to a changing environment and in accepting different goals.
• 50. Goal-based agents Goals provide a reason to prefer one action over another. We need to predict the future: we need to plan & search.
• 51. • A reflex agent brakes when it sees brake lights. A goal-based agent reasons: – brake light -> the car in front is stopping -> I should stop -> I should apply the brake
• 52. Utility-based agents  Goals are not always enough  Many action sequences get the taxi to its destination  Consider other things: how fast, how safe, …  A utility function maps a state onto a real number which describes the associated degree of “happiness”, “goodness”, “success”.  Where does the utility measure come from?  Economics: money.  Biology: number of offspring.  Your life?
  • 53. Utility-based agents Some solutions to goal states are better than others. Which one is best is given by a utility function. Which combination of goals is preferred?
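A minimal sketch of utility-based action selection: predict the result of each action with a model, then pick the action whose predicted state has the highest utility. The taxi-style numbers and weights below are made-up assumptions for illustration.

def utility_based_choice(state, actions, result, utility):
    """Pick the action whose predicted result state has highest utility.

    result(state, action) -> predicted next state  (the agent's model)
    utility(state)        -> real number           (degree of "happiness")
    """
    return max(actions, key=lambda a: utility(result(state, a)))

# Hypothetical taxi example: trade off speed against safety.
def utility(outcome):
    time_taken, risk = outcome
    return -time_taken - 100.0 * risk   # the weights are assumptions

def result(state, action):
    # Made-up model: (minutes, accident risk) for each driving style.
    return {"fast": (10, 0.30), "slow": (25, 0.01)}[action]

print(utility_based_choice(None, ["fast", "slow"], result, utility))  # -> slow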
• 54. Learning agents How does an agent improve over time? By monitoring its performance and suggesting better modeling, new action rules, etc. Schematically: a critic evaluates the current world state, a learning element changes the action rules of the “old agent” (which models the world and decides on the actions to be taken), and a problem generator suggests explorations.
• 55. A learning agent can be divided into 4 conceptual components: 1. The learning element is responsible for making improvements. 2. The performance element is responsible for selecting external actions (using previous knowledge). 3. The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. 4. The problem generator is responsible for suggesting actions that will lead to new and informative experiences.
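To make the architecture concrete, here is a schematic Python skeleton wiring the four components together. All class and method names are illustrative assumptions; each component would be filled in per application.

class LearningAgent:
    """Skeleton of the four conceptual components of a learning agent."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # judges vs. a standard
        self.problem_generator = problem_generator      # proposes experiments

    def step(self, percept):
        # The critic scores recent behavior against the performance standard.
        feedback = self.critic.evaluate(percept)
        # The learning element uses the feedback to modify the performance element.
        self.learning_element.improve(self.performance_element, feedback)
        # Occasionally try an informative experiment instead of exploiting.
        suggestion = self.problem_generator.suggest()
        return suggestion or self.performance_element.choose_action(percept)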
• 56. Example: automated taxi driving • The performance element consists of whatever collection of knowledge and procedures the TA has for selecting its driving actions. • The critic observes the world and passes information along to the learning element. For example, after the taxi makes a quick left turn across three lanes, the critic observes the shocking language used by other drivers. From this experience the learning element is able to formulate a rule saying this was a bad action, and the performance element is modified by installing the new rule. • The problem generator may identify certain areas of behavior in need of improvement and suggest experiments, such as testing the brakes on different road surfaces under different conditions. • The learning element can make changes to any of the knowledge components of the previous agent types: how the world evolves (observation between two states) and what my actions do (observation of the results of actions). (Learn from what happens when the brake is applied hard on a wet road …)
• 57. Problem Formulation  Problem-solving agents  Example problems  Searching for solutions
• 58. Problem-solving agents: 1. Goal formulation: a set of one or more (desirable) world states. 2. Problem formulation: what actions and states to consider, given a goal and an initial state. 3. Search for a solution: given the problem, search for a solution --- a sequence of actions to achieve the goal starting from the initial state. 4. Execution of the solution
• 59. Example: Path-finding problem  Formulate goal:  be in Bucharest (Romania)  Formulate problem:  action: drive between a pair of connected cities (direct road)  state: be in a city (20 world states)  Find solution:  sequence of cities leading from the start state to the goal state, e.g., Arad, Sibiu, Fagaras, Bucharest  Execution  drive from Arad to Bucharest according to the solution Initial state: in(Arad); goal state: in(Bucharest). Environment: fully observable (map), deterministic, and the agent knows the effects of each action.
• 60. Well-defined problems and solutions A problem can be defined by 4 components: 1. Initial state: the starting point from which the agent sets out. 2. Operators: descriptions of the available actions; together with the initial state they define the state space (all states reachable from the initial state by any sequence of actions), and a path is a sequence of actions leading from one state to another. 3. Goal test: determines whether a given state is a goal state. 4. Path cost function: assigns a numeric cost to each path.
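These four components translate directly into code. Below is a minimal Python sketch of a problem interface; the class layout is an assumption, loosely in the spirit of the AIMA code repository rather than a copy of it.

class Problem:
    """A search problem defined by the four components above."""

    def __init__(self, initial_state, goal_state=None):
        self.initial_state = initial_state
        self.goal_state = goal_state

    def successors(self, state):
        """Operators: yield (action, next_state) pairs reachable from state."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if the given state is a goal state."""
        return state == self.goal_state

    def path_cost(self, cost_so_far, state, action, next_state):
        """Assign a numeric cost to each path; default: one unit per action."""
        return cost_so_far + 1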
• 61. Example Problems  Toy problems  Illustrate/test various problem-solving methods  Concise, exact description  Can be used to compare performance  Examples: 8-puzzle, 8-queens problem, cryptarithmetic, vacuum world, missionaries and cannibals.  Real-world problems  More difficult  No single, agreed-upon specification (state, successor function, edge cost)  Examples: route finding, VLSI layout, robot navigation, assembly sequencing
  • 62. Toy problems: Simple Vacuum World  states  two locations  dirty, clean  initial state  any legitimate state  successor function (operators)  left, right, suck  goal test  all squares clean  path cost  one unit per action Properties: discrete locations, discrete dirt (binary), deterministic
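Using the Problem sketch above, this vacuum world can be formulated in a few lines. The state encoding, (location, set of dirty squares), is an assumption:

class VacuumProblem(Problem):
    """State: (location, dirt) where dirt is a frozenset of dirty squares."""

    def __init__(self):
        # Worst-case initial state: agent in A, both squares dirty.
        super().__init__(initial_state=("A", frozenset({"A", "B"})))

    def successors(self, state):
        loc, dirt = state
        yield ("Left",  ("A", dirt))          # move left
        yield ("Right", ("B", dirt))          # move right
        yield ("Suck",  (loc, dirt - {loc}))  # current square becomes clean

    def goal_test(self, state):
        return not state[1]  # goal: no dirty squares remain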
  • 63. The 8-puzzle [Note: optimal solution of n-Puzzle family is NP-hard]
• 64. 8-Puzzle  states  locations of the tiles (including the blank tile)  initial state  any legitimate configuration  successor function (operators)  move a tile  alternatively: move the blank  goal test  the state matches the specified goal configuration  path cost  one unit per move Properties: abstraction leads to discrete configurations, discrete moves, deterministic
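A sketch of the "move blank" successor function in Python. Representing a state as a length-9 tuple in row-major order, with 0 for the blank, is an assumption:

def puzzle_successors(state):
    """Yield (action, next_state) pairs for the 8-puzzle."""
    i = state.index(0)                 # position of the blank
    row, col = divmod(i, 3)
    moves = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}
    for action, delta in moves.items():
        if (action == "Up" and row == 0) or (action == "Down" and row == 2) \
           or (action == "Left" and col == 0) or (action == "Right" and col == 2):
            continue                   # this move would leave the board
        j = i + delta
        tiles = list(state)
        tiles[i], tiles[j] = tiles[j], tiles[i]   # slide a tile into the blank
        yield action, tuple(tiles)

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)   # blank in the center: all 4 moves legal
print([a for a, s in puzzle_successors(start)])  # -> Up, Down, Left, Right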
  • 65. 8-Queens  incremental formulation  states • arrangement of up to 8 queens on the board  initial state • empty board  successor function (operators) • add a queen to any square  goal test • all queens on board • no queen attacked  path cost • irrelevant (all solutions equally valid)  complete-state formulation  states  arrangement of 8 queens on the board  initial state  all 8 queens on board  successor function (operators)  move a queen to a different square  goal test  no queen attacked  path cost  irrelevant (all solutions equally valid)
• 66. Real-world problems  Route finding  Defined in terms of locations and transitions along links between them  Applications: routing in computer networks, automated travel advisory systems, airline travel planning systems  states  locations  initial state  starting point  successor function (operators)  move from one location to another  goal test  arrive at a certain location  path cost  may be quite complex • money, time, travel comfort, scenery, ...
• 67.  Touring and traveling salesperson problems  “Visit every city on the map at least once”  The state must record the cities visited so far  Goal: find the shortest tour that visits all cities  NP-hard, but a lot of effort has been spent on improving the capabilities of TSP algorithms (a brute-force sketch appears after this slide)  Applications: planning movements of automatic circuit-board drills  VLSI layout  Positioning millions of components and connections on a chip to minimize area, circuit delays, etc.  Place cells on a chip so they don’t overlap and there is room for connecting wires to be placed between the cells  Robot navigation  Generalization of the route-finding problem • No discrete set of routes • The robot can move in a continuous space • Infinite set of possible actions and states
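For a tiny TSP instance, the shortest tour can be found by brute force, which makes the factorial blow-up visible. A minimal sketch (the city names and coordinates are made-up):

from itertools import permutations
from math import dist

cities = {"P": (0, 0), "Q": (3, 0), "R": (3, 4), "S": (0, 4)}  # made-up map

def tour_length(order):
    """Total length of the closed tour visiting the cities in this order."""
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

# Fix the first city and try every ordering of the rest: O((n-1)!) tours,
# which is why exhaustive search is hopeless for realistic instances.
first, *rest = cities
best = min((tuple([first, *p]) for p in permutations(rest)), key=tour_length)
print(best, tour_length(best))  # -> ('P', 'Q', 'R', 'S') 14.0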
• 68.  Assembly sequencing  Automatic assembly of complex objects  The problem is to find an order in which to assemble the parts of some object  Protein design: find a sequence of amino acids that will fold into a three-dimensional protein with the right properties to cure some disease.
• 69. Searching for Solutions / Search Graph & Search Tree Search through the state space. We will consider search techniques that use an explicit search tree, generated by the initial state and the successor function.
• 70. Search tree example: the node selected for expansion.
  • 71. Nodes added to tree.
  • 72. Selected for expansion. Added to tree. Note: Arad added (again) to tree! (reachable from Sibiu) Not necessarily a problem, but in Graph-Search, we will avoid this by maintaining an “explored” list.
• 73. An informal description of the general tree-search algorithm:
initialize the fringe with the initial node
loop:
  choose a node for expansion according to the strategy
  if it is a goal node, we are done
  otherwise, expand the node with the successor function
• 74. States vs. nodes  A state is a (representation of) a physical configuration  A node is a data structure with 5 components: state, parent node, action, path cost, depth
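A direct transcription of the node data structure into Python; the field names follow the five components just listed (the dataclass layout itself is an assumption):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """Search-tree node with the five standard components."""
    state: Any                       # the world state this node represents
    parent: Optional["Node"] = None  # the node that generated this one
    action: Any = None               # the action applied to the parent
    path_cost: float = 0.0           # cost of the path from the root
    depth: int = 0                   # number of steps from the root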
• 75. General Tree Search Algorithm
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node := REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds then return SOLUTION(node)
    fringe := INSERT-ALL(EXPAND(node, problem), fringe)
 Generate the root node from the initial state of the problem
 Repeat:
 return failure if there are no more nodes in the fringe
 examine the current node; if it’s a goal, return the solution
 expand the current node, and add the new nodes to the fringe
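A runnable Python rendering of TREE-SEARCH, using the Node and Problem sketches above and a plain list as the fringe. Popping from the front makes it breadth-first here; this is one possible concretization, not the book's code:

def expand(node, problem):
    """Generate the children of a node via the problem's successor function."""
    return [Node(state=s, parent=node, action=a,
                 path_cost=problem.path_cost(node.path_cost, node.state, a, s),
                 depth=node.depth + 1)
            for a, s in problem.successors(node.state)]

def solution(node):
    """Follow parent pointers back to the root to recover the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))

def tree_search(problem):
    fringe = [Node(problem.initial_state)]        # MAKE-NODE + INSERT
    while fringe:                                 # EMPTY? check
        node = fringe.pop(0)                      # REMOVE-FIRST (FIFO here)
        if problem.goal_test(node.state):
            return solution(node)
        fringe.extend(expand(node, problem))      # INSERT-ALL
    return "failure"

print(tree_search(VacuumProblem()))  # -> ['Suck', 'Right', 'Suck']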
• 76. Measuring problem-solving performance An algorithm's performance can be evaluated in 4 ways: 1. Completeness: does it always find a solution if one exists? 2. Time complexity: how long does it take to find a solution? 3. Space complexity: how much memory does it need to perform the search? 4. Optimality: does the strategy find the optimal solution?  Time and space complexity are measured in terms of  b: the branching factor (maximum number of successors of any node) of the search tree  d: the depth of the shallowest goal node  m: the maximum length of any path in the state space (may be ∞)
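To make b and d concrete: a complete search tree with branching factor b contains 1 + b + b^2 + … + b^d nodes down to depth d, so a search that may examine all of them needs O(b^d) time (and, if it stores them all, O(b^d) memory). With b = 10 and d = 5, that is already 111,111 nodes.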