1. UNIT I
Concept of AI
History
Current Status
Scope
Agents, Environments
Problem formulation
Review of Tree and Graph Structures
State Space Representation
Search Graph and Search Tree
3. Concept of AI
What is Artificial Intelligence?
Artificial Intelligence:
build and understand intelligent entities
Intelligence:
“the capacity to learn and solve problems”
the ability to act rationally
Two main dimensions:
Thought processes vs behavior
Human-like vs rational-like
4. Views of AI fall into four categories/approaches:
Thinking humanly Thinking rationally
Acting humanly Acting rationally
5. Acting humanly: Turing Test
(Can machines think? A. M. Turing, 1950)
AI system passes if interrogator cannot tell
which one is the machine.
6. Acting humanly: Turing Test
The Turing Test identified key research areas in AI.
To pass the test, the computer needs to possess:
Natural Language Processing – to communicate with
the interrogator;
Knowledge Representation – to store and manipulate
information;
Automated Reasoning – to use the stored information
to answer questions and draw new conclusions;
Machine Learning – to adapt to new circumstances
and to detect and extrapolate patterns.
7. Total Turing Test:
To pass the Total Turing Test, the computer
needs,
Computer vision – to perceive objects;
Robotics – to manipulate objects and move about.
8. Thinking humanly: cognitive modeling
Requires scientific theories of the internal activities
of the brain. How do we validate them?
1) Cognitive Science (top-down)
Predicting and testing behavior of human
subjects
– computer models + experimental
techniques from psychology
2) Cognitive Neuroscience (bottom-up)
Direct identification from neurological data
9. Thinking rationally: "laws of thought"
Proposed by Aristotle;
Given the correct premises, it yields the correct
conclusion
Socrates is a man
All men are mortal
--------------------------
Therefore Socrates is mortal
Logic: making the right inferences!
10. Acting rationally: rational agent
An agent is anything that can be viewed
as perceiving its environment through
sensors and acting upon that
environment through actuators.
Rational behavior: doing the right thing;
that which is expected to maximize goal
achievement, given the available
information;
11. Foundations of AI
Philosophy – logic, methods of reasoning, mind vs. matter,
foundations of learning and knowledge
Mathematics – logic, probability, computation
Economics – utility, decision theory
Neuroscience – biological basis of intelligence (how does the brain
process information?)
Psychology – computational models of human intelligence (how do
humans and animals think and act?)
Computer engineering – how to build efficient computers
Linguistics – rules of language, language acquisition (how does
language relate to thought?)
Control theory – design of dynamical systems that use a controller
to achieve desired behavior
12. History
1943 McCulloch & Pitts “Boolean circuit model of brain”
1950 Turing’s “Computing Machinery and Intelligence”
1951 Minsky and Edmonds
• Built SNARC, a neural-net computer
• Used 3000 vacuum tubes to simulate 40 neurons
13. The Birthplace of
“Artificial Intelligence”, 1956
1956 Dartmouth meeting: “Artificial
Intelligence” adopted
1956 Newell and Simon's Logic Theorist
(LT) – proved mathematical theorems.
14. Early enthusiasm, great
expectations (1952-1969)
GPS (General Problem Solver) – Newell and Simon – designed to
think like humans
Samuel's checkers program that learns (1952)
McCarthy – Lisp (1958)
Geometry theorem prover – Gelernter (1959)
Robinson's resolution (1963)
Slagle's SAINT solves calculus problems (1963)
Daniel Bobrow's STUDENT program solved algebra story
problems (1964)
Tom Evans' ANALOGY program solved geometric analogy
problems that appear in IQ tests (1968)
15. 1966-1974: a dose of reality
Problems with computation
1969: Minsky and Papert published the book Perceptrons,
demonstrating the limitations of neural networks.
1969-1979 Knowledge-based systems
1969: DENDRAL – inferring molecular structures
MYCIN – diagnosing blood infections
The logic-programming languages PLANNER and Prolog became popular
Minsky developed frames as a representation and reasoning
language.
1980-present: AI becomes an industry
Japanese Government announced Fifth generation project to build
intelligent computers
AI Winter – companies failed to deliver on extravagant promises
1986-present: return of neural networks
Much research on neural networks was done by psychologists
16. 1987-present: AI becomes a Science
HMMs, planning, belief networks
Emergence of intelligent agents (1995-present)
o The agent architecture SOAR was developed
o The agents' environment is the internet
o Web-based applications, search engines, recommender systems,
websites
22. Agents
An agent is anything that can be viewed as
perceiving its environment through sensors
and acting upon that environment through
actuators
Human agent:
sensors – eyes, ears, and other organs
actuators – hands, legs, mouth, and
other body parts
Robotic agent:
sensors – cameras and infrared range
finders
actuators – motors
23. An agent perceives its environment through
sensors.
The complete set of inputs at a given time is
called a percept.
The current percept, or a sequence of
percepts (the percept sequence), may influence
the actions of an agent.
24. The agent function maps from percept histories to
actions: [f: P* → A]. The agent function is an
abstract mathematical description.
The agent function is implemented by an agent
program. The agent program is a concrete
implementation running on the agent
architecture.
25. Vacuum-cleaner world
Percepts:
Location and status,
e.g., [A,Dirty]
Actions:
Left, Right, Suck, NoOp
Example vacuum agent program:
function Vacuum-Agent([location,status]) returns an action
if status = Dirty then return Suck
else if location = A then return Right
else if location = B then return Left
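The vacuum agent pseudocode above can be transcribed directly into a short runnable Python function (nothing here goes beyond the slide; the percept is a (location, status) pair):

```python
# Reflex vacuum agent from the slide: suck if the current square is dirty,
# otherwise move to the other square.
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
```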
27. Rationality
A rational agent is one that does the right
thing. Every entry in the table for the agent
function is filled out correctly.
Rationality depends on:
performance measure
percept sequence
background knowledge
feasible actions
28. Omniscience, Learning and
Autonomy
an omniscient agent knows the actual
outcome of its actions;
a rational agent deals with the expected
outcome of its actions.
a rational agent not only gathers information
but also learns as much as possible from the
percepts it receives.
a rational agent should be autonomous – it
should learn what it can to compensate
for partial or incorrect prior knowledge.
29. Nature of Environments
Specifying the task environment
Before we design an intelligent agent, we must specify its “task
environment”:
Problem specification: Performance measure, Environment,
Actuators, Sensors (PEAS)
Example of Agent Types and their PEAS description:
Example: automated taxi driver
Performance measure
• Safe, fast, legal, comfortable trip, maximize profits
Environment
• Roads, other traffic, pedestrians, customers
Actuators
• Steering wheel, accelerator, brake, signal, horn
Sensors
• Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard
31. Example: Agent = Part-picking robot
Performance measure: Percentage of parts in correct
bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
32. Artificial Intelligence a modern approach
Example: Agent = Interactive English tutor
Performance measure: Maximize student's score on test
Environment: Set of students
Actuators: Screen display (exercises, suggestions, corrections)
Sensors: Keyboard
33. Example: Agent = Satellite image system
Performance measure: Correct image categorization
Environment: Downlink from satellite
Actuators: Display categorization of scene
Sensors: Color pixel array
34. Properties of Task Environment
Fully observable (vs. partially observable): The agent's sensors
give it access to the complete state of the environment at each point
in time
e.g. an automated taxi does not have sensors to see what other drivers
are doing or thinking, so its environment is only partially observable.
Deterministic (vs. stochastic): The next state of the environment is
completely determined by the current state and the agent’s action
Strategic: the environment is deterministic except for the actions
of other agents
e.g. the vacuum world is deterministic, while taxi driving is stochastic –
one cannot exactly predict the behaviour of traffic
Episodic (vs. sequential): The agent's experience is divided into
atomic “episodes,” and the choice of action in each episode depends
only on the episode itself
e.g. an agent sorting defective parts on an assembly line is episodic,
while a taxi-driving agent or a chess-playing agent is sequential
35. Static (vs. dynamic): The environment is unchanged while an agent is
deliberating
Semidynamic: the environment does not change with the passage
of time, but the agent's performance score does
e.g. taxi driving is dynamic, a crossword-puzzle solver is static, and chess
played with a clock is semidynamic
Discrete (vs. continuous): The environment provides a fixed number of
distinct percepts, actions, and environment states
e.g. chess game has finite number of states
• Taxi Driving is continuous-state and continuous-time problem …
Single agent (vs. multi-agent): An agent operating by itself in an
environment
e.g. an agent solving a crossword puzzle is in a single-agent
environment
• A chess-playing agent is in a two-agent environment
36. Examples of task environments and their properties:

Task environment          | Observable | Determ./stoch. | Episodic/seq. | Static/dyn. | Discrete/cont. | Agents
Crossword puzzle          | fully      | deterministic  | sequential    | static      | discrete       | single
Chess with a clock        | fully      | strategic      | sequential    | semi        | discrete       | multi
Poker                     | partially  | stochastic     | sequential    | static      | discrete       | multi
Backgammon                | fully      | stochastic     | sequential    | static      | discrete       | multi
Taxi driving              | partially  | stochastic     | sequential    | dynamic     | continuous     | multi
Medical diagnosis         | partially  | stochastic     | sequential    | dynamic     | continuous     | single
Image analysis            | fully      | deterministic  | episodic      | semi        | continuous     | single
Part-picking robot        | partially  | stochastic     | episodic      | dynamic     | continuous     | single
Refinery controller       | partially  | stochastic     | sequential    | dynamic     | continuous     | single
Interactive English tutor | partially  | stochastic     | sequential    | dynamic     | discrete       | multi
37. Structure of Agents
An agent is completely specified by the agent function
mapping percept sequences to actions.
The agent program implements the agent function,
mapping percept sequences to actions.
Agent = architecture + program.
Architecture = a computing device with physical
sensors and actuators.
Aim of AI is to design the agent program
38. Table-Driven agent
function Table-Driven-Agent(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept
          sequences, initially fully specified
  append percept to the end of percepts
  action <- Lookup(percepts, table)
  return action
The table-driven agent program is invoked for each new percept and returns an action
each time. It keeps track of the percept sequence using its own private data structure.
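The pseudocode above can be sketched in Python as a closure holding the private percept history; the tiny table below is a hypothetical hand-built fragment for the vacuum world, just to make the lookup concrete:

```python
# Table-driven agent: appends each percept to its history and looks the
# full history up in a table indexed by percept-sequence tuples.
def make_table_driven_agent(table):
    percepts = []                         # private percept history
    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts)) # action for this exact history
    return agent

# Illustrative (hypothetical) table fragment for the vacuum world.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```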
39. Table-lookup agent
Drawbacks:
Huge table
Take a long time to build the table
No autonomy
Even with learning, need a long time to learn
the table entries.
Example: let P be the set of possible percepts and T the
lifetime of the agent (the total number of percepts it will receive);
then the lookup table must contain |P|^T entries for the sequences of
length T alone.
The table for the vacuum agent (VA) will contain more than 4^T
entries (the VA has 4 possible percepts).
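To see how fast this blows up, the snippet below counts every percept sequence of length 1 through T for the vacuum agent's 4 percepts (the sum of 4^t terms, which exceeds 4^T itself):

```python
# Table size for an agent with |P| = 4 percepts: one entry per percept
# sequence of each length t = 1..T, i.e. sum of 4^t.
P = 4
for T in (1, 5, 10, 20):
    entries = sum(P**t for t in range(1, T + 1))
    print(T, entries)   # grows exponentially in T
```

Even at T = 20 the table already needs over a trillion entries, which is why the slide calls the table "huge".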
40. Four basic kinds of agent program are
Simple reflex agents
Model-based reflex agents
Goal-based agents
Utility-based agents
All of these can be turned into learning agents
41. Simple reflex agents
The agent selects an action on the basis of the current
percept alone, ignoring the rest of the percept history.
Example: the vacuum agent (VA) is a simple reflex agent,
because its decision is based only on the current location and
on whether that location contains dirt.
Condition-action rules relate a state (derived from the percept)
to an action for the agent to perform:
If a then b, e.g.
vacuum agent (VA): if in(A) and dirty(A), then vacuum
taxi-driving agent (TA): if car-in-front-is-braking, then
initiate-braking.
42. Agent program for a simple reflex agent
The vacuum agent program is very small compared to the corresponding
table: it cuts the number of possibilities from 4^T to 4. This reduction
comes from ignoring the percept history.
43. Simple reflex agent program
function Simple-Reflex-Agent(percept) returns an action
  static: rules, a set of condition-action rules
  state <- Interpret-Input(percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A simple reflex agent. It acts according to the rule whose condition matches the
current state, as defined by the percept.
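A minimal Python sketch of this program, assuming (as in this toy setting) that interpreting the input is trivial and rules are (condition, action) pairs tried in order:

```python
# Simple reflex agent: interpret the percept as a state, then fire the
# first condition-action rule whose condition matches.
def interpret_input(percept):
    return percept                      # trivial interpretation for this sketch

def simple_reflex_agent(percept, rules):
    state = interpret_input(percept)
    for condition, action in rules:     # Rule-Match
        if condition(state):
            return action               # Rule-Action

# Condition-action rules for the vacuum world (checked in order).
rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]
print(simple_reflex_agent(("B", "Dirty"), rules))  # Suck
```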
44. Schematic diagram of a simple reflex agent
(example: the vacuum-cleaner world). The condition-action rules map
the current state of the decision process to an action.
Limited intelligence: fails if the environment is partially observable.
45. Simple reflex agents
Simple but very limited intelligence.
The action does not depend on the percept history, only on
the current percept.
Therefore there are no memory requirements.
Infinite loops
Suppose the vacuum cleaner cannot observe its location.
Given the percept [Clean], should it go Left or Right? Either
fixed choice loops forever between A and B.
Possible solution: randomize the action.
46. Model-based reflex agents
Solution to partial observability problems
Maintain state
• Keep track of the parts of the world it can't see now
• Maintain internal state that depends on the percept
history
Update previous state based on
• Knowledge of how world changes, e.g. TA : an overtaking car
generally will be closer behind than it was a moment ago.
• Knowledge of effects of own actions, e.g. TA: When the agent
turns the steering wheel clockwise the car turns to the right.
• => A model, called a "model of the world", implements this
knowledge about how the world works.
47. Schematic diagram of a model-based reflex agent.
It models the world by modeling how the world changes and how its
actions change the world, maintaining a description of the current
world state. Even so, it is sometimes unclear what to do without a
clear goal.
48. Model-based reflex agents
function Model-Based-Reflex-Agent(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state <- Update-State(state, action, percept)
  rule <- Rule-Match(state, rules)
  action <- Rule-Action[rule]
  return action
A model-based reflex agent. It keeps track of the current state of the world using
an internal model, then chooses an action in the same way as the reflex agent.
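A small illustrative sketch for the vacuum world: the internal model remembers the last known status of each square (compensating for only sensing the current square), and is updated both from the percept and from the assumed effect of the agent's own Suck action:

```python
# Model-based reflex agent sketch: internal state tracks which squares are
# known to be clean.  Assumes (for illustration) that Suck always succeeds.
def make_model_based_agent():
    model = {"A": None, "B": None}      # None = status unknown
    def agent(percept):
        location, status = percept
        model[location] = status        # Update-State from the percept
        if status == "Dirty":
            model[location] = "Clean"   # predicted effect of our Suck action
            return "Suck"
        if all(v == "Clean" for v in model.values()):
            return "NoOp"               # model says the whole world is clean
        return "Right" if location == "A" else "Left"
    return agent

agent = make_model_based_agent()
print(agent(("A", "Dirty")))  # Suck
print(agent(("B", "Clean")))  # NoOp  (model: both squares now clean)
```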
49. Goal-based agents
• Is knowing the current state of the environment enough?
– Not always: the taxi can go left, right, or straight; the right
choice depends on where it is trying to get to.
• Have a goal
A destination to get to
Uses knowledge about the goal to guide its
actions
E.g., search, planning
E.g., Search, planning
Goal-based Agents are much more flexible in
responding to a changing environment;
accepting different goals.
50. Goal-based agents
Goals provide reason to prefer one action over the other.
We need to predict the future: we need to plan & search
51. A reflex agent simply brakes when it sees brake lights. A goal-based
agent reasons:
– brake lights -> the car in front is stopping -> I should stop -> I should apply the brake
52. Utility-based agents
Goals are not always enough
Many action sequences get the taxi to its destination
Consider other things: how fast, how safe, ...
A utility function maps a state onto a real
number which describes the associated degree
of “happiness”, “goodness”, “success”.
Where does the utility measure come from?
Economics: money.
Biology: number of offspring.
Your life?
53. Utility-based agents
Some solutions to goal states are better than others.
Which one is best is given by a utility function.
Which combination of goals is preferred?
54. Learning agents
How does an agent improve over time?
By monitoring its performance and suggesting
better modeling, new action rules, etc.
(Diagram: a critic evaluates the current world state and suggests
changes to the action rules and new explorations for the "old agent",
which models the world and decides on the actions to be taken.)
55. Learning agents can be divided into 4 conceptual
components:
1. The learning element is responsible for
improvements.
2. The performance element is responsible for
selecting external actions (using previous knowledge).
3. The critic tells the learning element how well the
agent is doing with respect to a fixed performance
standard.
4. The problem generator is responsible for suggesting
actions that will lead to new and informative
experiences.
56. Example :Automated Taxi driving
•The performance element consists of whatever collection of knowledge and
procedures the TA has for selecting its driving actions.
•The critic observes the world and passes information along to the learning
element. For example after the taxi makes a quick left turn across three lanes
the critic observes the shocking language used by other drivers. From this
experience the learning element is able to formulate a rule saying this was a
bad action, and the performance element is modified by installing this new rule.
•The problem generator may identify certain areas of behavior in need of
improvement and suggest experiments : such as testing the brakes on different
road surfaces under different conditions.
•The learning element can make changes to any of the knowledge components
of the previous agent types: how the world evolves (observed transitions
between states) and what its actions do (observed results of actions) –
e.g., learning what happens when the brakes are applied hard on a wet road.
58. Problem Solving agents:
1. Goal Formulation: Set of one or more (desirable) world
states.
2. Problem formulation: What actions and states to
consider given a goal and an initial state.
3. Search for solution: Given the problem, search for a
solution --- a sequence of actions to achieve the goal
starting from the initial state.
4. Execution of the solution
59. Example: Path Finding problem
Formulate goal:
be in Bucharest
(Romania)
Formulate problem:
action: drive between
pair of connected
cities (direct road)
state: be in a city
(20 world states)
Find solution:
sequence of cities
leading from start to
goal state, e.g., Arad,
Sibiu, Fagaras,
Bucharest
Execution
drive from Arad to
Bucharest according
to the solution
Environment: fully observable (map),
deterministic, and the agent knows effects
of each action.
60. Well-defined problems and
solutions
A problem can be defined by 4 components:
1. Initial state: the starting point from which the agent sets
out
2. Operators: descriptions of the available actions
State space: all states reachable from the initial
state by any sequence of actions
Path: a sequence of actions leading from one state
to another
3. Goal test: determines whether a given state is a goal
state
4. Path cost function: assigns a numeric cost to
each path
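The 4 components above can be packaged as a small class; the toy instance below is a hypothetical three-city route problem, purely to show how the pieces fit together:

```python
# A search problem's four components as one object.
class Problem:
    def __init__(self, initial, successors, goal_test, step_cost):
        self.initial = initial          # 1. initial state
        self.successors = successors    # 2. operators: state -> [(action, state)]
        self.goal_test = goal_test      # 3. goal test
        self.step_cost = step_cost      # 4. path cost, built from step costs

    def path_cost(self, path):          # path = [(action, state), ...]
        return sum(self.step_cost(a) for a, _ in path)

# Hypothetical map: X -- Y -- Z, goal is to reach Z, each road costs 1.
roads = {"X": [("go-Y", "Y")], "Y": [("go-X", "X"), ("go-Z", "Z")], "Z": []}
p = Problem("X", lambda s: roads[s], lambda s: s == "Z", lambda a: 1)
print(p.goal_test("Z"), p.path_cost([("go-Y", "Y"), ("go-Z", "Z")]))  # True 2
```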
61. Example Problems
Toy problems
Illustrate/test various problem-solving methods
Concise, exact description
Can be used to compare performance
Examples: 8-puzzle, 8-queens problem, Cryptarithmetic,
Vacuum world, Missionaries and cannibals.
Real-world problem
More difficult
No single, agreed-upon specification (state, successor function,
edge cost)
Examples: Route finding, VLSI layout, Robot navigation,
Assembly sequencing
62. Toy problems:
Simple Vacuum World
states
two locations
dirty, clean
initial state
any legitimate state
successor function (operators)
left, right, suck
goal test
all squares clean
path cost
one unit per action
Properties: discrete locations, discrete dirt (binary), deterministic
64. 8-Puzzle
states
location of tiles (including blank tile)
initial state
any legitimate configuration
successor function (operators)
move tile
alternatively: move blank
goal test
tiles match the specified goal configuration
path cost
one unit per move
Properties: abstraction leads to discrete configurations, discrete moves,deterministic
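The "move blank" successor function above can be sketched directly; a state here is a 9-tuple in row-major order with 0 marking the blank:

```python
# 8-puzzle successor function using the "move the blank" formulation.
def successors(state):
    i = state.index(0)                  # position of the blank
    row, col = divmod(i, 3)
    moves = []
    for action, dr, dc in (("Up", -1, 0), ("Down", 1, 0),
                           ("Left", 0, -1), ("Right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:   # stay on the 3x3 board
            j = 3 * r + c
            s = list(state)
            s[i], s[j] = s[j], s[i]     # slide the neighbouring tile
            moves.append((action, tuple(s)))
    return moves

start = (1, 2, 3, 4, 0, 5, 6, 7, 8)     # blank in the centre
print(len(successors(start)))           # 4
```

With the blank in the centre there are 4 legal moves, in a corner only 2 – a concrete illustration of a branching factor that varies by state.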
65. 8-Queens
Incremental formulation:
states
• arrangement of up to 8 queens on the board
initial state
• empty board
successor function (operators)
• add a queen to any square
goal test
• all 8 queens on the board, no queen attacked
path cost
• irrelevant (all solutions equally valid)
Complete-state formulation:
states
• arrangement of 8 queens on the board
initial state
• all 8 queens on the board
successor function (operators)
• move a queen to a different square
goal test
• no queen attacked
path cost
• irrelevant (all solutions equally valid)
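The "no queen attacked" goal test is easy to state in code if we represent a state as one queen per column (which already rules out same-column attacks), giving the row of each queen:

```python
# Goal test for 8-queens: rows[c] is the row of the queen in column c.
def no_queen_attacked(rows):
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if rows[i] == rows[j]:               # same row
                return False
            if abs(rows[i] - rows[j]) == j - i:  # same diagonal
                return False
    return True

print(no_queen_attacked((0, 4, 7, 5, 2, 6, 1, 3)))  # True: a valid solution
print(no_queen_attacked((0, 1, 2, 3, 4, 5, 6, 7)))  # False: one long diagonal
```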
66. Real-world problems
Route finding
Defined in terms of locations and transitions along links between
them
Applications: routing in computer networks, automated travel
advisory systems, airline travel planning systems
states
locations
initial state
starting point
successor function (operators)
move from one location to another
goal test
arrive at a certain location
path cost
may be quite complex
• money, time, travel comfort, scenery, ...
67. Touring and traveling salesperson problems
“Visit every city on the map at least once”
Needs information about the visited cities
Goal: Find the shortest tour that visits all cities
NP-hard, but a lot of effort has been spent on improving the
capabilities of TSP algorithms
Applications: planning movements of automatic circuit board drills
VLSI layout
positioning millions of components and connections on a chip to
minimize area, circuit delays, etc.
Place cells on a chip so they don’t overlap and there is room for
connecting wires to be placed between the cells
Robot navigation
Generalization of the route finding problem
• No discrete set of routes
• Robot can move in a continuous space
• Infinite set of possible actions and states
68. Assembly sequencing
Automatic assembly of complex objects
The problem is to find an order in which to assemble
the parts of some object
Protein design
Find a sequence of amino acids that will fold into a 3-
dimensional protein with the right properties to cure some
disease
69. Searching for Solutions/
Search Graph & Search Tree
Search through the state space.
We will consider search techniques that use an
explicit search tree, generated by the initial state and the
successor function.
72. A node is selected for expansion and its successors are
added to the tree.
Note: Arad is added (again) to the tree, since it is
reachable from Sibiu.
This is not necessarily a problem, but in
Graph-Search we avoid it by maintaining an
"explored" list.
73. An informal description of the
general tree-search algorithm
initialize (initial node)
loop
  choose a node for expansion according to the strategy
  goal node? done
  expand the node with the successor function
74. States vs. nodes
A state is a (representation of) a physical configuration.
A node is a data structure with 5 components: state, parent node, action,
path cost, and depth.
75. General Tree Search Algorithm
function TREE-SEARCH(problem, fringe) returns a solution, or failure
  fringe := INSERT(MAKE-NODE(INITIAL-STATE[problem]), fringe)
  loop do
    if EMPTY?(fringe) then return failure
    node := REMOVE-FIRST(fringe)
    if GOAL-TEST[problem] applied to STATE[node] succeeds
      then return SOLUTION(node)
    fringe := INSERT-ALL(EXPAND(node, problem), fringe)
In words: generate the root node from the initial state of the problem, then repeat:
return failure if there are no more nodes in the fringe;
examine the current node and, if it is a goal, return the solution;
otherwise expand the current node and add the new nodes to the fringe.
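A compact Python sketch of TREE-SEARCH, using a FIFO fringe (i.e. breadth-first order) as one possible strategy; the toy road map is a hypothetical fragment of the Romania example. Note how Arad gets re-added to the fringe via Sibiu, exactly as slide 72 describes:

```python
from collections import deque

# General tree search: only the fringe policy (here FIFO) distinguishes
# one search strategy from another.  A node is (state, path-to-state).
def tree_search(initial, successors, goal_test):
    fringe = deque([(initial, [initial])])
    while fringe:
        state, path = fringe.popleft()          # Remove-First
        if goal_test(state):
            return path                         # Solution
        for child in successors(state):         # Expand; may revisit states
            fringe.append((child, path + [child]))
    return None                                 # failure: fringe is empty

graph = {"Arad": ["Sibiu", "Zerind"], "Sibiu": ["Fagaras", "Arad"],
         "Zerind": [], "Fagaras": ["Bucharest"], "Bucharest": []}
print(tree_search("Arad", lambda s: graph[s], lambda s: s == "Bucharest"))
# ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']
```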
76. Measuring problem-solving performance
An algorithm's performance can be evaluated in 4 ways:
1. Completeness: does it always find a solution if one exists?
2. Time complexity: how long does it take to find a solution?
3. Space complexity: how much memory does it need to perform the
search?
4. Optimality: does the strategy find the optimal solution?
Time and space complexity are measured in terms of
b: the branching factor (maximum number of successors of any node) of
the search tree
d: the depth of the shallowest goal node
m: the maximum length of any path in the state space (may be ∞)
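To make b and d concrete: a uniform tree of branching factor b has 1 + b + b² + ... + b^d nodes down to depth d, which is why time and space for the basic strategies are quoted as O(b^d). A quick check of that sum:

```python
# Nodes in a uniform search tree of branching factor b, down to depth d.
def nodes_generated(b, d):
    return sum(b**level for level in range(d + 1))

for b, d in ((2, 10), (10, 5)):
    print(b, d, nodes_generated(b, d))  # dominated by the b^d last level
```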