Object Automation Software Solutions Pvt Ltd, in collaboration with SRM Ramapuram, delivered a Skill Development Workshop on Artificial Intelligence.
Searching and Sorting Algorithms – An Introduction, by Dr. S. P. Abinandhan, Professor, NIE, Mysore.
2. Agents
o An agent perceives its environment through sensors and acts upon that environment through actuators
[Diagram: the agent receives percepts from the environment through its sensors and acts on the environment through its actuators]
https://inst.eecs.berkeley.edu/~cs188/su20/
3. Agents
o Percept: Agent’s perceptual inputs at any given instant
o Percept Sequence: History of all perceptions
o The agent’s choice of action at any given instant can depend on the entire percept sequence observed to date
o An agent function describes the agent’s behavior as a mapping from any given percept sequence to an action
4. The Structure of Agents
o So far, we have discussed agent behavior
o How is that behavior implemented?
o agent = architecture + program
o Input to the agent program: the current percept (the agent function, by contrast, takes the entire percept history)
5. Simple Reflex Agents
o Select actions base on the current
percept
o Simple reflex behaviors occur
even in more complex
environments
o Condition action rule
o if car-in-front-is-braking then
initiate-braking
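The condition-action rule above can be sketched as a tiny agent program. This is a minimal illustration, not from the slides: the percept dictionary and the `continue` action are assumptions; only the braking rule comes from the slide.

```python
# A minimal simple-reflex-agent sketch. The percept is modeled as a
# dict of boolean conditions (an assumption for illustration).

def simple_reflex_agent(percept):
    """Map the current percept directly to an action via condition-action rules."""
    # Rule from the slide: if car-in-front-is-braking then initiate-braking
    if percept.get("car_in_front_is_braking"):
        return "initiate-braking"
    return "continue"  # assumed default action
```

Note that the agent consults only the current percept, never the percept history, which is exactly what makes it a simple reflex agent.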
7. Problem-solving Agents
o The process of looking for a sequence of actions that reaches the goal is called search
o A search algorithm takes a problem as input and returns a solution in the form of an action sequence
o Execution phase: the recommended actions are carried out
o Formulate, search, and execute
9. Travelling in Romania
State space: Cities
Successor function: Roads (go to an adjacent city with cost = distance)
Start state: Arad
Goal test: Is state == Bucharest?
Solution?
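The Romania formulation above can be sketched directly in code. The specific road segments and distances below are an assumption for illustration (a subset of the standard Romania map); the slide only specifies the start state, goal test, and cost = distance.

```python
# Sketch of the Romania route-finding problem as a search problem.
# ROADS is an assumed subset of the Romania map for illustration.

ROADS = {  # successor function: adjacent city -> step cost (distance)
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
}

START = "Arad"                      # start state

def goal_test(state):               # goal test: Is state == Bucharest?
    return state == "Bucharest"

def successors(state):              # (next_state, step_cost) pairs
    return ROADS.get(state, {}).items()
```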
10. State Space Graphs
o State space graph: A mathematical representation of a search problem
o Nodes are (abstracted) world configurations
o Arcs represent successors (action results)
o The goal test is a set of goal nodes (maybe only one)
o In a state space graph, each state occurs only once!
o We can rarely build this full graph in memory (it’s too big), but it’s a useful idea
11. State Space Graphs
[Diagram: tiny state space graph for a tiny search problem, with states S, a, b, c, d, e, f, h, p, q, r and goal G]
12. Search Trees
o A search tree:
o A “what if” tree of plans and their outcomes
o The start state is the root node
o Children correspond to successors
o Nodes show states, but correspond to PLANS that achieve those states
o For most problems, we can never actually build the whole tree
[Diagram: the start state (“this is now”) at the root; actions “N” and “E”, each with cost 1.0, lead to the possible futures]
13. State Space Graphs vs. Search Trees
[Diagram: the state space graph (states S, a, b, c, d, e, f, h, p, q, r; goal G) shown next to its search tree, in which the same states reappear on many branches]
We construct both on demand – and we construct as little as possible.
Each NODE in the search tree is an entire PATH in the state space graph.
14. Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph: [Diagram: states S, a, b and goal G]
How big is its search tree (from S)?
15. Quiz: State Space Graphs vs. Search Trees
Consider this 4-state graph: [Diagram: states S, a, b and goal G]
How big is its search tree (from S)?
Important: Lots of repeated structure in the search tree!
[Diagram: the search tree from S, whose branches through a and b repeat without bound]
18. Searching with a Search Tree
o Search:
o Expand out potential plans (tree nodes)
o Maintain a fringe of partial plans under consideration
o Try to expand as few tree nodes as possible
19. General Tree Search
o Important ideas:
o Fringe
o Expansion
o Exploration strategy
o Main question: which fringe nodes to explore?
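The general scheme above (fringe, expansion, exploration strategy) can be sketched as one function. This is an illustrative sketch, not the course's reference implementation: the `successors` callback and the list-based fringe are assumptions, and the `pop_index` parameter stands in for the exploration strategy.

```python
# General tree search: maintain a fringe of partial plans, pick one
# (exploration strategy), expand it, and repeat.

def tree_search(start, goal_test, successors, pop_index=0):
    """Generic tree search. pop_index=0 pops FIFO (BFS-like);
    pop_index=-1 pops LIFO (DFS-like)."""
    fringe = [[start]]                        # fringe of partial plans (paths)
    while fringe:
        path = fringe.pop(pop_index)          # which fringe node to explore?
        state = path[-1]
        if goal_test(state):
            return path                       # a plan that reaches the goal
        for nxt, _cost in successors(state):  # expansion: grow the plan
            fringe.append(path + [nxt])
    return None
```

Note this never checks whether a state was visited before, so on graphs with cycles it can expand the same state forever, which is exactly the repeated structure seen in the 4-state quiz.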
21. Example: Tree Search
[Diagram: tree search on the example graph (start S, goal G); the partial plans below are generated in order]
s
s → d
s → e
s → p
s → d → b
s → d → c
s → d → e
s → d → e → h
s → d → e → r
s → d → e → r → f
s → d → e → r → f → c
s → d → e → r → f → G
23. Depth-First Search
Strategy: expand a deepest node first
Implementation: Fringe is a LIFO stack
[Diagram: DFS on the example graph (start S, goal G); the search tree grows down one branch before backtracking]
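The DFS strategy above can be sketched with paths on a stack. This is an illustrative sketch: the cycle check (`nxt not in path`) is an assumption added so the example terminates on graphs with cycles.

```python
# Depth-first search: the fringe is a LIFO stack of partial paths.

def depth_first_search(start, goal_test, successors):
    stack = [[start]]                 # fringe: LIFO stack
    while stack:
        path = stack.pop()            # expand a deepest node first
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in path:       # assumption: skip cycles within a path
                stack.append(path + [nxt])
    return None
```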
25. Breadth-First Search
Strategy: expand a shallowest node first
Implementation: Fringe is a FIFO queue
[Diagram: BFS on the example graph (start S, goal G); the search tree is expanded tier by tier (“search tiers”)]
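The BFS strategy differs from DFS only in the fringe discipline: a FIFO queue instead of a LIFO stack. A sketch, with the same assumed cycle check as the DFS example:

```python
# Breadth-first search: the fringe is a FIFO queue of partial paths.
from collections import deque

def breadth_first_search(start, goal_test, successors):
    queue = deque([[start]])          # fringe: FIFO queue
    while queue:
        path = queue.popleft()        # expand a shallowest node first
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in successors(state):
            if nxt not in path:       # assumption: skip cycles within a path
                queue.append(path + [nxt])
    return None
```

Because whole tiers are finished before the next one starts, the first goal found is one with the fewest actions (though not necessarily the cheapest when step costs differ).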
26. Quiz: DFS vs BFS
28. Uniform Cost Search
Strategy: expand a cheapest node first
Implementation: Fringe is a priority queue (priority: cumulative cost)
[Diagram: UCS on the example graph with edge costs; nodes are expanded in order of cumulative path cost, sweeping outward in “cost contours”]
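Uniform cost search swaps the fringe for a priority queue keyed by cumulative cost. A sketch, assuming `successors` yields `(state, step_cost)` pairs; the `best` table that skips re-expansion of already-expanded states is an assumption added so the example terminates on graphs with cycles.

```python
# Uniform cost search: fringe is a priority queue ordered by
# cumulative path cost.
import heapq

def uniform_cost_search(start, goal_test, successors):
    fringe = [(0, [start])]                 # (cumulative cost, path)
    best = {}                               # cheapest expansion cost per state
    while fringe:
        cost, path = heapq.heappop(fringe)  # expand a cheapest node first
        state = path[-1]
        if goal_test(state):
            return cost, path
        if best.get(state, float("inf")) <= cost:
            continue                        # already expanded more cheaply
        best[state] = cost
        for nxt, step in successors(state):
            heapq.heappush(fringe, (cost + step, path + [nxt]))
    return None
```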
30. Search Heuristics
▪ A heuristic is:
▪ A function that estimates how close a state is to a goal
▪ Designed for a particular search problem
▪ Examples: Manhattan distance, Euclidean distance for pathing
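The two example heuristics named on the slide are one-liners; representing grid positions as `(x, y)` tuples is an assumption for illustration.

```python
# Manhattan and Euclidean distance heuristics for grid pathing.
import math

def manhattan(p, q):
    """Sum of absolute coordinate differences (no diagonal moves)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def euclidean(p, q):
    """Straight-line distance between the two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])
```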
36. Combining UCS and Greedy
o Uniform-cost orders by path cost, or backward cost g(n)
o Greedy orders by goal proximity, or forward cost h(n)
o A* Search orders by the sum: f(n) = g(n) + h(n)
[Diagram (example: Teg Grenager): a small graph with edge costs and a heuristic value h at each state; the search tree annotates each node with its backward cost g and heuristic h, and A* expands nodes in increasing order of f(n) = g(n) + h(n)]
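A* is the UCS sketch with the priority changed from g(n) to f(n) = g(n) + h(n). A sketch under the same assumptions as the UCS example; note it stops only when a goal is dequeued, not when one is enqueued.

```python
# A* search: fringe ordered by f(n) = g(n) + h(n).
import heapq

def a_star(start, goal_test, successors, h):
    fringe = [(h(start), 0, [start])]       # (f = g + h, g, path)
    best = {}                               # cheapest expansion cost per state
    while fringe:
        f, g, path = heapq.heappop(fringe)
        state = path[-1]
        if goal_test(state):                # stop only when a goal is dequeued
            return g, path
        if best.get(state, float("inf")) <= g:
            continue
        best[state] = g
        for nxt, step in successors(state):
            heapq.heappush(fringe, (g + step + h(nxt), g + step, path + [nxt]))
    return None
```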
37. When should A* terminate?
o Should we stop when we enqueue a goal?
o No: only stop when we dequeue a goal
[Diagram: a 4-state graph (S, A, B, G) with edge costs and heuristic values in which a path to G is enqueued before a cheaper one is found; stopping on enqueue would return the wrong answer]
38. Is A* Optimal?
o What went wrong?
o Actual bad goal cost < estimated good goal cost
o We need estimates to be less than actual costs!
[Diagram: S → A → G with step costs 1 and 3, and S → G directly with cost 5; h(S) = 7, h(A) = 6, h(G) = 0. The inadmissible h(A) hides the cheaper path through A.]
40. Idea: Admissibility
Inadmissible (pessimistic) heuristics break optimality by trapping good plans on the fringe
Admissible (optimistic) heuristics slow down bad plans but never outweigh true costs
41. Admissible Heuristics
o A heuristic h is admissible (optimistic) if 0 ≤ h(n) ≤ h*(n), where h*(n) is the true cost to a nearest goal
o Coming up with admissible heuristics is most of what’s involved in using A* in practice.
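The admissibility condition can be checked mechanically for a small graph. This sketch (an illustration, not from the slides) computes h*(n) with Dijkstra's algorithm over reversed edges and then tests 0 ≤ h(n) ≤ h*(n) at every state.

```python
# Checking admissibility: h is admissible iff 0 <= h(n) <= h*(n).
import heapq

def true_costs_to_goal(edges, goal):
    """h*(n): cheapest cost from each state to the goal
    (Dijkstra from the goal over reversed edges)."""
    rev = {}
    for u, nbrs in edges.items():
        for v, w in nbrs:
            rev.setdefault(v, []).append((u, w))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in rev.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def is_admissible(h, edges, goal):
    """True iff 0 <= h(n) <= h*(n) for every state with a heuristic value."""
    hstar = true_costs_to_goal(edges, goal)
    return all(0 <= h[n] <= hstar.get(n, float("inf")) for n in h)
```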
42. A* Applications
o Video games
o Pathing / routing problems
o Resource planning problems
o Robot motion planning
o Language analysis
o …
43. A*: Summary
o A* uses both backward costs and (estimates of) forward costs
o A* is optimal with admissible / consistent heuristics
o Heuristic design is key: often use relaxed problems
45. Good Behavior: The Concept of Rationality
o A rational agent is one that does the right thing
o What is the right thing?
o Consider the consequences of the agent’s behavior
o The agent’s actions depend on its percepts
o The sequence of the agent’s actions moves the environment through a sequence of states
o If that sequence of states is desirable, the agent has performed well
46. Good Behavior: The Concept of Rationality
o Can we define success by our own opinion?
o No: there is no fixed performance measure for all agents and tasks
o Suppose that for a vacuum cleaner the performance measure is the amount of dirt cleaned
o How would a rational agent perform under this measure? Could it game the reward?
o It is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave
47. Good Behavior: The Concept of Rationality
o Rationality depends on
o The performance measure that defines the criterion of success
o The agent’s prior knowledge of the environment
o The actions that the agent can perform
o The agent’s percept sequence to date
o For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
48. Omniscience, learning, and autonomy
o What is the difference between rationality and omniscience?
o An omniscient agent knows the actual outcome of its actions and can act accordingly
o Rationality maximizes expected performance, while perfection maximizes actual performance
o Rationality does not require omniscience. Why?
49. Omniscience, learning, and autonomy
o If an agent does not look both ways before crossing a busy road, its percept sequence will not tell it that a large truck is approaching at high speed
o Is it OK to cross the road?
50. Omniscience, learning, and autonomy
o It would not be rational to cross the road given this uninformative percept sequence
o A rational agent should choose the “looking” action before stepping into the street
o Doing actions in order to modify future percepts is sometimes called information gathering
o A rational agent should not only gather information but also learn as much as possible from what it perceives.
51. Omniscience, learning, and autonomy
o If the environment is fully known a priori, the agent need not learn; it can simply act correctly
o When an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy
o A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge
52. Omniscience, learning, and autonomy
o After sufficient experience of its environment, the behavior of a rational agent can become effectively independent of its prior knowledge