Artificial Intelligence Planning
Introduction and Overview
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
What is Planning (in AI)?
•now: what do we mean by (AI) planning?
•A Conceptual Model for Planning
•Planning and Search
•Example Problems
Human Planning and Acting
• acting without (explicit) planning:
– when purpose is immediate
– when performing well-trained behaviours
– when course of action can be freely adapted
• acting after planning:
– when addressing a new situation
– when tasks are complex
– when the environment imposes high risk/cost
– when collaborating with others
• people plan only when strictly necessary
Human Planning and Acting
•humans rarely plan before acting in everyday situations
•acting without (explicit) planning: (may be subconscious)
•when purpose is immediate (e.g. switch on computer)
•when performing well-trained behaviours (e.g. drive car)
•when course of action can be freely adapted (e.g. shopping)
•acting after planning:
•when addressing a new situation (e.g. move house)
•when tasks are complex (e.g. plan this course)
•when the environment imposes high risk/cost (e.g. manage nuclear power
station)
•when collaborating with others (e.g. build house)
•people plan only when strictly necessary
•because planning is complicated and time-consuming (trade-off: cost vs.
benefit)
•often we seek only good rather than optimal plans
Defining AI Planning
• planning:
– explicit deliberation process that chooses and organizes
actions by anticipating their outcomes
– aims at achieving some pre-stated objectives
• AI planning:
– computational study of this deliberation process
Defining AI Planning
•planning:
•explicit deliberation process that chooses and organizes actions by
anticipating their outcomes
•in short: planning is reasoning about actions
•aims at achieving some pre-stated objectives
•or: achieving them as best as possible (planning as optimization
problem)
•AI planning:
•computational study of this deliberation process
Why Study Planning in AI?
• scientific goal of AI:
understand intelligence
– planning is an important component of rational
(intelligent) behaviour
• engineering goal of AI:
build intelligent entities
– build planning software for choosing and
organizing actions for autonomous intelligent
machines
Why Study Planning in AI?
•scientific goal of AI: understand intelligence
•planning is an important component of rational (intelligent) behaviour
•planning is part of intelligent behaviour
•engineering goal of AI: build intelligent entities
•build planning software for choosing and organizing actions for
autonomous intelligent machines
•example: Mars explorer (cannot be remotely operated in real time because of the signal delay)
•robot: Shakey, SRI 1968
Domain-Specific vs.
Domain-Independent Planning
• domain-specific planning: use specific representations and
techniques adapted to each problem
– important domains: path and motion planning, perception planning,
manipulation planning, communication planning
• domain-independent planning: use generic representations and
techniques
– exploit commonalities across all forms of planning
– leads to general understanding of planning
• domain-independent planning complements domain-specific
planning
Domain-Specific vs. Domain-Independent Planning
•domain-specific planning: use specific representations and techniques
adapted to each problem
•important domains: path and motion planning, perception planning,
manipulation planning, communication planning
•domain-independent planning: use generic representations and techniques
•exploit commonalities across all forms of planning
•saves effort; no need to reinvent same techniques for different
problems
•leads to general understanding of planning
•contributes to scientific goal of AI
•domain-independent planning complements domain-specific planning
•use domain-specific planning where a highly efficient solution is required
Quiz: True or False?
• people only plan when they have to because the benefit of an optimal
plan does not always justify the effort of planning
• for humans, planning is a subconscious process, which is why
computational planning is so hard
• planning involves a mental simulation of actions to foresee future world
states and compare them to goals
• in Artificial Intelligence, planning is concerned with the search for
computationally optimal plans
• domain-specific planning is used when efficiency is vital, whereas
domain-independent planning is good for planning from first principles
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•just done: what do we mean by (AI) planning?
A Conceptual Model for Planning
•now: state-transition systems – formalizing the problem
•Planning and Search
•Example Problems
Why a Conceptual Model?
• conceptual model: theoretical device for describing the
elements of a problem
• good for:
– explaining basic concepts
– clarifying assumptions
– analyzing requirements
– proving semantic properties
• not good for:
– efficient algorithms and computational concerns
Why a Conceptual Model?
•conceptual model: theoretical device for describing the elements of a problem
•good for:
•explaining basic concepts: what are the objects to be manipulated during
problem-solving?
•clarifying assumptions: what are the constraints imposed by this model?
•analyzing requirements: what representations do we need for the objects?
•proving semantic properties: when is an algorithm sound or complete?
•not good for:
•efficient algorithms and computational concerns
Conceptual Model for Planning:
State-Transition Systems
• A state-transition system is a 4-tuple Σ = (S,A,E,γ), where:
– S = {s1,s2,…} is a finite or recursively enumerable set of states;
– A = {a1,a2,…} is a finite or recursively enumerable set of actions;
– E = {e1,e2,…} is a finite or recursively enumerable set of events;
and
– γ: S×(A∪E)→2^S is a state transition function.
• if a∈A and γ(s,a) ≠ ∅ then a is applicable in s
• applying a in s will take the system to s′∈γ(s,a)
Conceptual Model for Planning: State-Transition Systems
•A state-transition system is a 4-tuple Σ=(S,A,E,γ), where:
•a general model for a dynamic system, common to other areas of computer science; aka. discrete-event system
•S = {s1,s2,…} is a finite or recursively enumerable set of states;
•the possible states the world can be in
•A = {a1,a2,…} is a finite or recursively enumerable set of actions;
•the actions that can be performed by some agent in the world,
transitions are controlled by the plan executor
•E = {e1,e2,…} is a finite or recursively enumerable set of events; and
•the events that can occur in the world, transitions that are contingent
(correspond to the internal dynamics of the system)
•γ: S×(A∪E)→2^S is a state transition function.
•notation: 2^S = powerset of S; maps to a set of states
•the function describing how the world evolves when actions or events
occur
•note: model does not allow for parallelism between actions and/or events
•if a∈A and γ(s,a) ≠ ∅ then a is applicable in s
•applying a in s will take the system to s′∈γ(s,a)
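To make the 4-tuple concrete, here is a minimal Python sketch of how Σ could be stored and queried; the class name StateTransitionSystem and the string encoding of states, actions, and events are assumptions of this write-up, not part of the original slides.

from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

State = str    # labels stand in for real states, actions, and events
Action = str
Event = str

@dataclass
class StateTransitionSystem:
    """Sigma = (S, A, E, gamma) as defined above."""
    states: FrozenSet[State]
    actions: FrozenSet[Action]
    events: FrozenSet[Event]
    # gamma maps (state, action or event) to a set of successor states, i.e. an element of 2^S
    gamma: Dict[Tuple[State, str], FrozenSet[State]]

    def successors(self, s: State, u: str) -> FrozenSet[State]:
        return self.gamma.get((s, u), frozenset())

    def applicable(self, s: State, a: Action) -> bool:
        # a is applicable in s iff gamma(s, a) is non-empty
        return a in self.actions and bool(self.successors(s, a))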
State-Transition Systems as Graphs
• A state-transition system Σ = (S,A,E,γ) can be
represented by a directed labelled graph G = (NG,EG)
where:
– the nodes correspond to the states in S, i.e. NG=S; and
– there is an arc from s∈NG to s′∈NG, i.e. s→s′∈EG, with label
u∈(A∪E) if and only if s′∈γ(s,u).
State-Transition Systems as Graphs
•A state-transition system Σ=(S,A,E,γ) can be represented by a directed labelled
graph G=(NG,EG) where:
•the nodes correspond to the states in S, i.e. NG=S; and
•nodes correspond to world states
•there is an arc from s∈NG to s′∈NG, i.e. s→s′∈EG, with label a∈(A∪E) if and
only if s′∈γ(s,a).
•there is an arc if there is an action or event that transforms one state
into the other (called a state transition)
•the label of that arc is that action or event
Toy Problem:
Missionaries and Cannibals
On one bank of a river are three
missionaries (black triangles) and
three cannibals (red circles). There is
one boat available that can hold up
to two people and that they would like to use to
cross the river. If the cannibals ever outnumber the missionaries on
either of the river’s banks, the missionaries will get eaten.
How can the boat be used to safely carry all the missionaries and
cannibals across the river?
Toy Problem: Missionaries and Cannibals
•On one bank of a river are three missionaries (black triangles) and three
cannibals (red circles).
•description of initial state and graphical representation; other states must
follow the same pattern
•There is one boat available that can hold up to two people and that they would
like to use to cross the river.
•more state description: boat; implicitly: description of possible action: cross
river using boat
•If the cannibals ever outnumber the missionaries on either of the river’s
banks, the missionaries will get eaten.
•constraint on all states; must not be violated
•How can the boat be used to safely carry all the missionaries and cannibals
across the river?
•problem: find a sequence of actions that brings about the desired (goal) state
State-Transition Graph Example:
Missionaries and Cannibals
[Figure: state-transition graph for the Missionaries and Cannibals problem; nodes are the safe states and arcs are labelled with boat crossings such as 1c, 2c, 1m, 2m, and 1m1c.]
Objectives and Plans
• state-transition system:
– describes all ways in which a system may evolve
• plan:
– a structure that gives appropriate actions to apply in order to achieve
some objective when starting from a given state
• types of objective:
– goal state sg or set of goal states Sg
– satisfy some conditions over the sequence of states
– optimize utility function attached to states
– task to be performed
Objectives and Plans
•state-transition system:
•describes all ways in which a system may evolve
•plan:
•a structure that gives appropriate actions to apply in order to achieve
some objective when starting from a given state
•structure: e.g. sequential list of actions to be performed in order;
function mapping states to actions
•describes a path through the state-transition graph
•types of objective:
•goal state sg or set of goal states Sg
•simplest case; objective achieved by sequence of transitions that ends
in one of the goal states
•satisfy some conditions over the sequence of states
•example: states to be avoided or visited (goal state is special case)
•optimize utility function attached to states
•optimize compound function (sum, max) of utilities of visited states
•task to be performed
•alternative approach: recursively defined set of actions and (other)
tasks
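As a small illustration (not from the slides), the two plan structures mentioned above can be written down directly in Python: a sequential plan as a list of actions, and a policy as a mapping from states to actions. The state and action labels below are made up for the example.

# A sequential plan: actions to be executed in order.
sequential_plan = ["take", "load", "move1", "unload"]

# A policy: a mapping from each state to the action to apply there.
policy = {
    "s0": "take",
    "s1": "load",
    "s2": "move1",
    "s3": "unload",
}

def next_action(state, policy):
    """Controller-style lookup: which action does the plan prescribe in this state?"""
    return policy.get(state)  # None if the policy says nothing about this state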
Planning and Plan Execution
• planner:
– given: description of Σ, initial state,
objective
– generate: plan that achieves objective
• controller:
– given: plan, current state (observation
function: η:S→O)
– generate: action
• state-transition system:
– evolves as actions are executed and
events occur
[Figure: block diagram. The planner receives the description of Σ, the initial state, and the objectives, and passes a plan to the controller; the controller receives observations from the system Σ and sends actions to it; events also act on Σ.]
Planning and Plan Execution
•planner:
•given: description of Σ, initial state, objective
•generate: plan that achieves objective
•planner works offline: relies on formal description of Σ
•controller:
•given: plan, current state (observation function: η:S→O)
•partial knowledge of controller about world modelled through
observation function
•O = set of possible observations (e.g. subset); input for controller
•generate: action
•controller works online: along with the dynamics of Σ
•state-transition system:
•evolves as actions are executed and events occur
Dynamic Planning
• problem: real world differs from model
described by Σ
• more realistic model: interleaved planning
and execution
– plan supervision
– plan revision
– re-planning
• dynamic planning: closed loop between
planner and controller
– execution status
[Figure: the same block diagram, but with plans (rather than a single plan) passed to the controller and an execution status fed back from the controller to the planner, closing the loop.]
Dynamic Planning
•problem: physical system differs from model described by Σ
•planner only has access to model (description of Σ)
•controller must cope with differences between Σ and real world
•more realistic model: interleaved planning and execution
•plan supervision: detect when observations differ from expected results
•plan revision: adapt existing plan to new circumstances
•re-planning: generate a new plan from current (initial) state
•dynamic planning: closed loop between planner and controller
•execution status
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•just done: state-transition systems – formalizing the problem
Planning and Search
•now: problem formulation and search problems
•Example Problems
Toy Problems vs.
Real-World Problems
Toy Problems/Puzzles
– concise and exact description
– used for illustration purposes
(e.g. here)
– used for performance
comparisons
Real-World Problems
– no single, agreed-upon
description
– people care about the
solutions
Toy Problems vs. Real-World Problems
•Toy Problems/Puzzles
•concise and exact description
•used for illustration purposes (e.g. here)
•used for performance comparisons
•toy problems combine
•richness: they are anything but trivial to solve
•simplicity: they are described easily and precisely
•Real-World Problems
•no single, agreed-upon description
•people care about the solutions
•most real-world problems would take hours to define
Search Problems
• initial state
• set of possible actions/applicability conditions
– successor function: state → set of <action, state>
– successor function + initial state = state space
– path (solution)
• goal
– goal state or goal test function
• path cost function
– for optimality
– assumption: path cost = sum of step costs
Search Problems
•initial state: current state the world is in (state = situation)
•states: symbol structures representing real-world objects and relations (cf. physical symbol systems)
•finite set of possible actions (aka. operators or production rules (problem
formulation))/applicability conditions
•successor function: state → set of <action, state>: the action is applicable in the given state; the result of applying the action in the given state is the paired state
•successor function + initial state = state space: directed graph with states
as nodes and actions as arcs
•path (in the graph) (solution)
•goal (goal formulation)
•goal state (for unique goal state) or goal test function (for multiple goal
states (e.g. in chess))
•Solution: path in state space from initial state to goal state
•path cost function
•for optimality: find solution path with minimal path cost
•assumption: path cost = sum of step costs (cost of applying a given action
in a given state)
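A hedged sketch of how these ingredients (initial state, successor function, goal test, path cost) could be bundled into a single problem object in Python; the class name SearchProblem and its attribute names are illustrative choices, not a standard API.

class SearchProblem:
    """Bundles the components of a search problem as listed above."""

    def __init__(self, initial_state, successors, goal_test, step_cost=lambda s, a: 1):
        self.initial_state = initial_state   # the state the world is in
        self.successors = successors         # state -> iterable of <action, state> pairs
        self.goal_test = goal_test           # state -> bool (handles multiple goal states)
        self.step_cost = step_cost           # (state, action) -> number; default: unit cost

    def path_cost(self, steps):
        # assumption from the slide: path cost = sum of step costs
        return sum(self.step_cost(s, a) for s, a in steps)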
Missionaries and Cannibals: Successor
Function
state → set of <action, state>
(L:3m,3c,b-R:0m,0c) → {<2c, (L:3m,1c-R:0m,2c,b)>, <1m1c, (L:2m,2c-R:1m,1c,b)>, <1c, (L:3m,2c-R:0m,1c,b)>}
(L:3m,1c-R:0m,2c,b) → {<2c, (L:3m,3c,b-R:0m,0c)>, <1c, (L:3m,2c,b-R:0m,1c)>}
(L:2m,2c-R:1m,1c,b) → {<1m1c, (L:3m,3c,b-R:0m,0c)>, <1m, (L:3m,2c,b-R:0m,1c)>}
Missionaries and Cannibals: Successor Function
•state → set of <action, state> (domain: states; range: sets of pairs)
•(L:3m,3c,b-R:0m,0c) → {<2c, (L:3m,1c-R:0m,2c,b)>, <1m1c, (L:2m,2c-R:1m,1c,b)>, <1c, (L:3m,2c-R:0m,1c,b)>}
•states:
•L/R: on left/right bank
•m/c: missionaries/cannibals (example: 3 missionaries and three
cannibals on left bank, none on right bank)
•b: boat (example: boat on left bank)
•actions:
•m/c: missionaries/cannibals crossing (example(s): 2 cannibals crossing
(L to R), 1m and 1c crossing; 1c crossing)
•(L:3m,1c-R:0m,2c,b) → {<2c, (L:3m,3c,b-R:0m,0c)>, <1c, (L:3m,2c,b-R:0m,1c)>} (note: only two actions applicable)
•(L:2m,2c-R:1m,1c,b) → {<1m1c, (L:3m,3c,b-R:0m,0c)>, <1m, (L:3m,2c,b-R:0m,1c)>}
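The successor function tabulated above can also be computed rather than listed by hand. Below is a sketch using the slide's state encoding (how many missionaries and cannibals are on the left bank, and whether the boat is there); the function names mc_successors and safe are assumptions for illustration.

# State: (missionaries on left bank, cannibals on left bank, boat on left bank?)
# The right bank holds the remaining 3 - m missionaries and 3 - c cannibals.

CROSSINGS = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # 1m, 2m, 1c, 2c, 1m1c

def safe(m, c):
    """No bank may have its missionaries outnumbered by cannibals."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def mc_successors(state):
    """Return the set of <action, state> pairs applicable in the given state."""
    m, c, boat_left = state
    result = []
    for dm, dc in CROSSINGS:
        if boat_left:                      # boat crosses from left to right
            nm, nc = m - dm, c - dc
        else:                              # boat crosses from right to left
            nm, nc = m + dm, c + dc
        if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
            action = f"{dm}m{dc}c".replace("0m", "").replace("0c", "")
            result.append((action, (nm, nc, not boat_left)))
    return result

# Example: the initial state (all on the left bank, boat on the left)
print(mc_successors((3, 3, True)))  # yields the three pairs shown in the table above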
Real-World Problem:
Touring in Romania
[Figure: road map of Romania showing the cities Arad, Zerind, Oradea, Sibiu, Timisoara, Lugoj, Mehadia, Dobreta, Craiova, Rimnicu Vilcea, Fagaras, Pitesti, Bucharest, Giurgiu, Urziceni, Hirsova, Eforie, Vaslui, Iasi, and Neamt, with road distances in km.]
Real-World Problem: Touring in Romania
•shown: rough map of Romania
•initial state: on vacation in Arad, Romania
•goal? actions? -- “Touring Romania” cannot readily be described in terms of possible
actions, goals, and path cost
Problem Formulation
• problem formulation
– process of deciding what actions and states to consider
– granularity/abstraction level
• assumptions about the
environment:
– finite and discrete
– fully observable
– deterministic
– static
• other assumptions:
– restricted goals
– sequential plans
– implicit time
– offline planning
Problem Formulation
•problem formulation
•process of deciding what actions and states to consider
•granularity/abstraction level
•TR example:
•possible actions: turn steering wheel by one degree left/right; move left
foot … too much (irrelevant) detail, uncertainty, too many steps in
solution
•possible action: drive to Bucharest (but how to execute such an action?)
•best level of granularity: drive to another major town to which there is a
direct road
•assumptions about the environment:
•finite and discrete: we can enumerate all the possible actions and states
(else: continuous)
•fully observable: agent can see what current state it is in
•deterministic: applying an action in a given state leads to exactly one
(usually different) state
•static: environment does not change while we think (no events)
•other assumptions:
•restricted goals: given as an explicit goal state sg or set of goal states Sg
•sequential plans: solution plan is a linearly ordered finite sequence of
actions
•implicit time: actions and events have no duration
•offline planning: planner is not concerned with changes of Σ while planning
Search Nodes
• search nodes: the nodes in the search tree
• data structure:
– state: a state in the state space
– parent node: the immediate predecessor in the search tree
– action: the action that, performed in the parent node’s state, leads to
this node’s state
– path cost: the total cost of the path leading to this node
– depth: the depth of this node in the search tree
Search Nodes
•search nodes: the nodes in the search tree
•node is a bookkeeping structure in a search tree
•data structure:
•state: a state in the state space
•state (vs. node) corresponds to a configuration of the world
•two nodes may contain equal states
•parent node: the immediate predecessor in the search tree
•nodes are on paths (defined by parent nodes)
•action: the action that, performed in the parent node’s state, leads to
this node’s state
•path cost: the total cost of the path leading to this node
•depth: the depth of this node in the search tree
•alternative: representing paths only (sequences of actions):
•possible, but state provides direct access to valuable information that might
be expensive to regenerate all the time
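A minimal Python rendering of this bookkeeping structure; the class name SearchNode and the helper path_to are choices of this sketch, not prescribed by the slides.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class SearchNode:
    state: Any                             # a state in the state space
    parent: Optional["SearchNode"] = None  # immediate predecessor in the search tree
    action: Any = None                     # action that led from parent.state to this state
    path_cost: float = 0.0                 # total cost of the path leading to this node
    depth: int = 0                         # depth of this node in the search tree

def path_to(node):
    """Recover the action sequence by following parent links back to the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))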
General Tree Search Algorithm
function treeSearch(problem, strategy)
  fringe ← { new searchNode(problem.initialState) }
  loop
    if empty(fringe) then return failure
    node ← selectFrom(fringe, strategy)
    if problem.goalTest(node.state) then
      return pathTo(node)
    fringe ← fringe + expand(problem, node)
General Tree Search Algorithm
function treeSearch(problem, strategy)
•find a solution to the given problem while expanding nodes according to the given
strategy
fringe ← { new searchNode(problem.initialState) }
•fringe: the set of generated but not yet expanded nodes; initially just a node for the initial state
loop
•possibly infinite loop expands nodes
if empty(fringe) then return failure
•complete tree explored; no goal state found
node ← selectFrom(fringe, strategy)
•select node from fringe according to search control strategy; the node will not
be selected again
if problem.goalTest(node.state) then
•goal test before expansion: to avoid trick problem like “get from Arad to Arad”
return pathTo(node)
•success: goal node found
fringe ← fringe + expand(problem, node)
•otherwise: add new nodes to the fringe and continue loop
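For reference, a runnable Python version of the pseudocode above, reusing the SearchProblem, SearchNode, and path_to sketches from the earlier inserts; it is an illustrative rendering, not the lecture's official implementation. The strategy is passed in as a function that removes and returns one node from the fringe.

def expand(problem, node):
    """Generate a child node for every <action, state> pair the successor function yields."""
    children = []
    for action, result_state in problem.successors(node.state):
        children.append(SearchNode(
            state=result_state,
            parent=node,
            action=action,
            path_cost=node.path_cost + problem.step_cost(node.state, action),
            depth=node.depth + 1,
        ))
    return children

def tree_search(problem, select_from):
    """select_from(fringe) must remove and return one node: it is the search strategy."""
    fringe = [SearchNode(problem.initial_state)]
    while fringe:                      # empty fringe: complete tree explored, failure
        node = select_from(fringe)     # the node will not be selected again
        if problem.goal_test(node.state):
            return path_to(node)       # success: action sequence to a goal state
        fringe.extend(expand(problem, node))
    return None

# Example (using the Missionaries and Cannibals sketch from above):
# plan = tree_search(SearchProblem((3, 3, True), mc_successors,
#                                  lambda s: s == (0, 0, False)),
#                    lambda fringe: fringe.pop(0))   # FIFO fringe = breadth-first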
Search (Control) Strategy
• search or control strategy: an effective method for
scheduling the application of the successor function to
expand nodes
– selects the next node to be expanded from the fringe
– determines the order in which nodes are expanded
– aim: produce a goal state as quickly as possible
• examples:
– LIFO/FIFO-queue for fringe nodes
– alphabetical ordering
Search (Control) Strategy
•search or control strategy: an effective method for scheduling the application
of the successor function to expand nodes
•removes non-determinism from search method
•selects the next node to be expanded from the fringe
•closed nodes never need to be expanded again
•determines the order in which nodes are expanded
•exact order makes method deterministic
•aim: produce a goal state as quickly as possible
•strategy that produces goal state quicker is usually considered better
•examples:
•LIFO/FIFO-queue for fringe nodes (two fundamental search strategies)
•alphabetical ordering
•remark: complete search tree is usually too large to fit into memory, strategy
determines which part to generate
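The two fundamental strategies above can be expressed as different selection functions for the tree_search sketch given earlier (again an assumption of this write-up, not the lecture's notation).

# FIFO: always expand the oldest node on the fringe -> breadth-first search
def fifo_select(fringe):
    return fringe.pop(0)

# LIFO: always expand the most recently added node -> depth-first search
def lifo_select(fringe):
    return fringe.pop()

# plan = tree_search(problem, fifo_select)   # breadth-first
# plan = tree_search(problem, lifo_select)   # depth-first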
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•Planning and Search
•just done: problem formulation and search problems
Example Problems
•now: more examples incl. DWR problem
Toy Problem:
Sliding-Block Puzzle
The 8-Puzzle
[Figure: two 3×3 boards: an example start state with scrambled tiles (left) and the goal state (right).]
Toy Problem: Sliding-Block Puzzle
•The 8-Puzzle
•problem description:
•states: location of each tile and the blank (9!/2 = 181440 states)
•initial state: any (reachable) state, e.g. one shown on left (worst case: solution
requires at least 31 steps)
•actions: good formulation: move blank left/right/up/down
•goal test: goal state (shown right)
•step cost: 1
•generalization: sliding-tile puzzles, e.g. 15-puzzle, 24-puzzle
•problem class is NP-complete
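Following the recommended formulation ("move the blank left/right/up/down"), here is a small sketch of an 8-puzzle successor function; the tuple-of-nine state encoding, the GOAL constant, and the name puzzle_successors are assumptions for illustration.

# A state is a tuple of 9 entries (rows concatenated); 0 marks the blank.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # goal test: state == GOAL

MOVES = {"up": -3, "down": +3, "left": -1, "right": +1}

def puzzle_successors(state):
    """Yield <action, state> pairs obtained by sliding the blank."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for action, delta in MOVES.items():
        # stay on the board: no wrap-around for left/right, no off-grid for up/down
        if action == "left" and col == 0:
            continue
        if action == "right" and col == 2:
            continue
        if action == "up" and row == 0:
            continue
        if action == "down" and row == 2:
            continue
        target = blank + delta
        board = list(state)
        board[blank], board[target] = board[target], board[blank]
        yield action, tuple(board)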
Toy Problem: N-Queens Problem
Place n queens on an
n by n chess board
such that none of the
queens attacks any of
the others.
Toy Problem: N-Queens Problem
•Place n queens on an n by n chess board such that none of the queens
attacks any of the others.
•problem: no row, column, or diagonal may contain more than one queen
•path cost irrelevant; looking for goal state
•configuration shown is not a goal state
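Since path cost is irrelevant here, the heart of a formulation is the goal test. A minimal sketch, assuming the common encoding where queens[i] gives the column of the queen placed in row i (so rows are distinct by construction); the function name no_queen_attacks is made up for this example.

def no_queen_attacks(queens):
    """True iff no two queens share a column or a diagonal (rows differ by encoding)."""
    n = len(queens)
    for i in range(n):
        for j in range(i + 1, n):
            same_column = queens[i] == queens[j]
            same_diagonal = abs(queens[i] - queens[j]) == j - i
            if same_column or same_diagonal:
                return False
    return True

# Example: a known solution for n = 4 and a non-solution
print(no_queen_attacks([1, 3, 0, 2]))  # True
print(no_queen_attacks([0, 1, 2, 3]))  # False: all queens on one diagonal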
The Dock-Worker Robots (DWR)
Domain
• aim: have one example to illustrate
planning procedures and techniques
• informal description:
– harbour with several locations (docks),
docked ships, storage areas for
containers, and parking areas for trucks
and trains
– cranes to load and unload ships etc., and
robot carts to move containers around
The Dock-Worker Robots (DWR) Domain
•aim: have one example to illustrate planning procedures and techniques
•problem must be nontrivial to be interesting, but not too much overhead to
introduce problem
•informal description:
•generalization of earlier example (state transition graph)
•harbour with several locations (docks), docked ships, storage areas for
containers, and parking areas for trucks and trains
•cranes to load and unload ships etc., and robot carts to move
containers around
•simplified and enriched version of this domain will be introduced later
•approach: use first-order predicate logic as representation
DWR Example State
[Figure: example DWR state with two locations l1 and l2, cranes k1 and k2, robot r1, containers ca to cf stacked on pallets in piles p1 and q1, and two further piles p2 and q2, both empty.]
Actions in the DWR Domain
• move robot r from location l to some adjacent and unoccupied
location l’
• take container c with empty crane k from the top of pile p, all
located at the same location l
• put down container c held by crane k on top of pile p, all
located at location l
• load container c held by crane k onto unloaded robot r, all
located at location l
• unload container c with empty crane k from loaded robot r,
all located at location l
Actions in the DWR Domain
•move robot r from location l to some adjacent and unoccupied location l’
•take container c with empty crane k from the top of pile p, all located at the
same location l
•put down container c held by crane k on top of pile p, all located at location l
•load container c held by crane k onto unloaded robot r, all located at location l
•unload container c with empty crane k from loaded robot r, all located at
location l
•formal specifications will follow when we have introduced a formal action description
language
•problem: how to represent actions formally? first-order logic?
State-Transition Systems: Graph Example
[Figure: state-transition graph with six states s0 to s5 for a scenario with one robot, one crane, one container, a pallet, and two locations; arcs are labelled with the actions take, put, load, unload, move1, and move2.]
State-Transition Systems: Graph Example
•states: s0 to s5
•objects: robot, crane, container, pallet, two locations
•actions:
•crane can take/put the container from the pallet/onto the pallet
•crane can load/unload the container from the robot
•robot can drive to either location (with or without the container loaded)
•no events
•state transition function: arcs shown in graph
•note: state transitions are deterministic: each action leads to at most one other state
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•Planning and Search
•Example Problems
•just done: more examples incl. DWR problem
More Related Content

Similar to 1.1 1.4-introduction

Detailed notes on Artificial Intelligence planning sjm.ppt
Detailed notes on Artificial Intelligence planning sjm.pptDetailed notes on Artificial Intelligence planning sjm.ppt
Detailed notes on Artificial Intelligence planning sjm.ppt
PrateekPatidar13
 
AI Robotics
AI RoboticsAI Robotics
AI Robotics
Yasir Khan
 
An introduction to reinforcement learning
An introduction to reinforcement learningAn introduction to reinforcement learning
An introduction to reinforcement learning
Subrat Panda, PhD
 
anintroductiontoreinforcementlearning-180912151720.pdf
anintroductiontoreinforcementlearning-180912151720.pdfanintroductiontoreinforcementlearning-180912151720.pdf
anintroductiontoreinforcementlearning-180912151720.pdf
ssuseradaf5f
 
02 problem solving_search_control
02 problem solving_search_control02 problem solving_search_control
02 problem solving_search_control
Praveen Kumar
 
chapter3part1.ppt
chapter3part1.pptchapter3part1.ppt
chapter3part1.ppt
ssuser99ca78
 
ITS for Crowds
ITS for CrowdsITS for Crowds
ITS for Crowds
Serge Hoogendoorn
 
CH4_AI_Lecture.ppt
CH4_AI_Lecture.pptCH4_AI_Lecture.ppt
CH4_AI_Lecture.ppt
AhmedNURHUSIEN
 
16355694.ppt
16355694.ppt16355694.ppt
16355694.ppt
shohel rana
 
Ml ppt at
Ml ppt atMl ppt at
Ml ppt at
pradeep kumar
 
ODSC India 2018: Topological space creation &amp; Clustering at BigData scale
ODSC India 2018: Topological space creation &amp; Clustering at BigData scaleODSC India 2018: Topological space creation &amp; Clustering at BigData scale
ODSC India 2018: Topological space creation &amp; Clustering at BigData scale
Kuldeep Jiwani
 
Aitf 2014 pem_introduction_presentation_feb28_ram_version2
Aitf 2014 pem_introduction_presentation_feb28_ram_version2Aitf 2014 pem_introduction_presentation_feb28_ram_version2
Aitf 2014 pem_introduction_presentation_feb28_ram_version2
Bob MacMillan
 
Caculus
CaculusCaculus
Caculus.pptx
Caculus.pptxCaculus.pptx
Caculus.pptx
Er. Rahul Jarariya
 
A Lightweight Infrastructure for Graph Analytics
A Lightweight Infrastructure for Graph AnalyticsA Lightweight Infrastructure for Graph Analytics
A Lightweight Infrastructure for Graph Analytics
Donald Nguyen
 
Machine learning introduction to unit 1.ppt
Machine learning introduction to unit 1.pptMachine learning introduction to unit 1.ppt
Machine learning introduction to unit 1.ppt
ShivaShiva783981
 
Artificial intelligence(04)
Artificial intelligence(04)Artificial intelligence(04)
Artificial intelligence(04)
Nazir Ahmed
 
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHMSTUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM
Avay Minni
 
Application of calculus in cse
Application of calculus in cseApplication of calculus in cse
Application of calculus in cse
Md. Janibul Hoque
 
Reinfrocement Learning
Reinfrocement LearningReinfrocement Learning
Reinfrocement Learning
Natan Katz
 

Similar to 1.1 1.4-introduction (20)

Detailed notes on Artificial Intelligence planning sjm.ppt
Detailed notes on Artificial Intelligence planning sjm.pptDetailed notes on Artificial Intelligence planning sjm.ppt
Detailed notes on Artificial Intelligence planning sjm.ppt
 
AI Robotics
AI RoboticsAI Robotics
AI Robotics
 
An introduction to reinforcement learning
An introduction to reinforcement learningAn introduction to reinforcement learning
An introduction to reinforcement learning
 
anintroductiontoreinforcementlearning-180912151720.pdf
anintroductiontoreinforcementlearning-180912151720.pdfanintroductiontoreinforcementlearning-180912151720.pdf
anintroductiontoreinforcementlearning-180912151720.pdf
 
02 problem solving_search_control
02 problem solving_search_control02 problem solving_search_control
02 problem solving_search_control
 
chapter3part1.ppt
chapter3part1.pptchapter3part1.ppt
chapter3part1.ppt
 
ITS for Crowds
ITS for CrowdsITS for Crowds
ITS for Crowds
 
CH4_AI_Lecture.ppt
CH4_AI_Lecture.pptCH4_AI_Lecture.ppt
CH4_AI_Lecture.ppt
 
16355694.ppt
16355694.ppt16355694.ppt
16355694.ppt
 
Ml ppt at
Ml ppt atMl ppt at
Ml ppt at
 
ODSC India 2018: Topological space creation &amp; Clustering at BigData scale
ODSC India 2018: Topological space creation &amp; Clustering at BigData scaleODSC India 2018: Topological space creation &amp; Clustering at BigData scale
ODSC India 2018: Topological space creation &amp; Clustering at BigData scale
 
Aitf 2014 pem_introduction_presentation_feb28_ram_version2
Aitf 2014 pem_introduction_presentation_feb28_ram_version2Aitf 2014 pem_introduction_presentation_feb28_ram_version2
Aitf 2014 pem_introduction_presentation_feb28_ram_version2
 
Caculus
CaculusCaculus
Caculus
 
Caculus.pptx
Caculus.pptxCaculus.pptx
Caculus.pptx
 
A Lightweight Infrastructure for Graph Analytics
A Lightweight Infrastructure for Graph AnalyticsA Lightweight Infrastructure for Graph Analytics
A Lightweight Infrastructure for Graph Analytics
 
Machine learning introduction to unit 1.ppt
Machine learning introduction to unit 1.pptMachine learning introduction to unit 1.ppt
Machine learning introduction to unit 1.ppt
 
Artificial intelligence(04)
Artificial intelligence(04)Artificial intelligence(04)
Artificial intelligence(04)
 
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHMSTUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM
STUDY ON PROJECT MANAGEMENT THROUGH GENETIC ALGORITHM
 
Application of calculus in cse
Application of calculus in cseApplication of calculus in cse
Application of calculus in cse
 
Reinfrocement Learning
Reinfrocement LearningReinfrocement Learning
Reinfrocement Learning
 

Recently uploaded

[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
Vietnam Cotton & Spinning Association
 
Cell The Unit of Life for NEET Multiple Choice Questions.docx
Cell The Unit of Life for NEET Multiple Choice Questions.docxCell The Unit of Life for NEET Multiple Choice Questions.docx
Cell The Unit of Life for NEET Multiple Choice Questions.docx
vasanthatpuram
 
一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理
一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理
一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理
mbawufebxi
 
一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理
keesa2
 
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases
Timothy Spann
 
Open Source Contributions to Postgres: The Basics POSETTE 2024
Open Source Contributions to Postgres: The Basics POSETTE 2024Open Source Contributions to Postgres: The Basics POSETTE 2024
Open Source Contributions to Postgres: The Basics POSETTE 2024
ElizabethGarrettChri
 
Sample Devops SRE Product Companies .pdf
Sample Devops SRE  Product Companies .pdfSample Devops SRE  Product Companies .pdf
Sample Devops SRE Product Companies .pdf
Vineet
 
Build applications with generative AI on Google Cloud
Build applications with generative AI on Google CloudBuild applications with generative AI on Google Cloud
Build applications with generative AI on Google Cloud
Márton Kodok
 
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
eoxhsaa
 
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
aguty
 
Telemetry Solution for Gaming (AWS Summit'24)
Telemetry Solution for Gaming (AWS Summit'24)Telemetry Solution for Gaming (AWS Summit'24)
Telemetry Solution for Gaming (AWS Summit'24)
GeorgiiSteshenko
 
一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理
一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理
一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理
nyvan3
 
Overview IFM June 2024 Consumer Confidence INDEX Report.pdf
Overview IFM June 2024 Consumer Confidence INDEX Report.pdfOverview IFM June 2024 Consumer Confidence INDEX Report.pdf
Overview IFM June 2024 Consumer Confidence INDEX Report.pdf
nhutnguyen355078
 
Namma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdf
Namma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdfNamma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdf
Namma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdf
22ad0301
 
Module 1 ppt BIG DATA ANALYTICS_NOTES FOR MCA
Module 1 ppt BIG DATA ANALYTICS_NOTES FOR MCAModule 1 ppt BIG DATA ANALYTICS_NOTES FOR MCA
Module 1 ppt BIG DATA ANALYTICS_NOTES FOR MCA
yuvarajkumar334
 
[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024
Vietnam Cotton & Spinning Association
 
一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理
一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理
一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理
ywqeos
 
8 things to know before you start to code in 2024
8 things to know before you start to code in 20248 things to know before you start to code in 2024
8 things to know before you start to code in 2024
ArianaRamos54
 
一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理
一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理
一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理
osoyvvf
 
一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理
一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理
一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理
actyx
 

Recently uploaded (20)

[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
 
Cell The Unit of Life for NEET Multiple Choice Questions.docx
Cell The Unit of Life for NEET Multiple Choice Questions.docxCell The Unit of Life for NEET Multiple Choice Questions.docx
Cell The Unit of Life for NEET Multiple Choice Questions.docx
 
一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理
一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理
一比一原版雷丁大学毕业证(UoR毕业证书)学历如何办理
 
一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理
 
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases
06-20-2024-AI Camp Meetup-Unstructured Data and Vector Databases
 
Open Source Contributions to Postgres: The Basics POSETTE 2024
Open Source Contributions to Postgres: The Basics POSETTE 2024Open Source Contributions to Postgres: The Basics POSETTE 2024
Open Source Contributions to Postgres: The Basics POSETTE 2024
 
Sample Devops SRE Product Companies .pdf
Sample Devops SRE  Product Companies .pdfSample Devops SRE  Product Companies .pdf
Sample Devops SRE Product Companies .pdf
 
Build applications with generative AI on Google Cloud
Build applications with generative AI on Google CloudBuild applications with generative AI on Google Cloud
Build applications with generative AI on Google Cloud
 
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
 
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
 
Telemetry Solution for Gaming (AWS Summit'24)
Telemetry Solution for Gaming (AWS Summit'24)Telemetry Solution for Gaming (AWS Summit'24)
Telemetry Solution for Gaming (AWS Summit'24)
 
一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理
一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理
一比一原版英国赫特福德大学毕业证(hertfordshire毕业证书)如何办理
 
Overview IFM June 2024 Consumer Confidence INDEX Report.pdf
Overview IFM June 2024 Consumer Confidence INDEX Report.pdfOverview IFM June 2024 Consumer Confidence INDEX Report.pdf
Overview IFM June 2024 Consumer Confidence INDEX Report.pdf
 
Namma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdf
Namma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdfNamma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdf
Namma-Kalvi-11th-Physics-Study-Material-Unit-1-EM-221086.pdf
 
Module 1 ppt BIG DATA ANALYTICS_NOTES FOR MCA
Module 1 ppt BIG DATA ANALYTICS_NOTES FOR MCAModule 1 ppt BIG DATA ANALYTICS_NOTES FOR MCA
Module 1 ppt BIG DATA ANALYTICS_NOTES FOR MCA
 
[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics May 2024
 
一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理
一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理
一比一原版(lbs毕业证书)伦敦商学院毕业证如何办理
 
8 things to know before you start to code in 2024
8 things to know before you start to code in 20248 things to know before you start to code in 2024
8 things to know before you start to code in 2024
 
一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理
一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理
一比一原版(uom毕业证书)曼彻斯特大学毕业证如何办理
 
一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理
一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理
一比一原版斯威本理工大学毕业证(swinburne毕业证)如何办理
 

1.1 1.4-introduction

  • 1. 1 Artificial Intelligence Planning Introduction and Overview Artificial Intelligence Planning •Introduction and Overview
  • 2. 2 Overview • What is Planning (in AI)? • A Conceptual Model for Planning • Planning and Search • Example Problems Overview What is Planning (in AI)? •now: what do we mean by (AI) planning? •A Conceptual Model for Planning •Planning and Search •Example Problems
  • 3. 3 Human Planning and Acting • acting without (explicit) planning: – when purpose is immediate – when performing well-trained behaviours – when course of action can be freely adapted • acting after planning: – when addressing a new situation – when tasks are complex – when the environment imposes high risk/cost – when collaborating with others • people plan only when strictly necessary Human Planning and Acting •humans rarely plan before acting in everyday situations •acting without (explicit) planning: (may be subconscious) •when purpose is immediate (e.g. switch on computer) •when performing well-trained behaviours (e.g. drive car) •when course of action can be freely adapted (e.g. shopping) •acting after planning: •when addressing a new situation (e.g. move house) •when tasks are complex (e.g. plan this course) •when the environment imposes high risk/cost (e.g. manage nuclear power station) •when collaborating with others (e.g. build house) •people plan only when strictly necessary •because planning is complicated and time-consuming (trade-off: cost vs. benefit) •often we seek only good rather than optimal plans
  • 4. 4 Defining AI Planning • planning: – explicit deliberation process that chooses and organizes actions by anticipating their outcomes – aims at achieving some pre-stated objectives • AI planning: – computational study of this deliberation process Defining AI Planning •planning: •explicit deliberation process that chooses and organizes actions by anticipating their outcomes •in short: planning is reasoning about actions •aims at achieving some pre-stated objectives •or: achieving them as best as possible (planning as optimization problem) •AI planning: •computational study of this deliberation process
  • 5. 5 Why Study Planning in AI? • scientific goal of AI: understand intelligence – planning is an important component of rational (intelligent) behaviour • engineering goal of AI: build intelligent entities – build planning software for choosing and organizing actions for autonomous intelligent machines Why Study Planning in AI? •scientific goal of AI: understand intelligence •planning is an important component of rational (intelligent) behaviour •planning is part of intelligent behaviour •engineering goal of AI: build intelligent entities •build planning software for choosing and organizing actions for autonomous intelligent machines •example: Mars explorer (cannot be remotely operated) •robot: Shakey, SRI 1968
  • 6. 6 Domain-Specific vs. Domain-Independent Planning • domain-specific planning: use specific representations and techniques adapted to each problem – important domains: path and motion planning, perception planning, manipulation planning, communication planning • domain-independent planning: use generic representations and techniques – exploit commonalities to all forms of planning – leads to general understanding of planning • domain-independent planning complements domain-specific planning Domain-Specific vs. Domain-Independent Planning •domain-specific planning: use specific representations and techniques adapted to each problem •important domains: path and motion planning, perception planning, manipulation planning, communication planning •domain-independent planning: use generic representations and techniques •exploit commonalities to all forms of planning •saves effort; no need to reinvent same techniques for different problems •leads to general understanding of planning •contributes to scientific goal of AI •domain-independent planning complements domain-specific planning •use domain-dependent planning where highly efficient solution is required
  • 7. Quiz: True or False? • people only plan when they have to because the benefit of an optimal plan does not always justify the effort of planning • for humans, planning is a subconscious process, which is why computational planning is so hard • planning involves a mental simulation of actions to foresee future world states and compare them to goals • in Artificial Intelligence, planning is concerned with the search for computationally optimal plans • domain-specific planning is used when efficiency is vital, whereas domain-independent planning is good for planning from first principles 7
  • 8. 8 Overview • What is Planning (in AI)? • A Conceptual Model for Planning • Planning and Search • Example Problems Overview •What is Planning (in AI)? •just done: what do we mean by (AI) planning? A Conceptual Model for Planning •now: state-transition systems – formalizing the problem •Planning and Search •Example Problems
  • 9. 9 Why a Conceptual Model? • conceptual model: theoretical device for describing the elements of a problem • good for: – explaining basic concepts – clarifying assumptions – analyzing requirements – proving semantic properties • not good for: – efficient algorithms and computational concerns Why a Conceptual Model? •conceptual model: theoretical device for describing the elements of a problem •good for: •explaining basic concepts: what are the objects to be manipulated during problem-solving? •clarifying assumptions: what are the constraints imposed by this model? •analyzing requirements: what representations do we need for the objects? •proving semantic properties: when is an algorithm sound or complete? •not good for: •efficient algorithms and computational concerns
  • 10. 10 Conceptual Model for Planning: State-Transition Systems • A state-transition system is a 4-tuple Σ = (S,A,E,γ), where: – S = {s1,s2,…} is a finite or recursively enumerable set of states; – A = {a1,a2,…} is a finite or recursively enumerable set of actions; – E = {e1,e2,…} is a finite or recursively enumerable set of events; and – γ: S×(A∪E)→2S is a state transition function. • if a∈A and γ(s,a) ≠ ∅ then a is applicable in s • applying a in s will take the system to s′∈γ(s,a) Conceptual Model for Planning: State-Transition Systems •A state-transition system is a 4-tuple Σ=(S,A,E,γ), where: •a general model for a dynamic system, common to other areas of computer science; aka. dynamic-event system •S = {s1,s2,…} is a finite or recursively enumerable set of states; •the possible states the world can be in •A = {a1,a2,…} is a finite or recursively enumerable set of actions; •the actions that can be performed by some agent in the world, transitions are controlled by the plan executor •E = {e1,e2,…} is a finite or recursively enumerable set of events; and •the events that can occur in the world, transitions that are contingent (correspond to the internal dynamics of the system) •γ: S×(A∪E)→2S is a state transition function. •notation: 2S =powerset of S; maps to a set of states •the function describing how the world evolves when actions or events occur •note: model does not allow for parallelism between actions and/or events •if a∈A and γ(s,a) ≠ ∅ then a is applicable in s •applying a in s will take the system to s′∈γ(s,a)
  • 11. 11 State-Transition Systems as Graphs • A state-transition system Σ = (S,A,E,γ) can be represented by a directed labelled graph G = (NG,EG) where: – the nodes correspond to the states in S, i.e. NG=S; and – there is an arc from s∈NG to s′∈NG, i.e. s→s′∈EG, with label u∈(A∪E) if and only if s′∈γ(s,u). State-Transition Systems as Graphs •A state-transition system Σ=(S,A,E,γ) can be represented by a directed labelled graph G=(NG,EG) where: •the nodes correspond to the states in S, i.e. NG=S; and •nodes correspond to world states •there is an arc from s∈NG to s′∈NG, i.e. s→s′∈EG, with label a∈(A∪E) if and only if s′∈γ(s,a). •there is an arc if there is an action or event that transforms one state into the other (called a state transition) •the label of that arc is that action or event
  • 12. Toy Problem: Missionaries and Cannibals On one bank of a river are three missionaries (black triangles) and three cannibals (red circles). There is one boat available that can hold up to two people and that they would like to use to cross the river. If the cannibals ever outnumber the missionaries on either of the river’s banks, the missionaries will get eaten. How can the boat be used to safely carry all the missionaries and cannibals across the river? Toy Problem: Missionaries and Cannibals •On one bank of a river are three missionaries (black triangles) and three cannibals (red circles). •description of initial state and graphical representation; other states must follow the same pattern •There is one boat available that can hold up to two people and that they would like to use to cross the river. •more state description: boat; implicitly: description of possible action: cross river using boat •If the cannibals ever outnumber the missionaries on either of the river’s banks, the missionaries will get eaten. •constraint on all states; must not be violated •How can the boat be used to safely carry all the missionaries and cannibals across the river? •problem: find a sequence of action that brings about another state
  • 13. 13 State-Transition Graph Example: Missionaries and Cannibals 1c 1m 1c 2c 1c 2c 1c 2m 1m 1c 1m 1c 1c 2c 1m 2m 1c 2c 1c 1m State-Transition Graph Example: Missionaries and Cannibals •On one bank of a river are three missionaries (black triangles) and three cannibals (red circles). There is one boat available that can hold up to two people and that they would like to use to cross the river. If the cannibals ever outnumber the missionaries on either of the river’s banks, the missionaries will get eaten. How can the boat be used to safely carry all the missionaries and cannibals across the river?
  • 14. 14 Objectives and Plans • state-transition system: – describes all ways in which a system may evolve • plan: – a structure that gives appropriate actions to apply in order to achieve some objective when starting from a given state • types of objective: – goal state sg or set of goal states Sg – satisfy some conditions over the sequence of states – optimize utility function attached to states – task to be performed Objectives and Plans •state-transition system: •describes all ways in which a system may evolve •plan: •a structure that gives appropriate actions to apply in order to achieve some objective when starting from a given state •structure: e.g. sequential list of actions to be performed in order; function mapping states to actions •describes a path through the state-transition graph •types of objective: •goal state sg or set of goal states Sg •simplest case; objective achieved by sequence of transitions that ends in one of the goal states •satisfy some conditions over the sequence of states •example: states to be avoided or visited (goal state is special case) •optimize utility function attached to states •optimize compound function (sum, max) of utilities of visited states •task to be performed •alternative approach: recursively defined set of actions and (other) tasks
  • 15. 15 Planning and Plan Execution • planner: – given: description of Σ, initial state, objective – generate: plan that achieves objective • controller: – given: plan, current state (observation function: η:S→O) – generate: action • state-transition system: – evolves as actions are executed and events occur Planner Controller System Σ Initial State Objectives Description of Σ Events Plan ActionsObservations Planning and Plan Execution •planner: •given: description of Σ, initial state, objective •generate: plan that achieves objective •planner works offline: relies on formal description of Σ •controller: •given: plan, current state (observation function: η:S→O) •partial knowledge of controller about world modelled through observation function •O = set of possible observations (e.g. subset); input for controller •generate: action •controller works online: along with the dynamics of Σ •state-transition system: •evolves as actions are executed and events occur
  • 16. Dynamic Planning • problem: real world differs from model described by Σ • more realistic model: interleaved planning and execution – plan supervision – plan revision – re-planning • dynamic planning: closed loop between planner and controller – execution status Planner Controller System Σ Initial State Objectives Description of Σ Events Plans ActionsObservations Execution Status Dynamic Planning •problem: physical system differs from model described by Σ •planner only has access to model (description of Σ) •controller must cope with differences between Σ and real world •more realistic model: interleaved planning and execution •plan supervision: detect when observations differ from expected results •plan revision: adapt existing plan to new circumstances •re-planning: generate a new plan from current (initial) state •dynamic planning: closed loop between planner and controller •execution status
  • 17. 17 Overview • What is Planning (in AI)? • A Conceptual Model for Planning • Planning and Search • Example Problems Overview •What is Planning (in AI)? •A Conceptual Model for Planning •just done: state-transition systems – formalizing the problem Planning and Search •now: problem formulation and search problems •Example Problems
  • 18. Toy Problems vs. Real-World Problems Toy Problems/Puzzles – concise and exact description – used for illustration purposes (e.g. here) – used for performance comparisons Real-World Problems – no single, agreed-upon description – people care about the solutions Toy Problems vs. Real-World Problems •Toy Problems/Puzzles •concise and exact description •used for illustration purposes (e.g. here) •used for performance comparisons •toy problems combine •richness: they are anything but trivial to solve •simplicity: they are described easily and precisely •Real-World Problems •no single, agreed-upon description •people care about the solutions •most real-world problems would take hours to define
  • 19. Search Problems • initial state • set of possible actions/applicability conditions – successor function: state set of <action, state> – successor function + initial state = state space – path (solution) • goal – goal state or goal test function • path cost function – for optimality – assumption: path cost = sum of step costs Search Problems •initial state: current state the world is in (state = situation) •states: symbol structures representing real world objects and relations physical symbols systems •finite set of possible actions (aka. operators or production rules (problem formulation))/applicability conditions •successor function: state set of <action, state>: action is applicable in given state; result of applying action in given state is paired state •successor function + initial state = state space: directed graph with states as nodes and actions as arcs •path (in the graph) (solution) •goal (goal formulation) •goal state (for unique goal state) or goal test function (for multiple goal states (e.g. in chess)) •Solution: path in state space from initial state to goal state •path cost function •for optimality: find solution path with minimal path cost •assumption: path cost = sum of step costs (cost of applying a given action in a given state)
  • 20. Missionaries and Cannibals: Successor Function {<1m1c, (L:3m,3c,b-R:0m,0c)>, <1m, (L:3m,2c,b-R:0m,1c)>} (L:2m,2c-R:1m,1c,b) {<2c, (L:3m,3c,b-R:0m,0c)>, <1c, (L:3m,2c,b-R:0m,1c)>} (L:3m,1c-R:0m,2c,b) {<2c, (L:3m,1c-R:0m,2c,b)>, <1m1c, (L:2m,2c-R:1m,1c,b)>, <1c, (L:3m,2c-R:0m,1c,b)>} (L:3m,3c,b-R:0m,0c) set of <action, state>state Missionaries and Cannibals: Successor Function •state set of <action, state> (domain and range: set of pairs) •(L:3m,3c,b-R:0m,0c) {<2c, (L:3m,1c-R:0m,2c,b)>, <1m1c, (L:2m,2c- R:1m,1c,b)>, <1c, (L:3m,2c-R:0m,1c,b)>} •states: •L/R: on left/right bank •m/c: missionaries/cannibals (example: 3 missionaries and three cannibals on left bank, none on right bank) •b: boat (example: boat on left bank) •actions: •m/c: missionaries/cannibals crossing (example(s): 2 cannibals crossing (L to R), 1m and 1c crossing; 1c crossing) •(L:3m,1c-R:0m,2c,b) {<2c, (L:3m,3c,b-R:0m,0c)>, <1c, (L:3m,2c,b-R:0m,1c)>} (note: only two actions applicable) •(L:2m,2c-R:1m,1c,b) {<1m1c, (L:3m,3c,b-R:0m,0c)>, <1m, (L:3m,2c,b- R:0m,1c)>}
  • 21. Real-World Problem: Touring in Romania Oradea Bucharest Fagaras Pitesti Neamt Iasi Vaslui Urziceni Hirsova Eforie Giurgiu Craiova Rimnicu Vilcea Sibiu Dobreta Mehadia Lugoj Timisoara Arad Zerind 120 140 151 75 70 111 118 75 71 85 90 211 101 97 138 146 80 99 87 92 142 98 86 Real-World Problem: Touring in Romania •shown: rough map of Romania •initial state: on vacation in Arad, Romania •goal? actions? -- “Touring Romania” cannot readily be described in terms of possible actions, goals, and path cost
  • 22. Problem Formulation • problem formulation – process of deciding what actions and states to consider – granularity/abstraction level • assumptions about the environment: – finite and discrete – fully observable – deterministic – static • other assumptions: – restricted goals – sequential plans – implicit time – offline planning Problem Formulation •problem formulation •process of deciding what actions and states to consider •granularity/abstraction level •TR example: •possible actions: turn steering wheel by one degree left/right; move left foot … too much (irrelevant) detail, uncertainty, too many steps in solution •possible action: drive to Bucharest How to execute such an action? •best level of granularity: drive to another major town to which there is a direct road •assumptions about the environment: •finite and discrete: we can enumerate all the possible actions and states (else: continuous) •fully observable: agent can see what current state it is in •deterministic: applying an action in a given state leads to exactly one (usually different) state •static: environment does not change while we think (no events) •other assumptions: •restricted goals: given as an explicit goal state sg or set of goal states Sg •sequential plans: solution plan is a linearly ordered finite sequence of actions •implicit time: actions and events have no duration •offline planning: planner is not concerned with changes of Σ while planning
23
Search Nodes
• search nodes: the nodes in the search tree
• data structure:
– state: a state in the state space
– parent node: the immediate predecessor in the search tree
– action: the action that, performed in the parent node’s state, leads to this node’s state
– path cost: the total cost of the path leading to this node
– depth: the depth of this node in the search tree
Search Nodes
•search nodes: the nodes in the search tree
•a node is a bookkeeping structure in a search tree
•data structure:
•state: a state in the state space
•a state (vs. a node) corresponds to a configuration of the world
•two nodes may contain equal states
•parent node: the immediate predecessor in the search tree
•nodes lie on paths (defined by parent nodes)
•action: the action that, performed in the parent node’s state, leads to this node’s state
•path cost: the total cost of the path leading to this node
•depth: the depth of this node in the search tree
•alternative: representing only paths (sequences of actions)
•possible, but the state provides direct access to valuable information that might be expensive to regenerate all the time
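The node data structure can be written down directly; here is one possible Python sketch (not from the slides), with field names following the slide and a helper that recovers the solution path via the parent links.

# Sketch of the search-node bookkeeping structure described above.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class SearchNode:
    state: Any                              # a state in the state space
    parent: Optional["SearchNode"] = None   # immediate predecessor in the search tree
    action: Any = None                      # action leading from parent.state to state
    path_cost: float = 0.0                  # total cost of the path to this node
    depth: int = 0                          # depth of this node in the search tree

def path_to(node):
    """Recover the action sequence by following parent links back to the root."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))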
24
General Tree Search Algorithm
function treeSearch(problem, strategy)
  fringe ← { new searchNode(problem.initialState) }
  loop
    if empty(fringe) then return failure
    node ← selectFrom(fringe, strategy)
    if problem.goalTest(node.state) then return pathTo(node)
    fringe ← fringe + expand(problem, node)
General Tree Search Algorithm
function treeSearch(problem, strategy)
•find a solution to the given problem while expanding nodes according to the given strategy
fringe ← { new searchNode(problem.initialState) }
•fringe: set of nodes generated but not yet expanded; initially just a node for the initial state
loop
•possibly infinite loop that expands nodes
if empty(fringe) then return failure
•complete tree explored; no goal state found
node ← selectFrom(fringe, strategy)
•select a node from the fringe according to the search control strategy; the node will not be selected again
if problem.goalTest(node.state) then
•goal test before expansion: to avoid trick problems like “get from Arad to Arad”
return pathTo(node)
•success: goal node found
fringe ← fringe + expand(problem, node)
•otherwise: add the new nodes to the fringe and continue the loop
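A runnable Python version of this pseudocode might look as follows; it is a sketch that assumes the SearchProblem and SearchNode sketches given earlier, and it takes the strategy as a function that selects (and removes) a node from the fringe.

# Runnable sketch of treeSearch, assuming the SearchProblem and SearchNode
# sketches above.

def tree_search(problem, strategy):
    fringe = [SearchNode(problem.initial_state)]
    while True:
        if not fringe:                        # complete tree explored
            return None                       # failure
        node = strategy(fringe)               # select (and remove) a node from the fringe
        if problem.goal_test(node.state):     # goal test before expansion
            return path_to(node)              # success: return the solution path
        fringe.extend(expand(problem, node))  # otherwise add successors and continue

def expand(problem, node):
    """One child node per (action, state) pair of the successor function."""
    children = []
    for action, state in problem.successors(node.state):
        cost = node.path_cost + problem.step_cost(node.state, action, state)
        children.append(SearchNode(state, node, action, cost, node.depth + 1))
    return children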
25
Search (Control) Strategy
• search or control strategy: an effective method for scheduling the application of the successor function to expand nodes
– selects the next node to be expanded from the fringe
– determines the order in which nodes are expanded
– aim: produce a goal state as quickly as possible
• examples:
– LIFO/FIFO-queue for fringe nodes
– alphabetical ordering
Search (Control) Strategy
•search or control strategy: an effective method for scheduling the application of the successor function to expand nodes
•removes non-determinism from the search method
•selects the next node to be expanded from the fringe
•closed nodes never need to be expanded again
•determines the order in which nodes are expanded
•the exact order makes the method deterministic
•aim: produce a goal state as quickly as possible
•a strategy that produces a goal state more quickly is usually considered better
•examples:
•LIFO/FIFO-queue for fringe nodes (two fundamental search strategies)
•alphabetical ordering
•remark: the complete search tree is usually too large to fit into memory; the strategy determines which part to generate
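The LIFO/FIFO examples can be expressed as selection functions for the tree_search sketch above; this is an illustrative sketch, not part of the slides. A FIFO fringe gives breadth-first behaviour, a LIFO fringe gives depth-first behaviour.

# Two fundamental strategies as selection functions for tree_search.

def fifo(fringe):
    return fringe.pop(0)    # oldest node first (queue) -> breadth-first

def lifo(fringe):
    return fringe.pop()     # newest node first (stack) -> depth-first

# usage with the earlier sketches (hypothetical problem instance):
# solution = tree_search(problem, fifo)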
26
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•Planning and Search
•just done: problem formulation and search problems
•Example Problems
•now: more examples incl. the DWR problem
27
Toy Problem: Sliding-Block Puzzle
The 8-Puzzle
[figure: an example start state and the goal state of the 8-puzzle]
Toy Problem: Sliding-Block Puzzle
•The 8-Puzzle
•problem description:
•states: location of each tile and the blank (9!/2 = 181440 reachable states)
•initial state: any (reachable) state, e.g. the one shown on the left (worst case: the solution requires at least 31 steps)
•actions: good formulation: move the blank left/right/up/down
•goal test: goal state (shown on the right)
•step cost: 1
•generalization: sliding-tile puzzles, e.g. 15-puzzle, 24-puzzle
•problem class is NP-complete
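The "move the blank" formulation translates directly into a successor function; the sketch below (not from the slides) encodes a state as a 9-tuple read row by row with 0 for the blank, and the goal layout shown is only one common choice, not necessarily the one on the slide.

# Sketch of the 8-puzzle successor function for the "move the blank" formulation.

def successors_8puzzle(state):
    i = state.index(0)                          # position of the blank
    row, col = divmod(i, 3)
    moves = {"up": -3, "down": 3, "left": -1, "right": 1}
    result = set()
    for action, delta in moves.items():
        if action == "up" and row == 0: continue
        if action == "down" and row == 2: continue
        if action == "left" and col == 0: continue
        if action == "right" and col == 2: continue
        j = i + delta
        new_state = list(state)
        new_state[i], new_state[j] = new_state[j], new_state[i]
        result.add((action, tuple(new_state)))
    return result

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)              # illustrative goal layout
print(successors_8puzzle(goal))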
28
Toy Problem: N-Queens Problem
Place n queens on an n by n chess board such that none of the queens attacks any of the others.
Toy Problem: N-Queens Problem
•Place n queens on an n by n chess board such that none of the queens attacks any of the others.
•problem: no row, column, or diagonal may contain more than one queen
•path cost irrelevant; we are only looking for a goal state
•the configuration shown is not a goal state
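Since only the goal state matters here, the interesting part is the goal test; the sketch below (not from the slides) represents a candidate placement as one column index per row, so distinct rows hold by construction and only columns and diagonals need checking.

# Sketch of an n-queens goal test.

def is_goal(queens):
    """queens[r] = column of the queen in row r."""
    n = len(queens)
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            same_col = queens[r1] == queens[r2]
            same_diag = abs(queens[r1] - queens[r2]) == abs(r1 - r2)
            if same_col or same_diag:
                return False
    return True

print(is_goal([1, 3, 0, 2]))   # a 4-queens solution -> True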
29
The Dock-Worker Robots (DWR) Domain
• aim: have one example to illustrate planning procedures and techniques
• informal description:
– harbour with several locations (docks), docked ships, storage areas for containers, and parking areas for trucks and trains
– cranes to load and unload ships etc., and robot carts to move containers around
The Dock-Worker Robots (DWR) Domain
•aim: have one example to illustrate planning procedures and techniques
•the problem must be nontrivial to be interesting, but must not require too much overhead to introduce
•informal description:
•generalization of the earlier example (state-transition graph)
•harbour with several locations (docks), docked ships, storage areas for containers, and parking areas for trucks and trains
•cranes to load and unload ships etc., and robot carts to move containers around
•a simplified and enriched version of this domain will be introduced later
•approach: use first-order predicate logic as representation
30
DWR Example State
[figure: an example DWR state with two locations (l1, l2), robot r1, cranes k1 and k2, containers ca–cf stacked on pallets in piles p1 and q1, and two empty piles p2 and q2]
31
Actions in the DWR Domain
• move robot r from location l to some adjacent and unoccupied location l’
• take container c with empty crane k from the top of pile p, all located at the same location l
• put down container c held by crane k on top of pile p, all located at location l
• load container c held by crane k onto unloaded robot r, all located at location l
• unload container c with empty crane k from loaded robot r, all located at location l
Actions in the DWR Domain
•move robot r from location l to some adjacent and unoccupied location l’
•take container c with empty crane k from the top of pile p, all located at the same location l
•put down container c held by crane k on top of pile p, all located at location l
•load container c held by crane k onto unloaded robot r, all located at location l
•unload container c with empty crane k from loaded robot r, all located at location l
•formal specifications will follow once we have introduced a formal action description language
•problem: how to represent actions formally? first-order logic?
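The formal action description language comes later in the course; purely as a hedged preview, the move action could be written down as a set of preconditions and effects roughly as below. The predicate names are my own illustrative assumptions, not the formal DWR specification.

# Illustrative preview only: one way to note down the DWR "move" action
# as preconditions and effects (predicate names are assumptions).

move = {
    "name": "move(r, l, l2)",
    "preconditions": ["adjacent(l, l2)", "at(r, l)", "not occupied(l2)"],
    "effects": ["at(r, l2)", "occupied(l2)", "not at(r, l)", "not occupied(l)"],
}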
32
State-Transition Systems: Graph Example
[figure: state-transition graph with states s0–s5, each showing the positions of the robot, crane, container, and pallet at location1/location2, connected by arcs labelled take, put, load, unload, move1, and move2]
State-Transition Systems: Graph Example
•states: s0 to s5
•objects: robot, crane, container, pallet, two locations
•actions:
•the crane can take the container from the pallet / put it onto the pallet
•the crane can load/unload the container onto/from the robot
•the robot can drive to either location (with or without the container loaded)
•no events
•state-transition function: arcs shown in the graph
•note: state transitions are deterministic: each action leads to at most one other state
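One way to store such a deterministic state-transition function is as a partial mapping from (state, action) pairs to successor states; the sketch below is illustrative only, and the two entries are placeholders rather than arcs read off the figure.

# Hedged sketch of a deterministic state-transition function.

gamma = {
    ("s0", "take"): "s1",    # placeholder arc: crane takes the container
    ("s1", "put"): "s0",     # placeholder arc: crane puts the container back
    # remaining (state, action) pairs would be filled in from the graph
}

def apply_action(state, action):
    """Deterministic: an applicable action leads to exactly one successor state."""
    return gamma.get((state, action))    # None if the action is not applicable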
33
Overview
• What is Planning (in AI)?
• A Conceptual Model for Planning
• Planning and Search
• Example Problems
Overview
•What is Planning (in AI)?
•A Conceptual Model for Planning
•Planning and Search
•Example Problems
•just done: more examples incl. the DWR problem