### Presentation on Artificial Intelligence Planning



Uploaded via SlideShare as Microsoft PowerPoint

© All Rights Reserved

- 1. Perspectives on Artificial Intelligence Planning. Based on the research paper by Professor Héctor Geffner.
- 2. General problem solvers. A general problem solver is a program that accepts high-level descriptions of problems and automatically computes their solution.
- 3. Problem solvers con. [Diagram: a general problem solver combines a modeling language with general algorithms.]
- 4. What is planning? Planning is a key area in Artificial Intelligence. In its general form, planning is concerned with the automatic synthesis of action strategies (plans) from a description of actions, sensors, and goals.
- 5. Elements of Planning: (1) representation languages for describing problems conveniently; (2) mathematical models for making the different planning tasks precise; (3) algorithms for solving these models effectively.
- 6. Models. Models are needed to define the scope of a planner: what is a planning problem, what is a solution (plan), and what is an optimal solution.
- 7. Classical Planning. Classical planning can be understood in terms of deterministic state models characterized by the following elements: a finite and discrete state space S; an initial situation given by a state s0 ∈ S; a goal situation given by a non-empty set SG ⊆ S; actions A(s) ⊆ A applicable in each state s ∈ S; a deterministic state transition function f(a, s) for a ∈ A(s); and positive action costs c(a, s) for doing action a in s.
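The state model on this slide translates almost verbatim into a search procedure. A minimal sketch (the one-dimensional toy domain below is invented for illustration; unit costs are assumed, so breadth-first search returns a cost-optimal plan):

```python
from collections import deque

def solve(s0, goals, actions, applicable, f):
    """Breadth-first search over a deterministic state model:
    initial state s0, goal set, applicable actions A(s), and
    transition function f(a, s). With unit costs, the first
    plan found is also cost-optimal."""
    frontier = deque([(s0, [])])
    seen = {s0}
    while frontier:
        s, plan = frontier.popleft()
        if s in goals:
            return plan
        for a in applicable(s, actions):
            s2 = f(a, s)
            if s2 not in seen:
                seen.add(s2)
                frontier.append((s2, plan + [a]))
    return None  # no plan exists

# Invented toy domain: a token on positions 0..4 that must reach 3.
actions = ['left', 'right']
step = lambda a, s: s + (1 if a == 'right' else -1)
applicable = lambda s, A: [a for a in A if 0 <= step(a, s) <= 4]
print(solve(0, {3}, actions, applicable, step))  # ['right', 'right', 'right']
```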
- 8. Classical Planning con.
- 9. Planning with Uncertainty. Unlike in classical planning, here the state of the system is not fully known and state transitions are nondeterministic, so we have to define how uncertainty is modeled. We also have to take sensing and feedback into account.
- 10. Modeling Uncertainty. Pure nondeterminism: uncertainty about the state of the world is represented by the set of states S' ⊆ S that are deemed possible. Probability: uncertainty is represented by a probability distribution over S.
- 11. Modeling uncertainty in state transitions
- 12. Planning under uncertainty without feedback. In both cases above, the problem of planning under uncertainty without feedback reduces to a deterministic search problem in belief space, a space characterized by the following elements: a space B of belief states over S; an initial situation given by a belief state b0 ∈ B; a goal situation given by a set of target beliefs BG; actions A(b) ⊆ A applicable in each belief state b; deterministic transitions from b to ba for a ∈ A(b), given by the two uncertainty models above; and positive action costs c(a, b).
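Under pure nondeterminism, the deterministic transitions in belief space can be sketched directly: although individual actions are nondeterministic, the mapping from a belief to the successor belief is a function. (The 'step' action and the integer domain below are invented for illustration.)

```python
def progress(b, a, F):
    """Progress a belief state b (a frozenset of the states deemed
    possible) through action a under pure nondeterminism: F(a, s)
    returns the set of possible successors of s, and the successor
    belief ba is the union of the outcomes over all states in b."""
    return frozenset(s2 for s in b for s2 in F(a, s))

# Invented example: a 'step' action that moves +1 or +2 nondeterministically.
F = lambda a, s: {s + 1, s + 2} if a == 'step' else {s}
b0 = frozenset({0})
b1 = progress(b0, 'step', F)
print(sorted(b1))                       # [1, 2]
print(sorted(progress(b1, 'step', F)))  # [2, 3, 4]
```

Conformant planning then amounts to searching this belief space for a belief b contained in the set of goal states.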
- 13. Planning with Sensing. With the ability to sense the world, the choice of actions depends on the observations gathered, and thus the form of the plans changes.
- 14. Planning with Sensing con. Full-state observability: in the presence of sensing, the choice of the action ai at time i depends on all observations o0, o1, . . . , oi−1 gathered up to that point. Partial observability: observations reveal partial information about the true state of the world, and it is necessary to model how the two are related. The solution then takes the form of functions mapping belief states into actions, as states are no longer known and belief states summarize all the information from previous belief states and partial observations.
- 15. Planning with Sensing con.
- 16. Temporal Planning. Temporal models extend classical planning in a different direction. This is a simple but general model where actions have durations and their execution can overlap in time. We assume a duration d(a) > 0 for each action a, and a predicate comp(A) that defines when a set of actions A can be executed concurrently.
- 17. Model for temporal planning. We need to replace the single actions a in the classical model by sets of legal actions {A0, A1, A2, . . .}. Each set Ai starts its execution at the same time ti; the end or completion time of an action a in Ai is thus ti + d(a), where d(a) is the duration of a. Finally, t0 = 0, and ti+1 is given by the end time of the first action in A0, . . . , Ai that completes after ti.
- 18. Model for temporal planning con. The initial state s0 is given, while si+1 is a function of the state si at time ti and the set of actions in the plan that complete exactly at time ti+1; i.e., si+1 = fT(Ai, si). The state transition function fT is obtained from the representation of the individual actions. A valid temporal plan is a sequence of legal sets of actions mapping the initial state into a goal state.
- 19. Model for temporal planning con.
- 20. Temporal planning vs. sequential planning. Though the model for sequential planning and the model for temporal planning appear close from a mathematical point of view, they are quite different from a computational point of view. While heuristic search is probably the best current approach for optimal and non-optimal sequential planning, it does not represent the best approach for parallel planning.
- 21. Languages. In large problems, the state space and state transitions need to be represented implicitly in a logical action language, normally through a set of (state) variables and action rules. A good action language is one that supports compact encodings of the models of interest.
- 22. Strips. In AI planning, the standard language for many years has been the Strips language, introduced in 1971 by Fikes & Nilsson. While from a logical point of view Strips is a very limited language, it is well known and helps to illustrate the relationship between planning languages and planning models, and to motivate some of the extensions that have been proposed.
- 23. Strips con. [Diagram: Strips comprises a state language and an operator language.]
- 24. Elements of Strips. The Strips language L is a simple logical language made up of two types of symbols: relational and constant symbols. E.g., in on(a, b), on is a relational symbol and a, b are constant symbols. In Strips there are no functional symbols, so the constant symbols are the only terms.
- 25. Elements of Strips con. Atoms: a combination p(t1, . . . , tk) of a relational symbol p and a tuple of terms ti matching the arity of p. Operators: defined over the set of atoms in L; each operator op has precondition, add, and delete lists Prec(op), Add(op), and Del(op), given by sets of atoms.
- 26. Planning problems in Strips. P = <A, O, I, G>, where A stands for the set of all atoms in the domain, O is the set of operators, and I and G are sets of atoms defining the initial and goal situations.
- 27. Planning problems in Strips con. The problem P defines a deterministic state model S(P) as follows: the states s are sets of atoms from A; the initial state s0 is I; the goal states are the states s such that G ⊆ s; A(s) is the set of operators o ∈ O such that Prec(o) ⊆ s; the state transition function f is such that f(a, s) = s + Add(a) − Del(a) for a ∈ A(s); and the costs c(a, s) are all equal to 1.
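The Strips semantics above maps directly into code. A minimal sketch, with an invented two-operator toy problem (note that, as usual, deletes are applied before adds):

```python
def applicable(o, s, prec):
    """Operator o is applicable in state s (a frozenset of atoms) iff Prec(o) is a subset of s."""
    return prec[o] <= s

def progress(o, s, add, delete):
    """Strips transition f(o, s) = s + Add(o) - Del(o),
    with deletes applied before adds."""
    return (s - delete[o]) | add[o]

# Invented toy problem: operator a1 turns p into q; a2 achieves g from q.
prec   = {'a1': {'p'}, 'a2': {'q'}}
add    = {'a1': {'q'}, 'a2': {'g'}}
delete = {'a1': {'p'}, 'a2': set()}

s = frozenset({'p'})          # initial state I = {p}
for o in ['a1', 'a2']:
    assert applicable(o, s, prec)
    s = progress(o, s, add, delete)
print(sorted(s))  # ['g', 'q']  -- a goal G = {g} holds, since G is a subset of s
```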
- 28. Advanced languages. Domain-independent planners with expressive state and action languages, such as GPT (Bonet & Geffner 2000) and MBP (Bertoli et al. 2001), have been introduced; both provide additional constructs for expressing nondeterminism and sensing. Knowledge-based planners have also been introduced, which provide very rich modeling languages, often including facilities for representing time and resources.
- 29. Computation. This relates to the algorithms and techniques used to solve planning problems.
- 30. Heuristic Search. Simple and powerful; explains the success of recent approaches. It maps planning problems into search problems, explicitly searching the state space with a heuristic h(s) that estimates the cost from s to the goal. The heuristic h is extracted automatically from the problem representation, and A*, IDA*, etc. are used to plan.
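The planning-as-heuristic-search idea can be illustrated with a generic A* skeleton (the line-walking domain and heuristic below are invented for illustration; with an admissible h, the returned plan is cost-optimal):

```python
import heapq

def astar(s0, goal, succ, h):
    """Plain A* over a deterministic state model: succ(s) yields
    (action, successor, cost) triples, and h(s) estimates the cost
    from s to the goal. With an admissible h the plan is optimal."""
    frontier = [(h(s0), 0, s0, [])]
    best_g = {s0: 0}
    while frontier:
        f, g, s, plan = heapq.heappop(frontier)
        if s == goal:
            return plan, g
        for a, s2, c in succ(s):
            if s2 not in best_g or g + c < best_g[s2]:
                best_g[s2] = g + c
                heapq.heappush(frontier, (g + c + h(s2), g + c, s2, plan + [a]))
    return None

# Invented example: walk along the integers from 0 to 5;
# h(s) = |5 - s| is the exact remaining distance, hence admissible.
succ = lambda s: [('right', s + 1, 1), ('left', s - 1, 1)]
plan, cost = astar(0, 5, succ, lambda s: abs(5 - s))
print(plan, cost)  # ['right', 'right', 'right', 'right', 'right'] 5
```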
- 31. Heuristic Functions. Heuristic search depends on the choice of the heuristic function. Heuristics are derived as the optimal cost function of relaxed problems. Simple relaxations used in planning: ignore delete lists, reduce large goal sets to subsets, ignore certain atoms.
- 32. Additive Heuristic [equations shown on the slide; not captured in the transcript]
- 33. Additive Heuristic. Assumes atoms are independent. The heuristic is not admissible (not a lower bound), so it is not suited to optimal planning, but it is informative and fast. The heuristic of a state s is h(G; s), where G is the goal, and it is obtained by solving the corresponding shortest-path equations with single-source shortest-path algorithms.
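The equations the slide refers to are not in the transcript; the standard additive-heuristic equations set h(p; s) = 0 if p ∈ s, and otherwise h(p; s) = min over operators a that add p of [1 + h(Prec(a); s)], with the cost of a set of atoms taken as the sum of its members' costs. A sketch computing this by fixpoint iteration (unit costs and the toy operators are assumptions):

```python
INF = float('inf')

def h_add(s, goal, ops):
    """Additive heuristic (sketch): the cost of an atom p is 0 if p is
    in s, else the minimum over operators adding p of 1 + the SUM of
    the costs of their preconditions (unit costs; atoms treated as
    independent). Computed by Bellman-Ford-style fixpoint iteration.
    ops is a list of (preconditions, add-list) pairs of sets."""
    atoms = set().union(s, goal, *[pr | ad for pr, ad in ops])
    cost = {p: (0 if p in s else INF) for p in atoms}
    changed = True
    while changed:
        changed = False
        for prec, add in ops:
            c = sum(cost[p] for p in prec)   # additive independence assumption
            for p in add:
                if 1 + c < cost[p]:
                    cost[p] = 1 + c
                    changed = True
    return sum(cost[p] for p in goal)

# Invented chain: a1: {} => {p};  a2: {p} => {q}.  Goal {p, q} from {}.
ops = [(set(), {'p'}), ({'p'}, {'q'})]
print(h_add(frozenset(), {'p', 'q'}, ops))  # 3  (cost(p) = 1, cost(q) = 2)
```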
- 34. Fast Forward Heuristic. Does not assume that atoms are independent; solves the relaxation suboptimally and extracts useful information for guiding hill-climbing search.
- 35. Max Heuristic
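The equation on this slide is not in the transcript; the max heuristic it refers to is standardly obtained from the additive equations by replacing the sum over a set of atoms with a maximization (O(p) denotes the operators adding p):

```latex
h_{\max}(p; s) =
  \begin{cases}
    0 & \text{if } p \in s \\[2pt]
    \min_{a \in O(p)} \bigl[\, 1 + h_{\max}(\mathrm{Prec}(a); s) \,\bigr] & \text{otherwise}
  \end{cases}
\qquad
h_{\max}(C; s) = \max_{p \in C} h_{\max}(p; s)
```

Unlike the additive heuristic, hmax is admissible (as slide 36 notes for m = 1), but it is typically much less informative.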
- 36. Generalisation: hm heuristics. For fixed m = 1, 2, . . ., assume the cost of achieving a set C is given by the cost of the most costly subset of size m. For m = 1, hm = hmax; for m = 2, hm = hG (Graphplan); for any m, hm is admissible and polynomial.
- 37. Pattern databases. Project the state space S of a problem onto a smaller state space S'. S' can be solved optimally or exhaustively, and the heuristic h(s) for the original state space is obtained from the solution cost h'∗(s') of the projected state s' in the relaxed space.
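The projection idea can be sketched as a lookup table built by exhaustive search in the abstract space (the projection s' = s // 2 and the move structure below are invented for illustration):

```python
from collections import deque

def build_pdb(goal_proj, succ_proj):
    """Pattern-database sketch: solve the projected (abstracted) state
    space exhaustively by breadth-first search from the projected goal,
    recording the exact cost-to-goal in the abstraction. Assumes unit
    costs and reversible moves, so backward search from the goal can
    reuse the forward successor function."""
    dist = {goal_proj: 0}
    q = deque([goal_proj])
    while q:
        s = q.popleft()
        for s2 in succ_proj(s):
            if s2 not in dist:
                dist[s2] = dist[s] + 1
                q.append(s2)
    return dist

# Invented abstraction: original states 0..9 are projected by s' = s // 2
# onto abstract states 0..4; abstract moves shift the position by 1.
succ_proj = lambda s: [t for t in (s - 1, s + 1) if 0 <= t <= 4]
table = build_pdb(4, succ_proj)
h = lambda s: table[s // 2]   # admissible heuristic for an original state s
print(h(6))  # 1  (6 projects to 3, one abstract move from the abstract goal 4)
```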
- 38. Branching Scheme. A branching rule specifies how a partial solution is extended into its successors during search. Branching is seldom mentioned in texts, but it has a very strong influence on performance.
- 39. Types of branching. Forward branching: start from the initial state and go forward until a goal state is found. Backward branching: start from the goal state and go backward to the initial state. Other: used when neither of the above methods works out.
- 40. Classification in AI planners. State-space planners: progression and regression planning; search in the space of states; build plans from the head or tail only. The estimated cost consists of two parts: the accumulated cost g(p) of the plan, which depends on p, and the estimated cost h(s) of the remaining plan, which depends only on the state s obtained by progression or regression. These planners have a high branching factor.
- 41. Classification in AI planning. Partial-order planners: search in the space of plans; the partial plan heads or tails can be suitably summarized, which is useful for computing the estimated cost f(p) of the best complete plans.
- 42. Heuristics and branching convergence. It should be possible to combine informative lower bounds and effective branching rules, thus allowing us to prune partial solutions p whose estimated completion cost f(p) exceeds a bound B.
- 43. Search in Non-Deterministic Spaces. Heuristic and constraint-based approaches are not directly applicable to problems involving nondeterminism and feedback. The solution of these problems is not a sequence of actions but a function mapping states into actions, and dynamic programming is used to compute it.
- 44. Dynamic programming
- 45. Mathematics [equations shown on the slide; not captured in the transcript]
- 46. More Mathematics: Bellman equations [equations shown on the slide; not captured in the transcript]
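The equations on these slides are not in the transcript; for the probabilistic model, the Bellman equations referred to take the standard form below, where V is the optimal cost function, Pa(s' | s) the transition probabilities, and V(s) = 0 for goal states:

```latex
V(s) = \min_{a \in A(s)} \Bigl[\, c(a, s) + \sum_{s' \in S} P_a(s' \mid s)\, V(s') \,\Bigr],
\qquad
\pi(s) = \operatorname*{argmin}_{a \in A(s)} \Bigl[\, c(a, s) + \sum_{s' \in S} P_a(s' \mid s)\, V(s') \,\Bigr]
```

The greedy policy π obtained from the optimal V is itself optimal.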
- 47. DP features. Dynamic programming computes values for all states, which works well when the state space can be enumerated; for larger spaces, the time and space requirements of pure DP methods become prohibitive. In comparison, heuristic search methods for deterministic problems can deal with huge state spaces, provided a good heuristic function is available.
- 48. Converging DP and Heuristic methods. New strategies that integrate DP and heuristic search methods have been proposed, e.g., real-time dynamic programming: in every non-goal state s, the best action a according to the heuristic is selected; the heuristic value h(s) of the state s is updated using the Bellman equation; then a random successor state is selected; and this process is repeated until the goal is reached.
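The trial loop described above can be sketched as follows (the chain MDP at the bottom is invented for illustration; its optimal value from state 0 works out to 3 / 0.8 = 3.75, and repeated trials converge toward it):

```python
import random

def rtdp_trial(s0, goals, A, P, c, V, max_steps=100):
    """One trial of Real-Time Dynamic Programming (sketch): in each
    non-goal state, pick the greedy action under the current value
    function V, update V(s) with the Bellman equation, then sample a
    successor. V is a dict, initially empty (heuristic value 0).
    P(a, s) returns a list of (successor, probability) pairs."""
    s = s0
    for _ in range(max_steps):
        if s in goals:
            return
        def q(a):  # expected cost-to-go of action a in state s
            return c(a, s) + sum(p * V.get(s2, 0.0) for s2, p in P(a, s))
        a = min(A(s), key=q)
        V[s] = q(a)                                  # Bellman update
        succs, probs = zip(*P(a, s))
        s = random.choices(succs, weights=probs)[0]  # sample next state

# Invented chain MDP: states 0..3, goal 3; 'go' advances with prob 0.8.
A = lambda s: ['go']
P = lambda a, s: [(min(s + 1, 3), 0.8), (s, 0.2)]
c = lambda a, s: 1.0
V = {}
random.seed(0)
for _ in range(200):
    rtdp_trial(0, {3}, A, P, c, V)
print(round(V[0], 2))  # converges toward 3.75
```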
