# Artificial intelligence introduction

This presentation gives an introduction to artificial intelligence.


### Artificial intelligence introduction

1. INTELLIGENT AGENTS
2. Intelligent Agents  What is an agent?  An agent is anything that perceives its environment through sensors and acts upon that environment through actuators  Examples:  a human is an agent  a robot is also an agent, with cameras and motors  a thermostat detecting room temperature
3. Intelligent Agents
4. Diagram of an agent  what AI should fill in
5. Simple Terms  Percept  the agent's perceptual inputs at any given instant  Percept sequence  the complete history of everything the agent has ever perceived
6. Agent function & program  The agent's behavior is mathematically described by  an agent function  a function mapping any given percept sequence to an action  Practically it is described by  an agent program  the real implementation
7. Vacuum-cleaner world  Perception: Clean or Dirty? Which square is the agent in?  Actions: move left, move right, suck, do nothing
8. Vacuum-cleaner world
9. Program implementing the agent function tabulated in Fig. 2.3: function REFLEX-VACUUM-AGENT([location, status]) returns an action  if status = Dirty then return Suck  else if location = A then return Right  else if location = B then return Left
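The tabulated agent function above translates directly into code. A minimal Python sketch (the location labels `A`/`B` and status strings follow the slide; returning `None` when no rule matches is an assumption of this sketch):

```python
def reflex_vacuum_agent(location, status):
    """Reflex agent for the two-square vacuum world (as tabulated in Fig. 2.3)."""
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent("A", "Dirty"))  # Suck
```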
10. Concept of Rationality  Rational agent  one that does the right thing  = every entry in the table for the agent function is correct (rational)  What is correct?  The actions that cause the agent to be most successful  So we need ways to measure success
11. Performance measure  An objective function that determines how successfully the agent behaves  E.g., 90% or 30%?  An agent's percepts lead to an action sequence; if the sequence is desirable, the agent is said to be performing well  There is no universal performance measure for all agents
12. Performance measure  A general rule:  design performance measures according to what one actually wants in the environment, rather than how one thinks the agent should behave  E.g., in the vacuum-cleaner world we want the floor clean, no matter how the agent behaves  we don't restrict how the agent behaves
13. Rationality  What is rational at any given time depends on four things:  the performance measure defining the criterion of success  the agent's prior knowledge of the environment  the actions that the agent can perform  the agent's percept sequence up to now
14. Rational agent  For each possible percept sequence, a rational agent should select an action expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has  E.g., an exam: maximize marks, based on the questions on the paper and your knowledge
15. Omniscience  An omniscient agent knows the actual outcome of its actions in advance  no other outcomes are possible  However, this is impossible in the real world  An example: crossing a street but being killed by a cargo door falling from 33,000 ft  irrational?
16. Omniscience  Based on the circumstances, the action was rational  Rationality maximizes expected performance  Perfection maximizes actual performance  Hence rational agents are not omniscient
17. Learning  Does a rational agent depend only on the current percept?  No, the past percept sequence should also be used  this is called learning  After experiencing an episode, the agent should adjust its behavior to perform better on the same job next time
18. Autonomy  If an agent just relies on the prior knowledge of its designer rather than its own percepts, the agent lacks autonomy  A rational agent should be autonomous: it should learn what it can to compensate for partial or incorrect prior knowledge  E.g., a clock:  no input (percepts)  runs only on its own algorithm (prior knowledge)  no learning, no experience, etc.
19. Software Agents  Sometimes the environment is not the real world  E.g., a flight simulator, video games, the Internet  these are artificial but very complex environments  Agents working in these environments are called software agents (softbots), because all parts of the agent are software
20. Task environments  Task environments are the problems, while the rational agents are the solutions  Specifying the task environment: a PEAS description  Performance  Environment  Actuators  Sensors  In designing an agent, the first step must always be to specify the task environment as fully as possible  Use an automated taxi driver as an example
21. Task environments  Performance measure  How can we judge the automated driver? Which factors are considered?  getting to the correct destination  minimizing fuel consumption  minimizing the trip time and/or cost  minimizing violations of traffic laws  maximizing safety and comfort, etc.
22. Task environments  Environment  A taxi must deal with a variety of roads  traffic lights, other vehicles, pedestrians, stray animals, road works, police cars, etc.  It must also interact with the customer
23. Task environments  Actuators (for outputs)  control over the accelerator, steering, gear shifting and braking  a display to communicate with the customers  Sensors (for inputs)  detect other vehicles, road situations  GPS (Global Positioning System) to know where the taxi is  many more devices are necessary
24. Task environments  A sketch of the automated taxi driver
25. Properties of task environments  Fully observable vs. partially observable  If an agent's sensors give it access to the complete state of the environment at each point in time, the environment is effectively fully observable  i.e., the sensors detect all aspects that are relevant to the choice of action
26. Partially observable  An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data  Example: a local dirt sensor on the cleaner cannot tell whether other squares are clean or not
27. Properties of task environments  Deterministic vs. stochastic  If the next state of the environment is completely determined by the current state and the actions executed by the agent, the environment is deterministic; otherwise it is stochastic  The cleaner and the taxi driver are stochastic because of unobservable aspects  noise or unknown factors
28. Properties of task environments  Episodic vs. sequential  An episode = the agent's single pair of perception & action  The quality of the agent's action does not depend on other episodes  every episode is independent of the others  An episodic environment is simpler: the agent does not need to think ahead  Sequential: the current action may affect all future decisions  Ex.: taxi driving and chess
29. Properties of task environments  Static vs. dynamic  A dynamic environment changes over time  e.g., the number of people in the street  while a static environment does not  e.g., the destination  Semidynamic: the environment does not change over time, but the agent's performance score does
30. Properties of task environments  Discrete vs. continuous  If there are a limited number of distinct states and clearly defined percepts and actions, the environment is discrete  E.g., a chess game  Continuous: taxi driving
31. Properties of task environments  Single agent vs. multiagent  Playing a crossword puzzle  single agent  Chess playing  two agents  Competitive multiagent environment: chess playing  Cooperative multiagent environment: automated taxi drivers avoiding collisions
32. Properties of task environments  Known vs. unknown  This distinction refers not to the environment itself but to the agent's (or designer's) state of knowledge about the environment  In a known environment, the outcomes for all actions are given (example: solitaire card games)  If the environment is unknown, the agent will have to learn how it works in order to make good decisions (example: a new video game)
33. Examples of task environments
34. Structure of agents
35. Structure of agents  Agent = architecture + program  Architecture = some sort of computing device (sensors + actuators)  (Agent) program = some function that implements the agent mapping = "?"  Agent program = the job of AI
36. Agent programs  Input for the agent program  only the current percept  Input for the agent function  the entire percept sequence  the agent must remember all of it  One way to implement the agent program:  a lookup table (agent function)
37. Agent programs  Skeleton design of an agent program
38. Agent programs  P = the set of possible percepts  T = lifetime of the agent  the total number of percepts it receives  Size of the lookup table: ∑_{t=1}^{T} |P|^t  Consider playing chess:  |P| = 10, T = 150  will require a table of at least 10^150 entries
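The table-size sum above can be checked numerically. A small sketch (the helper name is mine):

```python
def table_size(num_percepts, lifetime):
    """Entries in a lookup-table agent: the sum of |P|**t for t = 1..T,
    one entry per possible percept sequence."""
    return sum(num_percepts ** t for t in range(1, lifetime + 1))

# Tiny case: 2 percepts over a lifetime of 3 -> 2 + 4 + 8 = 14 entries.
print(table_size(2, 3))  # 14

# The slide's chess-like case: |P| = 10, T = 150 already exceeds 10**150.
print(table_size(10, 150) > 10 ** 150)  # True
```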
39. Agent programs  Despite its huge size, the lookup table does what we want  The key challenge of AI:  find out how to write programs that, to the extent possible, produce rational behavior  from a small amount of code  rather than a large number of table entries  E.g., a five-line program implementing Newton's method  vs. huge tables of square roots, sines, cosines, …
40. Types of agent programs  Four types:  Simple reflex agents  Model-based reflex agents  Goal-based agents  Utility-based agents
41. Simple reflex agents  They use just condition-action rules  The rules have the form "if … then …"  Efficient, but with a narrow range of applicability, because knowledge sometimes cannot be stated explicitly  They work only if the environment is fully observable
42. Simple reflex agents
43. Simple reflex agents (2)
44. A Simple Reflex Agent in Nature  Percepts: (size, motion)  Rules:  (1) If small moving object, then activate SNAP  (2) If large moving object, then activate AVOID and inhibit SNAP  ELSE (not moving) then NOOP  needed for completeness  Action: SNAP, AVOID, or NOOP
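The frog-like rule set above is easy to express as a single condition-action function. A sketch (the size labels and return strings are assumptions of this sketch):

```python
def simple_reflex_agent(size, moving):
    """Condition-action rules from the slide: snap at small moving objects,
    avoid large moving ones, and do nothing otherwise."""
    if moving and size == "small":
        return "SNAP"
    if moving and size == "large":
        return "AVOID"   # the AVOID rule also inhibits SNAP
    return "NOOP"        # needed for completeness

print(simple_reflex_agent("small", True))  # SNAP
```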
45. Model-based Reflex Agents  For a world that is partially observable  the agent has to keep track of an internal state  that depends on the percept history  reflecting some of the unobserved aspects  E.g., driving a car and changing lanes  This requires two types of knowledge:  how the world evolves independently of the agent  how the agent's actions affect the world
46. Example Table Agent With Internal State  IF saw an object ahead and turned right, and it's now clear ahead, THEN go straight  IF saw an object on my right, turned right, and there's an object ahead again, THEN halt  IF see no objects ahead, THEN go straight  IF see an object ahead, THEN turn randomly
47. Example Reflex Agent With Internal State: Wall-Following  Actions: left, right, straight, open-door  Rules:  1. If open(left) and open(right) and open(straight) then choose randomly between right and left  2. If wall(left) and open(right) and open(straight) then straight  3. If wall(right) and open(left) and open(straight) then straight  4. If wall(right) and open(left) and wall(straight) then left  5. If wall(left) and open(right) and wall(straight) then right  6. If wall(left) and door(right) and wall(straight) then open-door  7. If wall(right) and wall(left) and open(straight) then straight  8. (Default) Move randomly
48. Model-based Reflex Agents  The agent has memory
49. Model-based Reflex Agents
50. Goal-based agents  The current state of the environment is not always enough  The goal is another issue to achieve  it provides the judgment of rationality / correctness  Actions are chosen to achieve goals, based on  the current state  the current percept
51. Goal-based agents  Conclusion:  Goal-based agents are less efficient but more flexible  one agent  different goals  different tasks  Search and planning  two other sub-fields in AI  find the action sequences that achieve the agent's goal
52. Goal-based agents
53. Utility-based agents  Goals alone are not enough to generate high-quality behavior  E.g., a meal in the canteen: good or not?  Many action sequences achieve the goals  some are better and some worse  If goal means success, then utility means the degree of success (how successful it is)
54. Utility-based agents (4)
55. Utility-based agents  State A is said to have higher utility if it is preferred to other states  Utility is therefore a function that maps a state onto a real number  the degree of success
56. Utility-based agents (3)  Utility has several advantages:  When there are conflicting goals  only some of the goals, but not all, can be achieved  utility describes the appropriate trade-off  When there are several goals, none of which can be achieved with certainty  utility provides a way to weigh the decision-making
57. Learning Agents  After an agent is programmed, can it work immediately?  No, it still needs teaching  In AI, once an agent is done  we teach it by giving it a set of examples  and test it using another set of examples  We then say the agent learns  a learning agent
58. Learning Agents  Four conceptual components:  Learning element  makes improvements  Performance element  selects external actions  Critic  tells the learning element how well the agent is doing with respect to a fixed performance standard (feedback from the user or examples: good or not?)  Problem generator  suggests actions that will lead to new and informative experiences
59. Learning Agents
60. Problem Solving
61. Problem-Solving Agent  (diagram: sensors  agent  actuators, interacting with the environment)
62. Problem-Solving Agent  Formulate goal  Formulate problem  states  actions  Find solution
63. Example: Route finding
64. Holiday Planning  On holiday in Romania; currently in Arad  Flight leaves tomorrow from Bucharest  Formulate goal: be in Bucharest  Formulate problem:  states: various cities  actions: drive between cities  Find solution:  sequence of cities: Arad, Sibiu, Fagaras, Bucharest
65. Problem Solving  States  Actions  Start  Solution  Goal
66. Vacuum World
67. Problem-solving agent  Four general steps in problem solving:  Goal formulation  what are the successful world states?  Problem formulation  what actions and states to consider, given the goal  Search  determine the possible sequences of actions that lead to states of known value, then choose the best sequence  Execute  given the solution, perform the actions
68. Problem-solving agent  function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action  static: seq, an action sequence; state, some description of the current world state; goal, a goal; problem, a problem formulation  state ← UPDATE-STATE(state, percept)  if seq is empty then  goal ← FORMULATE-GOAL(state)  problem ← FORMULATE-PROBLEM(state, goal)  seq ← SEARCH(problem)  action ← FIRST(seq)  seq ← REST(seq)  return action
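The pseudocode above can be sketched in Python, holding the pseudocode's static variables in a closure. Everything here is an illustrative reconstruction, not a fixed API: the four capitalized helpers (UPDATE-STATE, FORMULATE-GOAL, FORMULATE-PROBLEM, SEARCH) are supplied by the caller.

```python
def make_problem_solving_agent(update_state, formulate_goal,
                               formulate_problem, search):
    """Closure version of SIMPLE-PROBLEM-SOLVING-AGENT: the pseudocode's
    'static' variables seq and state become enclosed state."""
    seq, state = [], None

    def agent(percept):
        nonlocal seq, state
        state = update_state(state, percept)
        if not seq:                               # no plan left: replan
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq = search(problem)
        action, seq = seq[0], seq[1:]             # FIRST(seq), REST(seq)
        return action

    return agent
```

With stub helpers and a canned two-action plan, the agent simply replays the plan one action per percept.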
69. Assumptions Made (for now)  The environment is static  The environment is discretizable  The environment is observable  The actions are deterministic
70. Problem formulation  A problem is defined by:  An initial state, e.g. Arad  A successor function S(X) = set of action-state pairs  e.g. S(Arad) = {<Arad → Zerind, Zerind>, …}  initial state + successor function = state space  A goal test, which can be  explicit, e.g. x = 'at Bucharest'  implicit, e.g. checkmate(x)  A path cost (additive)  e.g. sum of distances, number of actions executed, …  c(x, a, y) is the step cost, assumed to be ≥ 0  A solution is a sequence of actions from the initial state to a goal state  An optimal solution has the lowest path cost
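A problem defined this way fits naturally into a small container class. A sketch (the class and attribute names are my own; the Arad/Zerind fragment follows the slide):

```python
class Problem:
    """A search problem: initial state, successor function, goal test,
    and additive step costs c(x, a, y) >= 0, mirroring the slide."""
    def __init__(self, initial, successors, goal_test,
                 step_cost=lambda x, a, y: 1):
        self.initial = initial
        self.successors = successors    # state -> iterable of (action, state)
        self.goal_test = goal_test
        self.step_cost = step_cost

# Toy fragment of the Romania map from the slide:
romania = {"Arad": [("Arad->Zerind", "Zerind"), ("Arad->Sibiu", "Sibiu")],
           "Zerind": [], "Sibiu": []}
p = Problem("Arad", lambda s: romania.get(s, []), lambda s: s == "Sibiu")
```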
71. Selecting a state space  The real world is absurdly complex  the state space must be abstracted for problem solving  (Abstract) state = set of real states  (Abstract) action = complex combination of real actions  e.g. Arad → Zerind represents a complex set of possible routes, detours, rest stops, etc.  The abstraction is valid if the path between two abstract states is reflected in the real world  (Abstract) solution = set of real paths that are solutions in the real world  Each abstract action should be "easier" than the real problem
72. Example: vacuum world  States??  Initial state??  Actions??  Goal test??  Path cost??
73. Example: vacuum world  States?? two locations, each with or without dirt: 2 × 2² = 8 states  Initial state?? any state can be the initial state  Actions?? {Left, Right, Suck}  Goal test?? check whether all squares are clean  Path cost?? number of actions to reach the goal
74. Example: 8-puzzle  States??  Initial state??  Actions??  Goal test??  Path cost??
75. Example: 8-puzzle  States?? integer location of each tile  Initial state?? any state can be the initial state  Actions?? {Left, Right, Up, Down}  Goal test?? check whether the goal configuration is reached  Path cost?? number of actions to reach the goal
76. Example: 8-puzzle  (diagrams of an initial state and the goal state)
77. Example: 8-puzzle  (diagram of the successor states reachable from an initial configuration)
78. Example: 8-puzzle  Size of the state space = 9!/2 = 181,440  15-puzzle  ≈ 0.65 × 10^12 states  0.18 sec vs. 6 days  24-puzzle  ≈ 0.5 × 10^25 states  12 billion years  (at 10 million states/sec)
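The 9!/2 figure reflects the fact that only half of all tile permutations are reachable by legal moves; a quick numerical check:

```python
from math import factorial

def puzzle_states(side):
    """Reachable states of the sliding-tile puzzle on a side x side board:
    half of the (side*side)! tile arrangements are reachable."""
    return factorial(side * side) // 2

print(puzzle_states(3))  # 181440  (the 8-puzzle)
```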
79. Example: 8-queens  Place 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal  (diagrams: a solution and a non-solution)
80. Example: 8-queens problem  Incremental formulation vs. complete-state formulation  States??  Initial state??  Actions??  Goal test??  Path cost??
81. Example: 8-queens  Formulation #1:  States: any arrangement of 0 to 8 queens on the board  Initial state: 0 queens on the board  Actions: add a queen to any square  Goal test: 8 queens on the board, none attacked  Path cost: none  64^8 states with 8 queens
82. Example: 8-queens  Formulation #2:  States: any arrangement of k = 0 to 8 queens in the k leftmost columns with none attacked  Initial state: 0 queens on the board  Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen  2,067 states  Goal test: 8 queens on the board
83. Real-world Problems  Route finding  Touring problems  VLSI layout  Robot navigation  Automatic assembly sequencing  Drug design  Internet searching  …
84. Route Finding  states  locations  initial state  starting point  successor function (operators)  move from one location to another  goal test  arrive at a certain location  path cost  may be quite complex  money, time, travel comfort, scenery, …
85. Traveling Salesperson  states  locations / cities  illegal states  each city may be visited only once  visited cities must be kept as state information  initial state  starting point, no cities visited  successor function (operators)  move from one location to another  goal test  all locations visited, agent at the initial location  path cost  distance between locations
86. VLSI Layout  states  positions of components and wires on a chip  initial state  incremental: no components placed  complete-state: all components placed (e.g. randomly, manually)  successor function (operators)  incremental: place components, route wire  complete-state: move component, move wire  goal test  all components placed  components connected as specified  path cost  may be complex  distance, capacity, number of connections per component
87. Robot Navigation  states  locations  position of actuators  initial state  start position (dependent on the task)  successor function (operators)  movement, actions of actuators  goal test  task-dependent  path cost  may be very complex  distance, energy consumption
88. Assembly Sequencing  states  location of components  initial state  no components assembled  successor function (operators)  place component  goal test  system fully assembled  path cost  number of moves
89. Search Strategies  A strategy is defined by picking the order of node expansion  Performance measures:  Completeness  does it always find a solution if one exists?  Time complexity  number of nodes generated/expanded  Space complexity  maximum number of nodes in memory  Optimality  does it always find a least-cost solution?  Time and space complexity are measured in terms of  b  maximum branching factor of the search tree  d  depth of the least-cost solution  m  maximum depth of the state space (may be ∞)
90. Uninformed search strategies  (a.k.a. blind search) = use only the information available in the problem definition  When strategies can determine whether one non-goal state is better than another  informed search  Categories defined by the expansion algorithm:  Breadth-first search  Uniform-cost search  Depth-first search  Depth-limited search  Iterative deepening search  Bidirectional search
91.98. Breadth-First Strategy  Expand the shallowest unexpanded node  Implementation: the fringe is a FIFO queue  new nodes are inserted at the end of the queue  On the example tree of nodes 1 to 9, the fringe evolves step by step: (1)  (2, 3)  (3, 4, 5)  (4, 5, 6, 7)  (5, 6, 7, 8)  (6, 7, 8)  (7, 8, 9)  (8, 9)
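The FIFO-fringe behavior traced above can be sketched as follows (the tree dictionary is my reconstruction of the slide's 9-node example):

```python
from collections import deque

def breadth_first_search(start, successors, is_goal):
    """BFS: the fringe is a FIFO queue, so the shallowest node is
    expanded first and new nodes are inserted at the end."""
    fringe = deque([(start, [start])])
    visited = {start}
    while fringe:
        node, path = fringe.popleft()      # take the oldest (shallowest) node
        if is_goal(node):
            return path
        for child in successors(node):
            if child not in visited:
                visited.add(child)
                fringe.append((child, path + [child]))
    return None

# Reconstruction of the slide's tree: 1 -> {2, 3}, 2 -> {4, 5}, 3 -> {6, 7},
# 4 -> {8}, 6 -> {9}.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [8], 6: [9]}
print(breadth_first_search(1, lambda n: tree.get(n, []), lambda n: n == 9))
# [1, 3, 6, 9]
```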
99. Breadth-first search: evaluation  Completeness:  does it always find a solution if one exists?  YES  if the shallowest goal node is at some finite depth d  condition: b is finite  (the maximum number of successor nodes is finite)
100. Breadth-first search: evaluation  Completeness:  YES (if b is finite)  Time complexity:  assume a state space where every state has b successors  the root has b successors, each node at the next level again has b successors (b^2 in total), …  assume the solution is at depth d  worst case: expand all but the last node at depth d  total number of nodes generated:  1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))
101. Breadth-first search: evaluation  Completeness:  YES (if b is finite)  Time complexity:  total number of nodes generated:  1 + b + b^2 + … + b^d + b(b^d − 1) = O(b^(d+1))  Space complexity: O(b^(d+1))
102. Breadth-first search: evaluation  Completeness:  YES (if b is finite)  Time complexity: O(b^(d+1))  Space complexity: O(b^(d+1))  Optimality:  does it always find the least-cost solution?  In general YES  unless actions have different costs
103. Breadth-first search: evaluation  Lessons:  memory requirements are a bigger problem than execution time  exponential-complexity search problems cannot be solved by uninformed search methods for any but the smallest instances  With b = 10, 10,000 nodes/sec, 1,000 bytes/node:  depth 2  1,100 nodes, 0.11 seconds, 1 megabyte  depth 4  111,100 nodes, 11 seconds, 106 megabytes  depth 6  10^7 nodes, 19 minutes, 10 gigabytes  depth 8  10^9 nodes, 31 hours, 1 terabyte  depth 10  10^11 nodes, 129 days, 101 terabytes  depth 12  10^13 nodes, 35 years, 10 petabytes
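The node counts can be reproduced from the closed form given two slides earlier (the exact sums differ from the table's rounded entries by one node):

```python
def bfs_nodes(b, d):
    """Worst-case nodes generated by BFS to a goal at depth d:
    1 + b + b**2 + ... + b**d + b*(b**d - 1)."""
    return sum(b ** i for i in range(d + 1)) + b * (b ** d - 1)

print(bfs_nodes(10, 2))  # 1101   -- the table's ~1,100 nodes at depth 2
print(bfs_nodes(10, 4))  # 111101 -- the table's ~111,100 nodes at depth 4
```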
104. Uniform-cost search  Extension of breadth-first search:  expand the node with the lowest path cost  Implementation: fringe = queue ordered by path cost  UC search is the same as BF search when all step costs are equal
105. Uniform-cost search  Completeness:  YES, if step cost ≥ ε (a small positive constant)  Time complexity:  assume C* is the cost of the optimal solution  assume every action costs at least ε  worst case: O(b^(C*/ε))  Space complexity:  same as the time complexity  Optimality:  nodes are expanded in order of increasing path cost  YES, if complete
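A cost-ordered fringe is conveniently implemented with a binary heap. A sketch (the toy graph, where the cheapest path is not the shallowest one, is my own):

```python
import heapq

def uniform_cost_search(start, successors, is_goal):
    """UCS: the fringe is a priority queue ordered by path cost g(n).
    successors(state) yields (step_cost, next_state) pairs."""
    fringe = [(0, start, [start])]
    best = {start: 0}
    while fringe:
        cost, node, path = heapq.heappop(fringe)   # lowest path cost first
        if is_goal(node):
            return cost, path
        for step, child in successors(node):
            new_cost = cost + step
            if child not in best or new_cost < best[child]:
                best[child] = new_cost
                heapq.heappush(fringe, (new_cost, child, path + [child]))
    return None

# Going S -> A -> G (cost 2) beats the direct S -> G edge (cost 5).
graph = {"S": [(1, "A"), (5, "G")], "A": [(1, "G")], "G": []}
print(uniform_cost_search("S", lambda s: graph[s], lambda s: s == "G"))
# (2, ['S', 'A', 'G'])
```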
106.118. Depth-First Strategy  Expand the deepest unexpanded node  Implementation: the fringe is a LIFO queue (= stack)  On the example 5-node tree (1 has children 2 and 3; 2 has children 4 and 5), the fringe evolves as (1)  (2, 3)  (4, 5, 3)  …, backtracking to 3 once the left subtree is exhausted
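The LIFO-fringe behavior can be sketched the same way as BFS, swapping the queue for a stack (the 5-node tree follows the slide's example):

```python
def depth_first_search(start, successors, is_goal):
    """DFS: the fringe is a LIFO stack, so the deepest node is expanded first."""
    fringe = [(start, [start])]
    visited = set()
    while fringe:
        node, path = fringe.pop()          # take the newest (deepest) node
        if is_goal(node):
            return path
        if node in visited:
            continue
        visited.add(node)
        # Push children reversed so the leftmost child is expanded first.
        for child in reversed(successors(node)):
            fringe.append((child, path + [child]))
    return None

# The slide's tree: 1 has children 2 and 3; 2 has children 4 and 5.
tree = {1: [2, 3], 2: [4, 5]}
print(depth_first_search(1, lambda n: tree.get(n, []), lambda n: n == 5))
# [1, 2, 5]
```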
119. Depth-first search: evaluation  Completeness:  does it always find a solution if one exists?  NO  unless the search space is finite and no loops are possible
120. Depth-first search: evaluation  Completeness:  NO, unless the search space is finite  Time complexity: O(b^m)  terrible if m is much larger than d (the depth of the optimal solution)  but if there are many solutions, it can be faster than BFS
121. Depth-first search: evaluation  Completeness:  NO, unless the search space is finite  Time complexity: O(b^m)  Space complexity: O(bm + 1)  Backtracking search uses even less memory  one successor instead of all b
122. Depth-first search: evaluation  Completeness:  NO, unless the search space is finite  Time complexity: O(b^m)  Space complexity: O(bm + 1)  Optimality: NO
123. Depth-Limited Strategy  Depth-first with depth cutoff k (the maximal depth below which nodes are not expanded)  Three possible outcomes:  solution  failure (no solution)  cutoff (no solution within the cutoff)  Solves the infinite-path problem  If k < d, incompleteness results  If k > d, not optimal  Time complexity: O(b^k)  Space complexity: O(bk)
124. Iterative Deepening Strategy  Repeat for k = 0, 1, 2, …: perform depth-first search with depth cutoff k  Complete  Optimal if step cost = 1  Time complexity: (d+1)·1 + d·b + (d−1)·b^2 + … + 1·b^d = O(b^d)  Space complexity: O(bd)
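The strategy above combines a recursive depth-limited search with an outer loop over cutoffs. A sketch (the `max_depth` safety bound and the test tree are my additions):

```python
def depth_limited(node, successors, is_goal, limit, path):
    """Recursive depth-first search with depth cutoff `limit`."""
    if is_goal(node):
        return path
    if limit == 0:
        return None                       # cutoff reached
    for child in successors(node):
        result = depth_limited(child, successors, is_goal,
                               limit - 1, path + [child])
        if result is not None:
            return result
    return None

def iterative_deepening(start, successors, is_goal, max_depth=50):
    """Run depth-limited search with cutoff k = 0, 1, 2, ..."""
    for k in range(max_depth + 1):
        result = depth_limited(start, successors, is_goal, k, [start])
        if result is not None:
            return result
    return None

tree = {1: [2, 3], 2: [4, 5], 3: [6, 7]}
print(iterative_deepening(1, lambda n: tree.get(n, []), lambda n: n == 6))
# [1, 3, 6]
```

Shallow levels are re-expanded on every iteration, but as the slide's sum shows, the repeated work is dominated by the deepest level, so the total remains O(b^d).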
125. Comparison of Strategies  Breadth-first is complete and optimal, but has high space complexity  Depth-first is space-efficient, but neither complete nor optimal  Iterative deepening combines the benefits of DFS and BFS and is asymptotically optimal
126. Bidirectional Strategy  2 fringe queues: FRINGE1 and FRINGE2  Time and space complexity = O(b^(d/2)) ≪ O(b^d)  The predecessor of each node should be efficiently computable
127. Summary of algorithms

| Criterion | Breadth-first | Uniform-cost | Depth-first | Depth-limited | Iterative deepening | Bidirectional search |
| --- | --- | --- | --- | --- | --- | --- |
| Complete? | YES* | YES* | NO | YES, if l ≥ d | YES | YES* |
| Time | b^(d+1) | b^(C*/ε) | b^m | b^l | b^d | b^(d/2) |
| Space | b^(d+1) | b^(C*/ε) | b·m | b·l | b·d | b^(d/2) |
| Optimal? | YES* | YES* | NO | NO | YES | YES |