The document discusses search techniques in artificial intelligence. It defines search as finding a sequence of actions to achieve a goal state. Common problems that use search include problem solving, natural language processing, computer vision, and machine learning. Search involves defining a search space with states, operators to transition between states, an initial state, and a goal test. Popular uninformed search techniques like breadth-first search and depth-first search are explained. The document also introduces informed search techniques like uniform cost search that use cost information to guide the search towards optimal solutions.
AI_Session 11: searching with Non-Deterministic Actions and partial observati... (Asst. Prof. M. Gokilavani)
This document summarizes a session on problem solving by search in artificial intelligence. It discusses uninformed and informed search strategies like breadth-first search, uniform cost search, depth-first search, greedy best-first search, and A* search. It also covers searching with non-deterministic actions, partial observations, and online search agents operating in unknown environments. Examples discussed include the vacuum world problem and how search trees are used to handle non-determinism through contingency planning. The next session will cover online search agents operating in unknown environments.
I. INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III... (vikas dhakane)
The document discusses different types of search algorithms in artificial intelligence. It describes informed search which uses heuristics and knowledge to find solutions more efficiently compared to uninformed searches. It provides details on heuristic functions which estimate how close a state is to the goal. The document also explains best first search, an informed search technique that uses heuristic values and a priority queue to iteratively explore the most promising nodes first in searching for a solution.
This presentation discusses the state space problem formulation and different search techniques to solve these. Techniques such as Breadth First, Depth First, Uniform Cost and A star algorithms are covered with examples. We also discuss where such techniques are useful and the limitations.
The document discusses problem solving agents and search algorithms. It describes problem solving as having four steps: goal formulation, problem formulation, search, and execution. It then discusses different types of problems agents may face, such as single state problems and problems with partial information. The document introduces tree search algorithms and strategies for searching a state space, such as breadth-first search. It analyzes the performance of breadth-first search and notes its exponential time and memory complexity for large problems.
This presentation discusses the following topics:
What is A-Star (A*) Algorithm in Artificial Intelligence?
A* Algorithm Steps
Why is A* Search Algorithm Preferred?
A* and Its Basic Concepts
What is a Heuristic Function?
Admissibility of the Heuristic Function
Consistency of the Heuristic Function
Artificial Intelligence involves representing problems as state spaces and using algorithms to search the state space to solve the problem. The document discusses key concepts in problem solving using search including representing the problem as states, defining state transitions with successor functions, and exploring the resulting state space to find a solution. It provides examples of representing common problems like the 8-puzzle and n-queens as state spaces. The document also summarizes uninformed search strategies like breadth-first, depth-first, and iterative deepening search that use the problem definition to search the state space without using heuristics.
The document discusses the 8-puzzle problem and the A* algorithm. The 8-puzzle problem involves a 3x3 grid with 8 numbered tiles and 1 blank space that can be moved. The A* algorithm maintains a tree of paths from the initial to final state, extending the paths one step at a time until the final state is reached. It is complete and optimal but depends on the accuracy of the heuristic used to estimate costs.
This document discusses various heuristic search techniques used in artificial intelligence. It begins by defining heuristics as techniques that find approximate solutions faster than classic methods when exact solutions are not possible or not feasible due to time or memory constraints. It then describes heuristic search, hill climbing, simulated annealing, A* search, and best-first search. Hill climbing is presented as an example heuristic technique that evaluates neighboring states to move toward an optimal solution. The document also discusses problems that can occur with hill climbing like getting stuck in local maxima.
1) Hill climbing is a local search algorithm that continuously moves in the direction of increasing value to find the optimal solution. It terminates when no neighbor has a higher value.
2) It has a linear time complexity but constant space complexity. It is used to optimize mathematical problems like the traveling salesman problem.
3) There are different types of hill climbing algorithms like simple hill climbing, steepest ascent hill climbing, and stochastic hill climbing that vary in how they evaluate neighbor states.
Local search algorithms operate by examining the current node and its neighbors. They are suitable for problems where the solution is the goal state itself rather than the path to get there. Hill-climbing and simulated annealing are examples of local search algorithms. Hill-climbing continuously moves to higher value neighbors until a local peak is reached. Simulated annealing also examines random moves and can accept moves to worse states based on probability. Both aim to find an optimal or near-optimal solution but can get stuck in local optima.
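The acceptance rule described above can be sketched in a few lines. This is a minimal simulated-annealing sketch, not any document's reference implementation: the toy objective, integer moves, starting temperature, and cooling schedule are all invented for illustration. Worse moves are accepted with probability exp(-delta / T), and T cools each step, so the walk becomes greedy descent over time.

```python
import math
import random

def simulated_annealing(f, start, t0=10.0, cooling=0.95, iters=500, seed=0):
    """Minimize f over the integers by randomized local moves."""
    rng = random.Random(seed)
    current, t = start, t0
    for _ in range(iters):
        candidate = current + rng.choice((-1, 1))
        delta = f(candidate) - f(current)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current = candidate     # accept: better, or worse with some probability
        t = max(t * cooling, 1e-9)  # cool toward (near) zero temperature
    return current
```

Early on, high temperature lets the walk cross bad regions; once cooled, only improving moves survive, which is what lets annealing escape local optima that trap plain hill climbing.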
This document discusses hill climbing, an optimization technique used to find the best solution to a problem. It begins by explaining hill climbing search and its implementation. It then provides examples of applying hill climbing to solve the N-Queen problem and the 8-puzzle problem. The document notes some drawbacks of hill climbing and introduces random restart hill climbing as a variation that can help overcome local maxima issues.
Hill climbing algorithm in artificial intelligence (sandeep54552)
The hill climbing algorithm is a local search technique used to find an optimal solution to a problem. It starts with an initial solution and iteratively moves to a neighboring solution of improved value until no better solution can be found. Simple hill climbing considers one neighbor at a time and moves to the first improvement, while steepest ascent examines all neighbors and moves to the one with the best value. The algorithm can get stuck at local optima rather than finding the global optimum; techniques like simulated annealing incorporate randomness to help escape them.
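The hill-climbing loop described in these summaries can be sketched as follows. The objective function and starting point here are invented for illustration: the loop moves to the better neighbor until no neighbor improves, at which point it has reached a (possibly only local) maximum.

```python
def hill_climb(f, start, step=1, max_iters=1000):
    """Steepest ascent over the two integer neighbors of the current point."""
    current = start
    for _ in range(max_iters):
        best = max((current - step, current + step), key=f)
        if f(best) <= f(current):   # no neighbor is better: a (local) maximum
            return current
        current = best
    return current

def objective(x):
    return -(x - 3) ** 2            # single peak at x = 3
```

With a single-peaked objective like this one, the climb always reaches the global maximum; on a multi-peaked objective the same loop stops at whichever local maximum is uphill from the start.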
A rational agent is an artificial intelligence agent that has clear preferences, models uncertainty, and acts to maximize its performance based on possible actions. Rational agents are designed to make the right decisions using techniques from game theory and decision theory for real-world problems. An AI agent is considered rational if it selects the best possible action in each situation to receive a positive reward, avoiding wrong actions that result in negative rewards. The rationality of an agent depends on factors like its performance measure, prior knowledge of its environment, available actions, and percepts.
The Wumpus World is a simulated cave environment where an agent must explore rooms connected by passageways to find gold and escape without being eaten by the Wumpus or falling in a pit. The agent has sensors to detect stench, breeze, glitter, bumps and screams but can only see local information. It can move between rooms or use actions like shoot, grab, and climb out. The goal is to get the highest score by finding gold and escaping while taking the fewest actions and avoiding dangers.
1) The document describes uninformed search algorithms, including breadth-first search and depth-first search.
2) Breadth-first search explores all neighbors of the initial node before moving to the next level, finding the shortest path.
3) Depth-first search explores deep paths first, expanding the deepest node at each step and implementing the fringe as a stack.
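The BFS/DFS contrast above comes down to whether the fringe is a FIFO queue or a LIFO stack. A minimal sketch over an invented adjacency-list graph (the graph itself is made up for illustration):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: FIFO frontier; returns a path with the fewest edges."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

def dfs_path(graph, start, goal):
    """Depth-first search: LIFO frontier (a stack); returns some path, not necessarily shortest."""
    frontier = [[start]]
    visited = set()
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                frontier.append(path + [nbr])
    return None

GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
```

The two functions differ only in which end of the frontier they pop from, which is exactly the queue-versus-stack distinction the summary draws.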
Production systems provide a structure for modeling problem solving as a search process. A production system consists of rules, knowledge databases, a control strategy, and a rule applier. The rules take the form of condition-action pairs. The control strategy determines the order of rule application and resolves conflicts. Production systems can be classified based on whether rule application is monotonic or non-monotonic. They provide modularity and a natural representation but can suffer from opacity, inefficiency, and lack of learning abilities. Choosing the right production system depends on characteristics of the problem such as decomposability and predictability.
I. Hill climbing algorithm II. Steepest hill climbing algorithm (vikas dhakane)
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
Prolog is a declarative logic programming language where programs consist of facts and rules. Facts are terms that are always true, while rules define relationships between terms using logic notation "if-then". A Prolog program is run by asking queries of the program's database. Variables must start with an uppercase letter and are used to represent unknown values, while atoms are constants that represent known values.
- A state space consists of nodes representing problem states and arcs representing moves between states. It can be represented as a tree or graph.
- To solve a problem using search, it must first be represented as a state space with an initial state, goal state(s), and legal operators defining state transitions.
- Different search algorithms like depth-first, breadth-first, A*, and best-first are then applied to traverse the state space to find a solution path from initial to goal state.
- Heuristic functions can be used to guide search by estimating state proximity to the goal, improving efficiency over uninformed searches.
I. ITERATIVE DEEPENING DEPTH FIRST SEARCH (ID-DFS) II. INFORMED SEARCH IN ARTIFI... (vikas dhakane)
This document discusses network congestion and congestion control. It defines congestion as occurring when there are too many packets present in part of a subnet, degrading performance. Factors that can influence congestion include bursty traffic patterns, insufficient router memory or bandwidth, and slow router processing. Congestion control techniques aim to prevent or remove congestion through open-loop methods like traffic scheduling, or closed-loop methods using feedback to adjust system operations. Traffic-aware routing and admission control are also discussed as ways to minimize congestion.
This document summarizes key points from a session on problem solving by search in artificial intelligence. It discusses search strategies like breadth-first search, uniform cost search, depth-first search, greedy best-first search, and A* search. It also covers local search algorithms for continuous spaces, including hill climbing and simulated annealing. Examples of local maxima, plateaus, and ridges as problems for hill climbing are provided along with solutions like backtracking, random moves, and bidirectional search. The next session topics are on non-deterministic actions, partial observations, online search, and unknown environments.
This document discusses various problem solving techniques through search. It begins with an introduction to problem representation, problem solving through search, and examples like the 8-puzzle and missionaries and cannibals problem. It then covers search methods and algorithms like breadth-first search, depth-first search, and A* search. Key concepts discussed include problem states, operators, initial states, goals, and search strategies. Real-world problems are abstracted and represented as states, operators, and paths for solving through search techniques.
A heuristic is a technique for solving a problem faster than classic methods, or for finding an approximate solution when classic methods cannot find an exact one. It is a kind of shortcut, since we typically trade one of optimality, completeness, accuracy, or precision for speed. A heuristic function guides a search algorithm: at each branching step, it evaluates the available information and decides which branch to follow.
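A concrete example of such a function, using the 8-puzzle that appears throughout these summaries: the summed Manhattan distance of each tile from its goal position. The specific goal layout below is an assumption (tiles 1-8 in order, 0 for the blank, read row by row on a 3x3 grid).

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # assumed goal configuration, row by row

def manhattan(state, goal=GOAL):
    """Sum over non-blank tiles of |row - goal_row| + |col - goal_col|."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:               # the blank does not count
            continue
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total
```

Because each move slides one tile one square, this heuristic never overestimates the true number of moves remaining, which is the admissibility property discussed below.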
The A* algorithm is used to find the shortest path between nodes on a graph. It uses two lists - OPEN and CLOSED - to track nodes. The algorithm calculates f(n)=g(n)+h(n) to determine which node to expand next, where g(n) is the cost to reach node n from the starting node and h(n) is a heuristic estimate of the cost to reach the goal from n. The document provides an example of using A* to solve an 8-puzzle problem and find the shortest path between two nodes on a graph where edge distances and heuristic values are provided.
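The f(n) = g(n) + h(n) bookkeeping described above can be sketched with a priority queue standing in for the OPEN list and a set for CLOSED. The road graph `ROADS` and heuristic table `H` below are invented for illustration (H is chosen to be admissible for this graph).

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the OPEN node with the lowest f = g + h until the goal is popped."""
    open_list = [(h[start], 0, start, [start])]   # entries: (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                heapq.heappush(open_list, (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")

ROADS = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 2)]}
H = {"S": 4, "A": 3, "B": 2, "G": 0}   # admissible: never overestimates true cost
```

With an admissible h, the first time the goal is popped its g value is the optimal path cost, which is the completeness-and-optimality guarantee the summary refers to.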
Forward chaining is a data-driven reasoning method that applies rules to existing facts to deduce new facts, adding them to the knowledge base. It starts with known facts and uses inference rules to reach a goal or conclusion. Backward chaining is a goal-driven method that starts with a desired goal and works backwards to see if existing facts and rules can support reaching that goal. Both methods have tradeoffs in efficiency depending on whether the starting point is facts or a specific goal.
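The data-driven direction can be sketched as a fixed-point loop over condition-action rules; the facts and rules below are invented for illustration, not taken from any of the summarized documents.

```python
def forward_chain(facts, rules):
    """Apply (premises, conclusion) rules to the fact base until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)   # deduce a new fact and keep iterating
                changed = True
    return facts

RULES = [
    ({"croaks", "eats_flies"}, "frog"),
    ({"frog"}, "green"),
]
```

Backward chaining would instead start from a goal such as "green" and recurse on the premises of any rule concluding it, succeeding when every premise bottoms out in a known fact.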
Best-first search is a heuristic search algorithm that expands the most promising node first. It uses an evaluation function f(n) that estimates the cost to reach the goal from each node n. Nodes are ordered in the fringe by increasing f(n). A* search is a special case of best-first search that uses an admissible heuristic function h(n) and is guaranteed to find the optimal solution.
This document discusses different search algorithms for traversing tree structures:
- Depth-first search (DFS) explores the deepest paths first, using a stack data structure. It is complete but not optimal.
- Breadth-first search (BFS) explores all nodes at each depth level first, before deeper levels, using a queue. It finds the minimum depth goal node.
- Uniform cost search prioritizes exploring the lowest cost path first, using a priority queue ordered by path cost. It is optimal, finding the least cost goal node.
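The last variant above swaps the queue for a priority queue keyed on path cost. A minimal uniform-cost-search sketch over an invented weighted graph: because the frontier is ordered by g(n), the first time the goal is popped the path is least-cost.

```python
import heapq

def uniform_cost(graph, start, goal):
    """Expand the cheapest frontier node first; optimal for nonnegative step costs."""
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return float("inf"), None

EDGES = {"S": [("A", 2), ("B", 5)], "A": [("B", 1), ("G", 9)], "B": [("G", 3)]}
```

Note that the cheapest path here (S-A-B-G, cost 6) has more edges than S-B-G; BFS would return the latter, which is exactly the difference between minimizing depth and minimizing cost.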
16890 unit 2 heuristic search techniques (Jais Balta)
The document discusses heuristic search techniques for artificial intelligence. It covers greedy search which uses a heuristic function f(n) = h(n) to choose the successor node with the lowest estimated cost to reach the goal. An example of the travelling salesman problem is provided to illustrate greedy search.
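The greedy idea applied to the travelling salesman problem is usually the nearest-neighbour rule: from each city, take the cheapest edge to an unvisited city. The distance table below is invented for illustration.

```python
DIST = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def d(a, b):
    """Symmetric lookup into the undirected distance table."""
    return DIST.get((a, b)) or DIST.get((b, a))

def greedy_tour(cities, start):
    """Nearest-neighbour tour: always visit the closest unvisited city next."""
    tour, remaining = [start], set(cities) - {start}
    while remaining:
        nxt = min(remaining, key=lambda c: d(tour[-1], c))
        tour.append(nxt)
        remaining.remove(nxt)
    tour.append(start)  # return to the start city
    return tour, sum(d(a, b) for a, b in zip(tour, tour[1:]))
```

Being greedy, the tour it builds is fast to compute but not guaranteed optimal; a later cheap edge can force an expensive closing leg.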
Adversarial search is an algorithm used in game playing to plan ahead when other agents are planning against you. The minimax algorithm determines the optimal strategy by assuming the opponent will make the best counter-move. It searches the game tree to find the move with the highest minimum payoff. α-β pruning improves on minimax by pruning branches that cannot affect the choice of move. State-of-the-art game programs use techniques like precomputed databases, deep search trees, and pattern knowledge bases to defeat human champions at games like checkers, chess, and Othello.
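The minimax-with-pruning idea can be sketched on a hand-built game tree: internal nodes are lists of children, leaves are payoffs, and a branch is cut as soon as alpha meets beta. The tree values below are the classic three-branch textbook example, whose minimax value is 3.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning over nested lists; leaves are numbers."""
    if not isinstance(node, list):          # leaf: return its payoff
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will never allow this branch
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

TREE = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]  # max over mins: max(3, 2, 2) = 3
```

Pruning never changes the value returned at the root; it only skips subtrees that provably cannot influence the move choice, which is why it coexists with the optimality of plain minimax.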
Iterative deepening search (IDS) is an algorithm that combines the completeness of breadth-first search with the memory efficiency of depth-first search. IDS performs a depth-limited depth-first search, increasing the depth limit by one each iteration, until the goal is found. IDS is guaranteed to find a solution if one exists, uses less memory than breadth-first search by limiting the depth of search at each iteration, and avoids depth-first search's failure mode of descending an infinite path.
Iterative Deepening Search (IDS) combines the optimality of breadth-first search with the low memory usage of depth-first search. IDS performs a depth-limited depth-first search at each level, starting from depth 0 and incrementally increasing the depth limit. This allows it to find optimal solutions while only requiring O(bd) memory like DFS. IDA* extends this idea to A* search by imposing a limit on the estimated total path cost f and iteratively relaxing that limit, reducing memory usage while maintaining completeness and optimality.
This document discusses search algorithms in artificial intelligence. It describes how search is used to find solutions in many AI problems by exploring possible states. The key aspects covered are:
- Search involves exploring a space of possible states through actions or operators to find a goal state.
- Problems are formulated as a search problem defined by the initial state, operators/actions, goal test, and cost function.
- Common uninformed search algorithms like breadth-first search (BFS) and depth-first search (DFS) are described and their properties analyzed.
- BFS is complete but can require exponential time and space, while DFS is not complete but uses less space in most cases.
- Uniform
This document discusses search techniques used in artificial intelligence problems. It defines key concepts related to search spaces such as states, actions, goals, and costs. It provides examples of search problems in domains like the 8-puzzle, robot assembly, and missionaries and cannibals. It analyzes search algorithms like breadth-first search, depth-first search, uniform cost search, iterative deepening search, and informed searches using heuristics. The document compares the properties of different search strategies.
The document discusses different types of problem-solving agents and search algorithms. It describes single-state, sensorless, contingency, and exploration problem types. It also summarizes common uninformed search strategies like breadth-first search, uniform-cost search, depth-first search, depth-limited search, and iterative deepening search and analyzes their properties in terms of completeness, time complexity, space complexity, and optimality. Examples of problems that can be modeled as state space searches are also provided, like the vacuum world, 8-puzzle, and robotic assembly problems.
Artificial intelligence techniques can be used to solve search problems by modeling them as trees. Common search strategies include breadth-first search, depth-first search, uniform cost search, and iterative deepening search. These strategies differ in terms of completeness, optimality, time complexity, and space complexity. More advanced techniques like bidirectional search can improve search efficiency by exploring the problem space from both the initial and goal states simultaneously.
The document discusses uninformed search techniques. It provides examples of representing problems as states and operators that transform states. This includes problems like the water jug problem, 8-puzzle, and 8-queens. It then describes common uninformed search algorithms like breadth-first search, depth-first search, iterative deepening, and uniform cost search. It analyzes the properties of these algorithms like completeness, time complexity, space complexity, and optimality.
Searching is a technique used in AI to solve problems by exploring possible states or solutions. The document discusses various search algorithms used in single-agent pathfinding problems like sliding tile puzzles. It describes brute force search strategies like breadth-first search and depth-first search, and informed search strategies like A* search, greedy best-first search, hill-climbing search and simulated annealing that use heuristic functions. Local search algorithms are also summarized.
The document discusses problem solving through search. It defines intelligent agents, search problems, and search graphs. Search problems are formulated using states, operators, start states, and goal states. Several search algorithms are introduced, including depth-first search and breadth-first search. Examples of search problems discussed include finding a route from Arad to Bucharest in Romania, the vacuum world problem, the 8-queens problem, and the 8-puzzle problem. The document outlines how to represent these problems as state spaces and formulates them in terms of states, actions, initial states, and goal tests. It also introduces tree search algorithms and strategies for searching state spaces, such as uninformed blind search and informed heuristic search.
Problem solving and resolving in the AI domain - SlimAmiri
The document describes search algorithms for problem solving. It defines key concepts like state, node, problem representation, and search strategies. It then explains blind search algorithms like breadth-first search, depth-first search, depth-limited search, and iterative deepening search. Finally, it covers heuristic search algorithms like greedy search and A* search, which use an evaluation function and heuristic to guide the search towards more promising solutions.
This document provides a summary of Lecture 3 on problem-solving by searching. It describes how problem-solving agents can formulate goals and problems, represent the problem as a state space, and find solutions using search algorithms like breadth-first search, uniform-cost search, depth-first search, and iterative deepening search. Examples of search problems discussed include the Romania pathfinding problem, vacuum world, and the 8-puzzle.
The document provides an overview of problem spaces and problem solving through searching techniques used in artificial intelligence. It defines a problem space as a set of states and connections between states to represent a problem. Search strategies for finding solutions include breadth-first search, depth-first search, and heuristic search. Real-world problems discussed that can be solved through searching include route finding, layout problems, task scheduling, and the water jug problem is presented as a toy problem example.
The document discusses problem formulation for solving problems using search algorithms. It provides examples of formulating problems like route finding between cities and solving the 8-puzzle as state space problems. Key components of problem formulation are defined as the initial state, successor function, goal test, and path cost. Real-life applications that can be formulated as search problems are also presented, such as robot navigation, vehicle routing, and assembly sequencing.
This document describes a course on artificial intelligence called CS 188 at UC Berkeley that covers search algorithms. It introduces search problems, which involve a state space, successor function, start state, and goal test. It then describes different uninformed search methods like depth-first search, breadth-first search, and uniform-cost search and compares their properties and performance on different types of problems.
Breadth-first search expands nodes in order of their distance from the root, searching shallow nodes before deeper ones. It uses a queue to store nodes at each level, processing the shallowest unexpanded node first. This continues until the goal is found or the entire search space is explored. Breadth-first search is complete but can require large amounts of memory and time for problems with large state spaces.
The document discusses various search algorithms used in artificial intelligence problem solving including breadth-first search, uniform-cost search, depth-first search, iterative deepening depth-first search, and bidirectional search. It provides examples of route finding problems and defines the components of a search problem. It also analyzes and compares the algorithms based on their completeness, time complexity, space complexity, and ability to find optimal solutions.
Problem solving
Problem formulation
Search Techniques for Artificial Intelligence
Classification of AI Searching Strategies
What is a Search Strategy?
Defining a Search Problem
State Space Graph versus Search Trees
Graph vs. Tree
Problem Solving by Search
This document discusses search algorithms and problem solving through searching. It begins by defining search problems and representing them using graphs with states as nodes and actions as edges. It then covers uninformed search strategies like breadth-first and depth-first search. Informed search strategies use heuristics to guide the search toward more promising areas of the problem space. Examples of single agent pathfinding problems are given like the traveling salesman problem and Rubik's cube.
What is artificial intelligence, Hill Climbing Procedure, State Space Representation and Search, classifying problems in AI, AO* algorithm
This document discusses various problem-solving agents and search strategies. It describes well-defined and ill-defined problem types. Examples provided include the travelling salesman problem and the 8-puzzle. Search strategies covered are uninformed searches like breadth-first, depth-first, uniform cost, iterative deepening, and bidirectional search. Their properties, such as completeness, optimality, and time/space complexity, are evaluated. Different graph and tree search algorithms are also discussed.
The document discusses various search algorithms used in artificial intelligence problem solving. It defines key search terminology like problem space, states, actions, and goals. It then explains different types of search problems and provides examples like the 8-puzzle and vacuuming world problems. Finally, it summarizes uninformed search strategies like breadth-first search, depth-first search, and iterative deepening search as well as informed strategies like greedy best-first search and A* search which use heuristics to guide the search.
Heuristic search techniques use problem-specific knowledge beyond what is given in the problem statement to guide the search for a solution more efficiently. This document discusses various heuristic search algorithms including best-first search, greedy best-first search, A* search, local search techniques like hill-climbing and simulated annealing, and genetic algorithms. Key aspects like admissibility and monotonicity of heuristics that allow algorithms like A* to find optimal solutions are also covered. Examples of applying these techniques to problems like the 8-puzzle and n-queens are provided.
2. Search
Search permeates all of AI. What choices are we searching through?
- Problem solving: action combinations (move 1, then move 3, then move 2, ...)
- Natural language: ways to map words to parts of speech
- Computer vision: ways to map features to an object model
- Machine learning: possible concepts that fit the examples seen so far
- Motion planning: sequences of moves to reach a goal destination
An intelligent agent is trying to find a set or sequence of actions to achieve a goal. This is a goal-based agent.
3. Problem-solving Agent
SimpleProblemSolvingAgent(percept)
    state = UpdateState(state, percept)
    if sequence is empty then
        goal = FormulateGoal(state)
        problem = FormulateProblem(state, goal)
        sequence = Search(problem)
    action = First(sequence)
    sequence = Rest(sequence)
    return action
5. Assumptions Static or dynamic? Fully or partially observable? Environment is fully observable
6. Assumptions Static or dynamic? Fully or partially observable? Discrete or continuous? Environment is discrete
7. Assumptions Static or dynamic? Fully or partially observable? Discrete or continuous? Deterministic or stochastic? Environment is deterministic
8. Assumptions Static or dynamic? Fully or partially observable? Discrete or continuous? Deterministic or stochastic? Episodic or sequential? Environment is sequential
9. Assumptions Static or dynamic? Fully or partially observable? Discrete or continuous? Deterministic or stochastic? Episodic or sequential? Single agent or multiple agent?
11. Search Example Formulate goal: Be in Bucharest. Formulate problem: states are cities, operators drive between pairs of cities Find solution: Find a sequence of cities (e.g., Arad, Sibiu, Fagaras, Bucharest) that leads from the current state to a state meeting the goal condition
12. Search Space Definitions
- State: a description of a possible state of the world; includes all features of the world that are pertinent to the problem.
- Initial state: description of all pertinent aspects of the state in which the agent starts the search.
- Goal test: the conditions the agent is trying to meet (e.g., have $1M).
- Goal state: any state which meets the goal condition (e.g., Thursday, have $1M, live in NYC; or Friday, have $1M, live in Valparaiso).
- Action: function that maps (transitions) from one state to another.
13. Search Space Definitions
- Problem formulation: describe a general problem as a search problem.
- Solution: sequence of actions that transitions the world from the initial state to a goal state.
- Solution cost (additive): sum of the operator costs along the path (alternatives: sum of distances, number of steps, etc.).
- Search: process of looking for a solution. A search algorithm takes a problem as input and returns a solution; we are searching through a space of possible states.
- Execution: process of executing the sequence of actions (the solution).
14. Problem Formulation A search problem is defined by the Initial state (e.g., Arad) Operators (e.g., Arad -> Zerind, Arad -> Sibiu, etc.) Goal test (e.g., at Bucharest) Solution cost (e.g., path cost)
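As a sketch, the four components above can be bundled into a small class. The road fragment below follows the Romania example, with distances from the standard map; the class and method names are illustrative, not from the slides.

```python
# Operators as a map: state -> [(successor, step cost), ...]
ROADS = {
    "Arad": [("Zerind", 75), ("Sibiu", 140), ("Timisoara", 118)],
    "Zerind": [("Arad", 75), ("Oradea", 71)],
    "Sibiu": [("Arad", 140), ("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Sibiu", 99), ("Bucharest", 211)],
    "Rimnicu Vilcea": [("Sibiu", 80), ("Pitesti", 97)],
    "Pitesti": [("Rimnicu Vilcea", 97), ("Bucharest", 101)],
    "Oradea": [("Zerind", 71), ("Sibiu", 151)],
    "Timisoara": [("Arad", 118)],
    "Bucharest": [],
}

class RouteProblem:
    """Bundles initial state, operators, goal test, and path cost."""
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def successors(self, state):      # operators with step costs
        return self.roads.get(state, [])

    def goal_test(self, state):       # goal test
        return state == self.goal

problem = RouteProblem("Arad", "Bucharest", ROADS)
print(problem.goal_test("Bucharest"))  # True
```

Any of the uninformed strategies that follow can then be run against `successors` and `goal_test` without knowing anything about roads or cities.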
15. Example Problems – Eight Puzzle States: tile locations. Initial state: one specific tile configuration. Operators: move the blank tile left, right, up, or down. Goal: tiles are numbered from one to eight around the square. Path cost: 1 per move (solution cost is the same as the number of moves, i.e., the path length). Eight puzzle applet
17. … parts of the object to be assembled. Operators: rotation of joint angles. Goal test: complete assembly. Path cost: time to complete assembly
19. … cannot move a disk if other disks are on top of it. Goal test: disks ordered from largest (at bottom) to smallest on the goal pole. Path cost: 1 per move. Towers of Hanoi applet
20. Example Problems – Rubik’s Cube States: list of colors for each cell on each face Initial state: one specific cube configuration Operators: rotate row x or column y on face z direction a Goal: configuration has only one color on each face Path cost: 1 per move Rubik’s cube applet
21. Example Problems – Eight Queens States: locations of 8 queens on chess board Initial state: one specific queens configuration Operators: move queen x to row y and column z Goal: no queen can attack another (cannot be in same row, column, or diagonal) Path cost: 0 per move Eight queens applet
23. … the boat holds at most m occupants. Goal: all objects on the far river bank. Path cost: 1 per river crossing. Missionaries and cannibals applet
26. … dump the contents of jug x down the drain. Goal: (2, n). Path cost: 1 per fill. Saving the world, Part I. Saving the world, Part II
27. Sample Search Problems Graph coloring Protein folding Game playing Airline travel Proving algebraic equalities Robot motion planning
28. Visualize Search Space as a Tree States are nodes Actions are edges Initial state is root Solution is path from root to goal node Edges sometimes have associated costs States resulting from operator are children
30. Search Function – Uninformed Searches
open = initial state                 // open list holds all generated states
                                     // that have not yet been "expanded"
while open is not empty              // one iteration of the search algorithm
    state = First(open)              // current state is first state on open
    Pop(open)                        // remove current state from open
    if Goal(state)                   // test current state for goal condition
        return "succeed"             // search is complete
    else                             // expand the current state by generating
                                     // its children and reorder the open
                                     // list per the search strategy
        open = QueueingFn(open, Expand(state))
return "fail"
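The pseudocode above can be sketched in Python, with the queueing function passed in as a parameter; the helper names and the toy tree are assumptions for illustration, not from the slides.

```python
from collections import deque

def tree_search(initial, successors, goal_test, queueing_fn):
    """Generic uninformed search: the strategy lives entirely in
    queueing_fn, which merges newly generated children into the open list."""
    open_list = deque([(initial, [initial])])   # (state, path so far)
    while open_list:
        state, path = open_list.popleft()       # current = first state on open
        if goal_test(state):
            return path                         # succeed
        children = [(c, path + [c]) for c in successors(state)]
        open_list = queueing_fn(open_list, children)
    return None                                 # fail

# BFS: children go to the END of the open list (FIFO queue).
bfs_queue = lambda open_list, children: deque(list(open_list) + children)
# DFS: children go to the FRONT of the open list (LIFO stack).
dfs_queue = lambda open_list, children: deque(children + list(open_list))

TREE = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(tree_search("A", lambda s: TREE[s], lambda s: s == "E", bfs_queue))
```

Swapping `bfs_queue` for `dfs_queue` (or a cost-ordered merge) changes the strategy without touching the search loop, which is the point of the slide.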
31. Search Strategies Search strategies differ only in the QueueingFn. Features by which to compare search strategies: completeness (does it always find a solution?), cost of the search (time and space), cost of the solution (is it optimal?), and whether knowledge of the domain is used ("uninformed" vs. "informed" search).
32. Breadth-First Search When a state's children are generated, the QueueingFn adds them to the end of the open list, giving a level-by-level search. The order in which children are inserted on the open list is arbitrary; in a tree, assume children are considered left-to-right unless specified otherwise. The number of children is the "branching factor" b.
33. b = 2 Example trees Search algorithms applet BFS Examples
38. Depth-First Search QueueingFn adds the children to the front of the open list. BFS emulates a FIFO queue; DFS emulates a LIFO stack. Net effect: follow the leftmost path to the bottom, then backtrack; expand the deepest node first.
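The two queueing disciplines can be compared on a small example tree. This sketch (the tree and function names are illustrative) prints the order in which each strategy expands nodes:

```python
from collections import deque

TREE = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}

def traversal(root, children, lifo):
    """Expansion order for BFS (lifo=False) vs DFS (lifo=True)."""
    open_list, order = deque([root]), []
    while open_list:
        node = open_list.popleft()
        order.append(node)
        kids = children[node]
        if lifo:
            # DFS: children to the FRONT (reversed so leftmost is expanded first)
            open_list.extendleft(reversed(kids))
        else:
            # BFS: children to the END, level by level
            open_list.extend(kids)
    return order

print(traversal("A", TREE, lifo=False))  # ['A', 'B', 'C', 'D', 'E', 'F', 'G']
print(traversal("A", TREE, lifo=True))   # ['A', 'B', 'D', 'E', 'C', 'F', 'G']
```

With b = 2 and depth 2, BFS sweeps each level before the next, while DFS follows the leftmost path to the bottom and backtracks, exactly as the slides describe.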
50. Avoiding Repeated States Can we do it? Do not return to the parent or grandparent state (in the 8-puzzle, do not move up right after moving down). Do not create solution paths with cycles. Do not generate repeated states (this requires storing and checking a potentially large number of states).
51. Maze Example States are cells in a maze Move N, E, S, or W What would BFS do (expand E, then N, W, S)? What would DFS do? What if order changed to N, E, S, W and loops are prevented?
52. Uniform Cost Search (Branch & Bound) QueueingFn is SortByCostSoFar. The cost from the root to the current node n is g(n), the sum of operator costs along the path. The first goal found is the least-cost solution. Space and time can be exponential, because large subtrees with inexpensive steps may be explored before useful paths with costly steps. If all step costs are equal, time and space are O(b^d); otherwise, complexity is related to the cost of the optimal solution.
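The SortByCostSoFar queueing function is usually implemented with a priority queue. A minimal sketch, on a hypothetical graph where the cheapest path is the longer one:

```python
import heapq

GRAPH = {  # illustrative weighted graph: long cheap path vs short costly path
    "S": [("A", 1), ("G", 12)],
    "A": [("B", 1)],
    "B": [("G", 1)],
    "G": [],
}

def uniform_cost_search(start, goal, graph):
    """Open list is a priority queue ordered by g(n), the cost so far."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)  # cheapest path so far
        if state == goal:
            return g, path          # first goal popped is the least-cost one
        if state in seen:
            continue
        seen.add(state)
        for child, step in graph[state]:
            heapq.heappush(frontier, (g + step, child, path + [child]))
    return None

print(uniform_cost_search("S", "G", GRAPH))  # (3, ['S', 'A', 'B', 'G'])
```

Note that the direct edge S to G (cost 12) sits in the frontier the whole time; UCS correctly delays it behind the inexpensive steps, which is also why such subtrees can blow up time and space.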
72. Iterative Deepening Search DFS with a depth bound. QueueingFn enqueues at the front, as with DFS, but Expand(state) only returns children such that depth(child) <= threshold. This prevents the search from going down an infinite path. The first threshold is 1; if no solution is found, increment the threshold and repeat.
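The threshold loop can be sketched as follows; the recursive depth-limited helper, the depth cap, and the toy tree are assumptions for illustration (here the first threshold is 0 rather than 1, so the root itself is tested before any expansion).

```python
def depth_limited(state, goal, children, limit, path):
    """DFS that refuses to expand below the depth limit."""
    if state == goal:
        return path
    if limit == 0:
        return None                      # cut off at the threshold
    for c in children[state]:
        found = depth_limited(c, goal, children, limit - 1, path + [c])
        if found:
            return found
    return None

def iterative_deepening(start, goal, children, max_depth=50):
    """Repeated depth-limited DFS with thresholds 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        result = depth_limited(start, goal, children, limit, [start])
        if result:
            return result
    return None

TREE = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(iterative_deepening("A", "E", TREE))  # ['A', 'C', 'E']
```

The shallow levels are re-searched on every iteration, but as the analysis slides note, that redundant work is small compared to the last iteration.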
81. Bidirectional Search Search forward from the initial state to the goal AND backward from the goal state to the initial state; this can prune many options. Considerations: which goal state(s) to use, how to determine when the two searches overlap, and which search to use in each direction. Here, two BFS searches. Time and space are O(b^(d/2)).
82. Informed Searches Best-first search, hill climbing, beam search, A*, IDA*, RBFS, SMA*. New terms: heuristics, optimal solution, informedness, hill climbing problems, admissibility. New parameters: g(n) = cost so far from the initial state to state n; h(n) = estimated cost (distance) from state n to the closest goal. h(n) is our heuristic: in robot path planning, h(n) could be Euclidean distance; in the 8-puzzle, h(n) could be the number of tiles out of place. Search algorithms that use h(n) to guide the search are heuristic search algorithms.
83. Best-First Search QueueingFn is sort-by-h. Best-first search is only as good as its heuristic. Example heuristic for the 8-puzzle: Manhattan distance.
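The Manhattan-distance heuristic for the 8-puzzle can be sketched as below, assuming states are 9-tuples read row by row with 0 standing for the blank (a representation choice, not from the slides):

```python
def manhattan(state, goal):
    """h(n) for the 8-puzzle: for each tile (the blank is ignored), add the
    horizontal plus vertical distance from its current cell to its goal cell."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue                      # blank does not count
        gidx = goal.index(tile)
        total += abs(idx // 3 - gidx // 3) + abs(idx % 3 - gidx % 3)
    return total

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)
START = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # tiles 7 and 8 each one cell left of home
print(manhattan(START, GOAL))  # 2
```

Manhattan distance never overestimates the number of moves still needed, since each move shifts one tile by one cell, so it is admissible and dominates the simpler tiles-out-of-place count.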
94. Hill Climbing (Greedy Search) QueueingFn is sort-by-h, but only the lowest-h state is kept on the open list. Best-first search is tentative; hill climbing is irrevocable. Features: much faster and less memory, but dependent upon h(n); with a bad h(n) it may prune away all goals. Not complete.
97. Hill Climbing Issues Also referred to as gradient descent. The foothill problem (getting stuck at a local maximum or minimum) can be addressed with a random walk or by taking more steps. Other problems: ridges and plateaus.
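The foothill problem shows up in even a minimal sketch of irrevocable hill climbing; the 1-D landscape below is a hypothetical example with a local minimum at x = 2 and the global minimum at x = 8.

```python
def hill_climb(start, neighbors, h):
    """Irrevocable greedy search: keep only the best child and stop when no
    neighbor improves h. Can get stuck in a local optimum."""
    current = start
    while True:
        best = min(neighbors(current), key=h, default=None)
        if best is None or h(best) >= h(current):
            return current            # local (not necessarily global) optimum
        current = best

# Hypothetical landscape: h values over states 0..9.
H = {0: 9, 1: 5, 2: 3, 3: 6, 4: 7, 5: 4, 6: 2, 7: 1, 8: 0, 9: 5}
nbrs = lambda x: [n for n in (x - 1, x + 1) if n in H]
print(hill_climb(0, nbrs, H.get))  # 2  (stuck in the foothill; global min is 8)
```

A random-walk restart from a few different start states, as the slide suggests, would eventually begin on the far side of the ridge at x = 3 and reach the global minimum.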
99. Beam Search QueueingFn is sort-by-h, but only the best (lowest-h) n nodes are kept on the open list; n is the "beam width". With n = 1, beam search is hill climbing; with n = infinity, it is best-first search.
110. A* QueueingFn is sort-by-f, where f(n) = g(n) + h(n). Note that UCS and best-first search each improve search in one way: UCS keeps the solution cost low, while best-first helps find a solution quickly. A* combines the two approaches.
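A sketch of sort-by-f search on a hypothetical 4x4 grid, using Manhattan distance as h(n); the grid, move generator, and bookkeeping names are assumptions for illustration.

```python
import heapq

def astar(start, goal, successors, h):
    """Best-first on f(n) = g(n) + h(n): UCS's cost-so-far plus a heuristic
    estimate of the cost remaining."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path
        if state in best_g and best_g[state] <= g:
            continue                              # already reached more cheaply
        best_g[state] = g
        for child, step in successors(state):
            heapq.heappush(frontier, (g + step + h(child), g + step,
                                      child, path + [child]))
    return None

# Hypothetical grid world: move N/S/E/W at cost 1 inside a 4x4 board.
GOAL = (3, 3)
h = lambda p: abs(p[0] - GOAL[0]) + abs(p[1] - GOAL[1])
def moves(p):
    x, y = p
    return [((nx, ny), 1)
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= nx <= 3 and 0 <= ny <= 3]

cost, path = astar((0, 0), GOAL, moves, h)
print(cost)  # 6
```

Because Manhattan distance is admissible here, the returned cost of 6 is optimal; passing `h = lambda p: 0` degrades the same code to uniform cost search.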
111. Power of f If the heuristic function is wrong, it either overestimates (guesses too high) or underestimates (guesses too low). Overestimating is worse than underestimating. A* returns an optimal solution if h(n) is admissible: a heuristic function is admissible if it never overestimates the true cost to the nearest goal. If a search finds an optimal solution using an admissible heuristic, the search itself is called admissible.
112. Overestimating [Figure: search tree with heuristic values A(15), B(6), C(20), D(10), E(20), F(0), G(12), H(20), I(0); edge costs 3, 3, 2 out of A and 15, 6, 20, 10, 5 below.] Solution costs: ABF = 9, ADI = 8. Open list: A(15), B(9), F(9). The search returns ABF and misses the optimal solution ADI, because the overestimating heuristic made the optimal subtree look too expensive.
122. Optimality of A* Suppose a suboptimal goal G2 is on the open list, and let n be an unexpanded node on the smallest-cost path to the optimal goal G1. Then
f(G2) = g(G2)   (since h(G2) = 0)
      > g(G1)   (since G2 is suboptimal)
     >= f(n)    (since h is admissible)
Since f(G2) > f(n), A* will never select G2 for expansion.
124. IDA* A series of depth-first searches, like iterative deepening search, except that an A* cost threshold is used instead of a depth threshold, which ensures an optimal solution. QueueingFn enqueues at the front if f(child) <= threshold. The threshold is h(root) on the first iteration; on subsequent iterations it is f(min_child), where min_child is the cut-off child with the minimum f value. Each increase always includes at least one new node, and the search never looks beyond the optimal-cost solution.
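The threshold-raising loop can be sketched as follows; the tiny weighted graph and heuristic table are illustrative assumptions, and the recursion plays the role of the front-of-queue DFS.

```python
def ida_star(start, goal, successors, h):
    """Iterative deepening on f = g + h: the threshold starts at h(root) and
    is raised to the smallest f value that exceeded it on the last pass."""
    def dfs(state, g, threshold, path):
        f = g + h(state)
        if f > threshold:
            return None, f                  # cut off; report f for next bound
        if state == goal:
            return path, f
        next_threshold = float("inf")
        for child, step in successors(state):
            found, t = dfs(child, g + step, threshold, path + [child])
            if found:
                return found, t
            next_threshold = min(next_threshold, t)
        return None, next_threshold

    threshold = h(start)
    while True:
        found, t = dfs(start, 0, threshold, [start])
        if found:
            return found
        if t == float("inf"):
            return None                     # no solution exists
        threshold = t

# Hypothetical graph and admissible heuristic table.
GRAPH = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
H = {"S": 4, "A": 4, "B": 1, "G": 0}
print(ida_star("S", "G", GRAPH.get, H.get))  # ['S', 'B', 'G']
```

On this graph the first pass uses threshold h(S) = 4 and cuts off both children at f = 5, so the second pass runs with threshold 5 and finds the optimal path S-B-G of cost 5, never expanding beyond the optimal-cost contour.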
153. Analysis Some redundant search Small amount compared to work done on last iteration Dangerous if h(n) is continuous-valued or if values are very close If threshold = 21.1 and the next value is 21.2, each iteration may only include 1 new node Time complexity is O(b^m) Space complexity is O(m)
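The threshold-driven series of depth-first searches can be sketched as follows. The graph interface (`neighbors(s)` yields `(state, cost)` pairs) and the example graph in the test are illustrative assumptions:

```python
def ida_star(start, goal, neighbors, h):
    """IDA* sketch: repeated depth-first searches bounded by an
    f-cost threshold.  The threshold starts at h(root) and grows to
    the smallest f value that exceeded the previous cut-off, so each
    iteration includes at least one new node."""
    def dfs(s, g, threshold, path):
        f = g + h(s)
        if f > threshold:
            return None, f            # report the cut-off f value
        if s == goal:
            return path, f
        minimum = float('inf')
        for nxt, cost in neighbors(s):
            if nxt in path:           # avoid cycles along current path
                continue
            found, t = dfs(nxt, g + cost, threshold, [*path, nxt])
            if found is not None:
                return found, t
            minimum = min(minimum, t)
        return None, minimum

    threshold = h(start)
    while True:
        found, t = dfs(start, 0, threshold, [start])
        if found is not None:
            return found
        if t == float('inf'):
            return None               # no solution exists
        threshold = t                 # next iteration's cut-off
```

Only the current depth-first path is stored, which is where the linear space bound comes from.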
155. RBFS Recursive Best First Search Linear space variant of A* Perform A* search but discard subtrees when perform recursion Keep track of alternative (next best) subtree Expand subtree until f value greater than bound Update f values before (from parent) and after (from descendant) recursive call
156. Algorithm
// Input: current node n and f-cost limit
// Returns: goal node or failure, plus updated f limit
RBFS(n, limit)
  if Goal(n) return n
  children = Expand(n)
  if children is empty return failure, infinity
  for each c in children
    f[c] = max(g(c) + h(c), f[n])   // update f[c] based on parent
  repeat
    best = child with smallest f value
    if f[best] > limit return failure, f[best]
    alternative = second-lowest f value among children (infinity if only one child)
    result, f[best] = RBFS(best, min(limit, alternative))   // update f[best] based on descendant
    if result != failure return result
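A runnable Python rendering of the pseudocode above; the graph interface (`neighbors(s)` yields `(state, cost)` pairs) and all names are assumptions for illustration:

```python
import math

def rbfs(start, goal, neighbors, h):
    """RBFS sketch: recurse into the best child with an f-limit equal
    to the best alternative; on the way back up, store the child's
    updated f value so the subtree can be re-expanded later."""
    def search(state, g, f_stored, limit):
        if state == goal:
            return [state], g
        succ = []
        for nxt, cost in neighbors(state):
            # Update the child's f from the parent's stored value.
            f = max(g + cost + h(nxt), f_stored)
            succ.append([f, g + cost, nxt])
        if not succ:
            return None, math.inf
        while True:
            succ.sort(key=lambda x: x[0])
            best = succ[0]
            if best[0] > limit:
                return None, best[0]          # fail, report best f
            alternative = succ[1][0] if len(succ) > 1 else math.inf
            # Update best's f from its descendants after the recursion.
            result, best[0] = search(best[2], best[1], best[0],
                                     min(limit, alternative))
            if result is not None:
                return [state, *result], best[0]

    result, _ = search(start, 0, h(start), math.inf)
    return result
```

Only the current recursion path and its sibling f values are kept, giving the linear-space behavior described above.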
163. Analysis Optimal if h(n) is admissible Space is O(bm) Features Potentially exponential time in cost of solution More efficient than IDA* Keeps more information than IDA* and benefits from storing it
164. SMA* Simplified Memory-Bounded A* Search Perform A* search When memory is full Discard worst leaf (largest f(n) value) Back up the value of the discarded node to its parent Optimal if the solution fits in memory
165. Example Let MaxNodes = 3 Initially B and G are added to the open list, then we hit the memory limit B has the larger f value, so discard it but save f(B)=15 at its parent A Add H, but f(H)=18; H is not a goal and we cannot go deeper, so set f(H)=infinity and save it at G Generate the second child I with f(I)=24 We have now seen all children of G, so reset f(G)=24 Regenerate B and its child C; C is not a goal, so f(C) is reset to infinity Generate the second child D with f(D)=24, backing the value up to its ancestors D is a goal node, so the search terminates
166. Heuristic Functions Q: Given that we will only use heuristic functions that do not overestimate, what type of heuristic functions (among these) perform best? A: Those that produce higher h(n) values.
167. Reasons Higher h value means closer to actual distance Any node n on open list with f(n) < f*(goal) will be selected for expansion by A* This means if a lot of nodes have a low underestimate (lower than actual optimum cost) All of them will be expanded Results in increased search time and space
168. Informedness If h1 and h2 are both admissible and, for all x, h1(x) > h2(x), then h1 “dominates” h2 Can also say h1 is “more informed” than h2 Example: h2(x) = Euclidean distance; h2 dominates h1
169. Effect on Search Cost If h2(n) >= h1(n) for all n (both are admissible) then h2 dominates h1 and is better for search Typical search costs d=14, IDS expands 3,473,941 nodes A* with h1 expands 539 nodes A* with h2 expands 113 nodes d=24, IDS expands ~54,000,000,000 nodes A* with h1 expands 39,135 nodes A* with h2 expands 1,641 nodes
170. Which of these heuristics are admissible? Which are more informed? h1(n) = #tiles in wrong position h2(n) = Sum of Manhattan distances between each tile and the goal location for that tile h3(n) = 0 h4(n) = 1 h5(n) = min(2, h*[n]) h6(n) = Manhattan distance for blank tile h7(n) = max(2, h*[n])
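As a concrete check, h1 (misplaced tiles) and h2 (summed Manhattan distance) from the list above can be computed for a sample 8-puzzle state. The goal layout, the row-major tuple encoding, and the sample state below are assumptions chosen for illustration:

```python
# States are 9-tuples in row-major order; 0 denotes the blank.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # assumed goal layout

def h1(state):
    """Misplaced-tile count (blank excluded). Admissible: every
    misplaced tile needs at least one move."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Sum of Manhattan distances of each tile from its goal square.
    Admissible, and h2(n) >= h1(n) for every state, so h2 dominates h1."""
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        gi = GOAL.index(t)
        total += abs(i // 3 - gi // 3) + abs(i % 3 - gi % 3)
    return total
```

For the sample state (7, 2, 4, 5, 0, 6, 8, 3, 1), h1 gives 6 and h2 gives 14, illustrating why the more informed h2 prunes more of the search.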
171. Generating Heuristic Functions Generate heuristic for simpler (relaxed) problem Relaxed problem has fewer restrictions Eight puzzle where multiple tiles can be in the same spot Cost of optimal solution to relaxed problem is an admissible heuristic for the original problem Learn heuristic from experience
173. Iterative Improvement Algorithms For many optimization problems, solution path is irrelevant Just want to reach goal state State space / search space Set of “complete” configurations Want to find optimal configuration (or at least one that satisfies goal constraints) For these cases, use iterative improvement algorithm Keep a single current state Try to improve it Constant memory
175. Example N-queens Put n queens on an n × n board with no two queens on the same row, column, or diagonal Operator: Move queen to reduce #conflicts
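The "move a queen to reduce #conflicts" operator can be sketched as a hill-climbing loop. This is an illustrative min-conflicts-style variant, not the slides' exact algorithm; the encoding (one queen per row, `cols[r]` = column of the queen in row r) and all names are assumptions:

```python
import random

def conflicts(cols, row, col):
    """Number of other queens attacking square (row, col),
    given one queen per row at columns cols[r]."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def hill_climb_queens(n, max_steps=10000, seed=0):
    """Hill climbing for n-queens: repeatedly pick a conflicted row
    and move its queen to the column minimizing its conflicts.
    May fail on a local minimum, in which case None is returned."""
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols                    # goal: no attacking pairs
        row = rng.choice(conflicted)
        # Greedy step: least-conflicted column for this queen.
        cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
    return None
```

Note the operator only ever improves (or keeps) the chosen queen's conflict count, which is exactly the irrevocable, constant-memory behavior of iterative improvement.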
176. Hill Climbing (gradient ascent/descent) “Like climbing Mount Everest in thick fog with amnesia”
178. Local Beam Search Keep k states instead of 1 Choose top k of all successors Problem Many times all k states end up on same local hill Choose k successors RANDOMLY Bias toward good ones Similar to natural selection
179. Simulated Annealing Pure hill climbing is not complete, but pure random search is inefficient. Simulated annealing offers a compromise. Inspired by annealing process of gradually cooling a liquid until it changes to a low-energy state. Very similar to hill climbing, except include a user-defined temperature schedule. When temperature is “high”, allow some random moves. When temperature “cools”, reduce probability of random move. If T is decreased slowly enough, guaranteed to reach best state.
180. Algorithm
function SimulatedAnnealing(problem, schedule) // returns solution state
  current = MakeNode(Initial-State(problem))
  for t = 1 to infinity
    T = schedule[t]
    if T = 0 return current
    next = randomly-selected child of current
    ΔE = Value[next] − Value[current]
    if ΔE > 0 current = next                     // accept if better than current state
    else current = next with probability e^(ΔE/T)
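The pseudocode above translates directly to Python. This is a minimal sketch: the schedule interface (`schedule(t)` returns the temperature at step t) and the parameter names are assumptions, and `value` is maximized as in the slide:

```python
import math
import random

def simulated_annealing(initial, value, neighbor, schedule, seed=0):
    """Simulated annealing sketch: always accept improving moves;
    accept a worsening move with probability e^(delta/T), where
    delta = value(next) - value(current).  Returns the current
    state once the temperature schedule reaches zero."""
    rng = random.Random(seed)
    current = initial
    for t in range(1, 10**6):
        T = schedule(t)
        if T <= 0:
            return current
        nxt = neighbor(current, rng)
        delta = value(nxt) - value(current)
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = nxt
    return current
```

While T is high, e^(ΔE/T) is close to 1 even for bad moves, so the search wanders; as T cools, the walk collapses toward pure hill climbing.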
181. Genetic Algorithms What is a Genetic Algorithm (GA)? An adaptation procedure based on the mechanics of natural genetics and natural selection GAs have 2 essential components Survival of the fittest Recombination Representation Chromosome = string Gene = single bit or single subsequence in string, represents 1 attribute
183. Humans DNA is made up of 4 nucleic acids (a 2-bit code) 46 chromosomes in humans, containing ~3 billion DNA base pairs, i.e. 4^(3 billion) combinations Can random search find humans? Assume only 0.1% of the genome must be discovered: 3×10^6 nucleotides Assume a very short generation: 1 generation/second That yields 3.2×10^7 individuals per year, but there are 4^(3×10^6) ≈ 10^(1.8×10^6) alternatives So roughly 10^(1.8×10^6) years to generate a human randomly Self-reproduction, self-repair, and adaptability are the rule in natural systems; they hardly exist in the artificial world Finding and adopting nature’s approach to computational design should unlock many doors in science and engineering
184. GAs Exhibit Search Each attempt a GA makes towards a solution is called a chromosome A sequence of information that can be interpreted as a possible solution Typically, a chromosome is represented as sequence of binary digits Each digit is a gene A GA maintains a collection or population of chromosomes Each chromosome in the population represents a different guess at the solution
185. The GA Procedure Initialize a population (of solution guesses) Do (once for each generation) Evaluate each chromosome in the population using a fitness function Apply GA operators to population to create a new population Finish when solution is reached or number of generations has reached an allowable maximum.
187. Reproduction Select individuals x according to their fitness values f(x) Like beam search Fittest individuals survive (and possibly mate) for next generation
188. Crossover Select two parents Select cross site Cut and splice pieces of one parent to those of the other Example with cross site after bit 2: parents 11111 and 00000 produce offspring 11000 and 00111
189. Mutation With small probability, randomly alter 1 bit Minor operator An insurance policy against lost bits Pushes out of local minima Population: 110000, 101000, 100100, 010000 Goal: 011111 The last two bits are 0 in every member, so crossover alone can never set them; mutation is needed to find the goal
190. Example Solution = 0 0 1 0 1 0 Fitness(x) = #digits that match solution A) 0 1 0 1 0 1 Score: 1 B) 1 1 1 1 0 1 Score: 1 C) 0 1 1 0 1 1 Score: 4 D) 1 0 1 1 0 0 Score: 3 Recombine top two twice Note: 64 possible combinations
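The reproduction, crossover, and mutation operators from the preceding slides can be combined into a minimal GA loop on this 6-bit example. The population size, mutation rate, selection scheme (keep the fittest half), and function names are illustrative assumptions, not tuned values:

```python
import random

TARGET = "001010"                       # solution from the example above

def fitness(chrom):
    """Number of bits matching the target solution."""
    return sum(c == t for c, t in zip(chrom, TARGET))

def crossover(p1, p2, site):
    """One-point crossover: cut both parents at `site` and splice."""
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

def mutate(chrom, rate, rng):
    """Flip each bit independently with small probability `rate`."""
    return "".join(c if rng.random() > rate else "10"[int(c)]
                   for c in chrom)

def genetic_algorithm(pop_size=8, generations=200, rate=0.05, seed=1):
    """Minimal GA: evaluate fitness, keep the fittest half (survival
    of the fittest), refill with mutated crossover offspring."""
    rng = random.Random(seed)
    pop = ["".join(rng.choice("01") for _ in range(6))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return pop[0]               # exact match found
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            c1, c2 = crossover(p1, p2, rng.randrange(1, 6))
            children += [mutate(c1, rate, rng), mutate(c2, rate, rng)]
        pop = parents + children[:pop_size - len(parents)]
    return max(pop, key=fitness)
```

Each generation follows the GA procedure slide: evaluate every chromosome with the fitness function, then apply the operators to create the next population.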
192. Issues How select original population? How handle non-binary solution types? What should be the size of the population? What is the optimal mutation rate? How are mates picked for crossover? Can any chromosome appear more than once in a population? When should the GA halt? Local minima? Parallel algorithms?
201. Diversity Measure Fitness ignores diversity As a result, populations tend to become uniform Rank-space method Sort population by sum of fitness rank and diversity rank Diversity rank is the result of sorting by the function 1/d^2