- A state space consists of nodes representing problem states and arcs representing moves between states. It can be represented as a tree or graph.
- To solve a problem using search, it must first be represented as a state space with an initial state, goal state(s), and legal operators defining state transitions.
- Different search algorithms like depth-first, breadth-first, A*, and best-first are then applied to traverse the state space to find a solution path from initial to goal state.
- Heuristic functions can be used to guide search by estimating state proximity to the goal, improving efficiency over uninformed searches.
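As a concrete illustration of a heuristic guiding search, here is a minimal sketch of the classic Manhattan-distance heuristic for the 8-puzzle. The flat-tuple board encoding and the function name are illustrative choices, not from the source:

```python
# Illustrative sketch: Manhattan-distance heuristic for the 8-puzzle.
# A state is a tuple of 9 tiles read row by row; 0 marks the blank.
def manhattan(state, goal):
    """Sum of horizontal + vertical displacements of tiles 1..8 (blank ignored)."""
    dist = 0
    for tile in range(1, 9):
        i, g = state.index(tile), goal.index(tile)
        dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # blank slid two places left
print(manhattan(start, goal))  # 2
```

Because the heuristic never overestimates the number of moves remaining, it is admissible and can be used with A* to find optimal solutions.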
The document discusses search strategies in artificial intelligence. It defines key terms like search space, start state, goal test, and search tree. It describes properties of search algorithms like completeness, optimality, time complexity, and space complexity. It differentiates between uninformed searches, which do not use domain knowledge, like breadth-first search and depth-first search, and informed searches, which use heuristics to guide the search more efficiently, like greedy search and A* search. The document outlines the differences between informed and uninformed searches.
The document discusses procedural versus declarative knowledge representation and how logic programming languages like Prolog allow knowledge to be represented declaratively through logical rules. It also covers topics like forward and backward reasoning, matching rules to facts in working memory, and using control knowledge to guide the problem solving process. Logic programming represents knowledge through Horn clauses and uses backward chaining inference to attempt to prove goals.
Conceptual Dependency (CD) is a theory developed by Schank in the 1970s to represent the meaning of natural language sentences using conceptual primitives rather than words. CD representations are built using primitives that capture the intended meaning, are language independent, and help draw inferences. There are different primitive actions, conceptual categories, and rules to build CD representations from sentences. While CD provides a general model for knowledge representation, it can be difficult to construct original sentences from representations and represent complex actions without many primitives.
This document discusses problem solving agents in artificial intelligence. It explains that problem solving agents focus on satisfying goals by formulating the goal based on the current situation, then formulating the problem by determining the actions needed to achieve the goal. Key components of problem formulation include the initial state, possible actions, transition model describing how actions change the state, a goal test, and path cost function. Two examples of well-defined problems are given: the 8-puzzle problem and the 8-queens problem.
Hill climbing is a heuristic search algorithm for optimization problems. It starts from an initial solution and iteratively moves to a neighboring solution that improves the value of an objective function, stopping when no neighbor improves on the current solution. However, hill climbing may not find the global optimum and can get stuck in local optima. Variants include simple hill climbing, steepest ascent hill climbing, and stochastic hill climbing.
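The loop described above can be sketched as steepest-ascent hill climbing over a toy integer objective. The function and the neighborhood (step by one) are illustrative assumptions:

```python
# Illustrative sketch: steepest-ascent hill climbing.
def hill_climb(f, x, neighbors, max_steps=1000):
    """Repeatedly move to the best neighbor while it improves f; stop at a local optimum."""
    for _ in range(max_steps):
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x  # no neighbor improves: local optimum
        x = best
    return x

# Maximize f(x) = -(x - 3)^2 over the integers, stepping by 1.
f = lambda x: -(x - 3) ** 2
peak = hill_climb(f, 10, lambda x: [x - 1, x + 1])
print(peak)  # 3
```

On this single-peaked objective the climb reaches the global maximum; on a multi-peaked one it would stop at whichever local peak the start point leads to.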
The document discusses state space search problems and techniques for solving them. It defines state space search as a process of considering successive configurations or states of a problem instance to find a goal state. Various search techniques like breadth-first search, depth-first search, and heuristic search are described. It also discusses problem characteristics that help determine the most appropriate search method, such as whether a problem can be decomposed or solution steps ignored/undone. Examples of search problems like the 8-puzzle, chess, and water jug problems are provided to illustrate state space formulation and solutions.
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
UNIT - I: PROBLEM SOLVING AGENTS and EXAMPLES
Replicate human intelligence
Solve Knowledge-intensive tasks
An intelligent connection of perception and action
Building a machine which can perform tasks that require human intelligence, such as:
Proving a theorem
Playing chess
Planning a surgical operation
Driving a car in traffic
Creating a system which can exhibit intelligent behavior, learn new things by itself, demonstrate, explain, and advise its users.
What Comprises Artificial Intelligence?
Artificial Intelligence is not just a subfield of computer science; it is a broad area that draws on many other disciplines. To build AI, we must first understand what intelligence is composed of: intelligence is an intangible capacity of the brain combining reasoning, learning, problem solving, perception, language understanding, and more.
To achieve these capabilities in a machine or software, Artificial Intelligence draws on the following disciplines:
Mathematics
Biology
Psychology
Sociology
Computer Science
Neuroscience
Statistics
Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
High accuracy with fewer errors: AI systems tend to make fewer errors and achieve high accuracy because they take decisions based on prior experience and information.
High speed: AI systems can make decisions very quickly; this is why an AI system can beat a champion at chess.
High reliability: AI machines are highly reliable and can perform the same action many times with consistent accuracy.
Useful in risky areas: AI machines can help in situations where employing a human would be dangerous, such as defusing a bomb or exploring the ocean floor.
Digital assistance: AI can provide digital assistants to users; for example, various e-commerce websites use AI to show products matching a customer's requirements.
Useful as a public utility: AI can serve the public in many ways, such as self-driving cars that make journeys safer and hassle-free, facial recognition for security, and natural language processing for communicating with humans in their own language.
The document discusses AND/OR graphs, which are a type of graph or tree used to represent solutions to problems that can be decomposed into smaller subproblems. AND/OR graphs have nodes that represent goals or states, with successors labeled as either AND or OR branches. AND branches signify subgoals that must all be achieved to satisfy the parent goal, while OR branches indicate alternative subgoals that could achieve the parent goal. The graph helps model how decomposed subproblems relate and their solutions combine to solve the overall problem.
Lecture 17: Iterative Deepening A* Algorithm
Iterative Deepening A* (IDA*) is an extension of A* search that combines the benefits of depth-first and breadth-first search. It performs depth-first search with an iterative deepening limit on the cost function f(n), increasing the limit if the goal is not found. This allows IDA* to be optimal and complete like breadth-first search while having modest memory requirements like depth-first search. The algorithm starts with an initial f-limit of the start node's f-value, pruning any nodes where f exceeds the limit. If the goal is not found, the limit is increased to the minimum f among pruned nodes for the next iteration.
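The f-bounded depth-first loop described above can be sketched as follows. The graph, the zero heuristic, and the helper names are illustrative assumptions, not from the source:

```python
# Illustrative sketch of IDA*: depth-first search bounded by f = g + h,
# raising the bound to the minimum pruned f after each failed iteration.
def ida_star(start, goal, neighbors, h, cost=lambda a, b: 1):
    bound = h(start)  # initial f-limit is the start node's f-value

    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f  # prune; report f so the caller can pick the next bound
        if node == goal:
            return path
        nxt = float("inf")
        for n in neighbors(node):
            if n in path:  # avoid cycles on this path
                continue
            t = dfs(n, g + cost(node, n), bound, path + [n])
            if isinstance(t, list):
                return t
            nxt = min(nxt, t)
        return nxt

    while True:
        result = dfs(start, 0, bound, [start])
        if isinstance(result, list):
            return result
        if result == float("inf"):
            return None  # no goal reachable
        bound = result  # minimum f among pruned nodes

# Toy graph with unit-cost edges and h = 0 (degenerates to cost-bounded IDDFS).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(ida_star("A", "D", lambda n: graph[n], lambda n: 0))  # ['A', 'B', 'D']
```

Each iteration repeats the shallow work of earlier ones, but because the frontier grows geometrically in typical search spaces, the repeated work stays a small fraction of the total.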
Search techniques in AI. Uninformed: Breadth First Search and Depth First Search. Informed search strategies: A* and Best First Search. Constraint Satisfaction Problem: cryptarithmetic.
Forward Chaining and Backward Chaining
This document provides an overview of forward chaining and backward chaining in artificial intelligence. Forward chaining applies inference rules to existing data to derive new facts until reaching a goal. It proceeds from known facts to a conclusion. Backward chaining starts from a goal and moves backward to determine what facts must be true to satisfy the goal. It uses modus ponens inference to break the goal into subgoals until reaching initial conditions. Examples are given to illustrate the difference between the two approaches.
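The forward-chaining loop (fire every rule whose premises hold until no new facts appear) can be sketched over propositional Horn rules. The rule set and fact names here are illustrative, not from the source:

```python
# Illustrative sketch: forward chaining over propositional Horn rules.
# Each rule pairs a set of premises with a single conclusion.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises all hold, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fur", "gives_milk", "eats_meat"}, rules)
print("carnivore" in derived)  # True
```

Backward chaining would run the same rules in reverse: to prove "carnivore" it would recursively try to establish "mammal" and "eats_meat" as subgoals.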
Knowledge representation techniques face several issues including representing important attributes of objects, relationships between attributes, choosing the level of detail in representations, depicting sets of multiple objects, and determining appropriate structures as needed.
Local search algorithms operate by examining the current node and its neighbors. They are suitable for problems where the solution is the goal state itself rather than the path to get there. Hill-climbing and simulated annealing are examples of local search algorithms. Hill-climbing continuously moves to higher value neighbors until a local peak is reached. Simulated annealing also examines random moves and can accept moves to worse states based on probability. Both aim to find an optimal or near-optimal solution but can get stuck in local optima.
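The acceptance rule that distinguishes simulated annealing from hill climbing — sometimes accepting a worse move, with a probability that shrinks as the temperature falls — can be sketched as below. The objective function, cooling schedule, and parameter values are illustrative assumptions:

```python
import math
import random

# Illustrative sketch: simulated annealing, maximizing f.
def simulated_annealing(f, x, neighbor, temp0=10.0, cooling=0.95, steps=500):
    """Accept worse candidates with probability exp(delta / T); T decays each step."""
    t = temp0
    best = x
    for _ in range(steps):
        cand = neighbor(x)
        delta = f(cand) - f(x)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand  # always accept improvements; sometimes accept worse moves
        if f(x) > f(best):
            best = x
        t *= cooling
    return best

# Bumpy objective: local ripples from sin(x), global peak near x = 10.
f = lambda x: -0.1 * (x - 10) ** 2 + math.sin(x)
random.seed(0)
result = simulated_annealing(f, 0.0, lambda x: x + random.uniform(-1, 1))
```

Early on, the high temperature lets the search hop out of local peaks; as the temperature decays, behavior converges toward plain hill climbing.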
The document discusses solving the 8 queens problem using backtracking. It begins by explaining backtracking as an algorithm that builds partial candidates for solutions incrementally and abandons any partial candidate that cannot be completed to a valid solution. It then provides more details on the 8 queens problem itself - the goal is to place 8 queens on a chessboard so that no two queens attack each other. Backtracking is well-suited for solving this problem by attempting to place queens one by one and backtracking when an invalid placement is found.
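The column-by-column placement with backtracking described above can be sketched as follows; the representation (one row index per column) is a standard choice, and the helper names are illustrative:

```python
# Illustrative sketch: 8-queens by backtracking, one queen per column.
def solve_queens(n=8):
    def safe(rows, r):
        """Row r in the next column conflicts with no queen already placed."""
        c = len(rows)
        return all(r != pr and abs(r - pr) != c - pc
                   for pc, pr in enumerate(rows))

    def place(rows):
        if len(rows) == n:
            return rows
        for r in range(n):
            if safe(rows, r):
                result = place(rows + [r])
                if result:
                    return result
        return None  # every row conflicts: backtrack

    return place([])

solution = solve_queens()
print(solution)  # one row index per column, e.g. [0, 4, 7, 5, 2, 6, 1, 3]
```

Abandoning a partial placement as soon as one square is attacked is exactly the pruning that makes backtracking far cheaper than testing all 8^8 complete placements.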
Problem Reduction: AND-OR Graphs and the AO* Algorithm
This document summarizes AND-OR graphs and the AO* algorithm. It defines AND-OR graphs as being useful for representing problems that can be solved by decomposing them into smaller subproblems. It then outlines the basic AND-OR graph algorithm involving initializing a graph, expanding nodes, and computing f' values for successor nodes. Finally, it describes the key aspects of the AO* algorithm, which uses AND-OR graphs to represent problems that can be divided into parts that can be combined. The AO* algorithm involves initializing a graph with the initial node, expanding nodes, generating successors, computing h' values, and propagating decision information back through the graph.
This document discusses state space search in artificial intelligence. It defines state space search as consisting of a tree of symbolic states generated by iteratively applying operations to represent a problem. It discusses the initial, intermediate, and final states in state space search. As an example, it describes the water jug problem and its state representation. It also outlines different types of state space search algorithms, including heuristic search algorithms like A* and uninformed search algorithms like breadth-first search.
Hill Climbing Algorithm in Artificial Intelligence
The hill climbing algorithm is a local search technique used to find the optimal solution to a problem. It works by starting with an initial solution and iteratively moving to a neighboring solution that has improved value until no better solutions can be found. Simple hill climbing only considers one neighbor at a time, while steepest ascent examines all neighbors and chooses the one closest to the optimal solution. The algorithm can get stuck at local optima rather than finding the global optimum. Techniques like simulated annealing incorporate randomness to help escape local optima.
Syntax-Directed Translation into Three-Address Code
The document discusses syntax-directed translation of code into three-address code. It defines semantic rules for generating three-address code for expressions, boolean expressions, and control flow statements. Temporary variables are generated for subexpressions and intermediate values. The semantic rules specify generating three-address code statements using temporary variables. Backpatching is also discussed as a technique to replace symbolic names in goto statements with actual addresses after code generation.
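The temporary-per-subexpression scheme described above can be sketched with a tiny generator. The class, its methods, and the example expression are illustrative assumptions, not the notation of any particular compiler text:

```python
# Illustrative sketch: emitting three-address code for `a = b * c + b * c`,
# generating a fresh temporary for each subexpression.
class TAC:
    def __init__(self):
        self.count = 0
        self.code = []

    def newtemp(self):
        self.count += 1
        return f"t{self.count}"

    def gen(self, op, a1, a2):
        """Emit `t = a1 op a2` into a fresh temporary t and return its name."""
        t = self.newtemp()
        self.code.append(f"{t} = {a1} {op} {a2}")
        return t

tac = TAC()
t1 = tac.gen("*", "b", "c")
t2 = tac.gen("*", "b", "c")
t3 = tac.gen("+", t1, t2)
tac.code.append(f"a = {t3}")
print("\n".join(tac.code))
# t1 = b * c
# t2 = b * c
# t3 = t1 + t2
# a = t3
```

In a real syntax-directed scheme the `gen` calls would be attached as semantic actions to the grammar productions rather than written out by hand.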
I. Hill Climbing Algorithm; II. Steepest Hill Climbing Algorithm
This document summarizes graph coloring using backtracking. It defines graph coloring as minimizing the number of colors used to color a graph. The chromatic number is the fewest colors needed. Graph coloring is NP-complete. The document outlines a backtracking algorithm that tries assigning colors to vertices, checks if the assignment is valid (no adjacent vertices have the same color), and backtracks if not. It provides pseudocode for the algorithm and lists applications like scheduling, Sudoku, and map coloring.
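The try-assign-check-backtrack algorithm outlined above can be sketched as follows; the adjacency-list format and example graphs are illustrative assumptions:

```python
# Illustrative sketch: k-coloring a graph by backtracking.
def color_graph(adj, k):
    """Return a vertex -> color map using at most k colors, or None if impossible."""
    vertices = list(adj)
    colors = {}

    def assign(i):
        if i == len(vertices):
            return True
        v = vertices[i]
        for c in range(k):
            # valid only if no already-colored neighbor uses color c
            if all(colors.get(u) != c for u in adj[v]):
                colors[v] = c
                if assign(i + 1):
                    return True
                del colors[v]  # undo and backtrack
        return False

    return colors if assign(0) else None

# A 4-cycle is 2-colorable; a triangle is not.
square = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
print(color_graph(square, 2))  # {'a': 0, 'b': 1, 'c': 0, 'd': 1}
```

Finding the chromatic number then amounts to running this check for k = 1, 2, 3, ... until it first succeeds, which is exponential in the worst case since the problem is NP-complete.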
The inference engine applies logical rules to facts in the knowledge base to infer new information. It uses two approaches:
- Forward chaining starts with known facts and fires rules until reaching the goal, applying rules in a bottom-up manner.
- Backward chaining starts with the goal and works backwards through rules to find supporting facts, taking a top-down approach.
Both are illustrated using examples of determining an animal's color. Forward chaining applies rules to known facts about an animal to conclude its color, while backward chaining starts with the color goal and applies rules in reverse to find facts proving the goal.
In which we see how an agent can find a sequence of actions that achieves its goals, when no single action will do.
The method of solving a problem through AI involves defining the search space, deciding on the start and goal states, and then finding a path from the start state to the goal state through the search space.
State space search is a process used in computer science, including artificial intelligence (AI), in which successive configurations or states of a problem instance are considered, with the goal of finding a goal state with a desired property.
What is artificial intelligence, Hill Climbing Procedure, State Space Representation and Search, classifying problems in AI, AO* algorithm
The document provides an overview of problem spaces and problem solving through searching techniques used in artificial intelligence. It defines a problem space as a set of states and connections between states to represent a problem. Search strategies for finding solutions include breadth-first search, depth-first search, and heuristic search. Real-world problems discussed that can be solved through searching include route finding, layout problems, task scheduling, and the water jug problem is presented as a toy problem example.
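The water jug toy problem mentioned above is a compact example of state space formulation: a state is the pair (big, small) of water amounts, the operators are fill, empty, and pour, and breadth-first search finds a shortest operator sequence. The 4/3-litre capacities and target of 2 litres follow the classic version; the function and variable names are illustrative:

```python
from collections import deque

# Illustrative sketch: the water jug problem as breadth-first state space search.
def water_jug(goal=2, cap=(4, 3)):
    """BFS over (big, small) states; returns a shortest path of states."""
    start = (0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        big, small = state = queue.popleft()
        if big == goal:  # goal test: target amount in the big jug
            path = []
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        pour_bs = min(big, cap[1] - small)  # amount movable big -> small
        pour_sb = min(small, cap[0] - big)  # amount movable small -> big
        for nxt in [(cap[0], small), (big, cap[1]),       # fill either jug
                    (0, small), (big, 0),                 # empty either jug
                    (big - pour_bs, small + pour_bs),     # pour big -> small
                    (big + pour_sb, small - pour_sb)]:    # pour small -> big
            if nxt not in parents:
                parents[nxt] = state
                queue.append(nxt)
    return None

print(water_jug())  # shortest solution: 6 moves, 7 states
```

Because BFS expands states in order of depth, the first path it returns is guaranteed to use the fewest moves.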
The document discusses various concepts related to state-space search problems and algorithms. It begins by introducing state-space representation and search trees, then describes concepts like search paths, costs, and strategies. It contrasts uninformed searches like breadth-first search which expand nodes by depth, with informed searches like A* that use heuristics. Breadth-first search is discussed in more detail, including that it expands the shallowest nodes first and adds generated states to the back of the queue.
This document summarizes key concepts from a university lecture on artificial intelligence representation and search. It introduces representation as capturing problem features to enable problem solving. Search strategies like depth-first and breadth-first are then discussed, along with their properties and tradeoffs. State space graphs are used to represent problems, with nodes as states and edges as moves. Different problems like tic-tac-toe, the 8-puzzle, and traveling salesperson are provided as examples.
This document discusses search algorithms in artificial intelligence. It describes how search is used to find solutions in many AI problems by exploring possible states. The key aspects covered are:
- Search involves exploring a space of possible states through actions or operators to find a goal state.
- Problems are formulated as a search problem defined by the initial state, operators/actions, goal test, and cost function.
- Common uninformed search algorithms like breadth-first search (BFS) and depth-first search (DFS) are described and their properties analyzed.
- BFS is complete but can require exponential time and space, while DFS is not complete but uses less space in most cases.
- Uniform
problem solve and resolving in ai domain , problomsSlimAmiri
The document describes search algorithms for problem solving. It defines key concepts like state, node, problem representation, and search strategies. It then explains blind search algorithms like breadth-first search, depth-first search, depth-limited search, and iterative deepening search. Finally, it covers heuristic search algorithms like greedy search and A* search, which use an evaluation function and heuristic to guide the search towards more promising solutions.
The document discusses search techniques in artificial intelligence. It defines search as finding a sequence of actions to achieve a goal state. Common problems that use search include problem solving, natural language processing, computer vision, and machine learning. Search involves defining a search space with states, operators to transition between states, an initial state, and a goal test. Popular uninformed search techniques like breadth-first search and depth-first search are explained. The document also introduces informed search techniques like uniform cost search that use cost information to guide the search towards optimal solutions.
This document discusses search techniques used in artificial intelligence problems. It defines key concepts related to search spaces such as states, actions, goals, and costs. It provides examples of search problems in domains like the 8-puzzle, robot assembly, and missionaries and cannibals. It analyzes search algorithms like breadth-first search, depth-first search, uniform cost search, iterative deepening search, and informed searches using heuristics. The document compares the properties of different search strategies.
This document provides a summary of Lecture 3 on problem-solving by searching. It describes how problem-solving agents can formulate goals and problems, represent the problem as a state space, and find solutions using search algorithms like breadth-first search, uniform-cost search, depth-first search, and iterative deepening search. Examples of search problems discussed include the Romania pathfinding problem, vacuum world, and the 8-puzzle.
This document provides problem formulations for four planning problems and answers additional questions about search algorithms. It formulates planning problems for coloring a map, helping a monkey get bananas, finding an illegal input record, and measuring a gallon of water. It then discusses properties of breadth-first, depth-limited, and iterative deepening search on a binary tree state space. Finally, it proves statements about search algorithms and identifies special cases of algorithms.
This document discusses search algorithms and problem solving through searching. It begins by defining search problems and representing them using graphs with states as nodes and actions as edges. It then covers uninformed search strategies like breadth-first and depth-first search. Informed search strategies use heuristics to guide the search toward more promising areas of the problem space. Examples of single agent pathfinding problems are given like the traveling salesman problem and Rubik's cube.
Searching is a technique used in AI to solve problems by exploring possible states or solutions. The document discusses various search algorithms used in single-agent pathfinding problems like sliding tile puzzles. It describes brute force search strategies like breadth-first search and depth-first search, and informed search strategies like A* search, greedy best-first search, hill-climbing search and simulated annealing that use heuristic functions. Local search algorithms are also summarized.
1. The document discusses various AI techniques and problems. It defines AI technique as a method that exploits knowledge represented to capture generalizations, be understood by people, be easily modified, and be used in many situations.
2. It provides examples of common AI problems like tic-tac-toe, the water jug problem, various puzzles, and language understanding.
3. It then discusses problem solving and representation, defining key concepts like states, state space, operators, initial and goal states. It outlines general problem solving steps and state space representation.
The document discusses various aspects of problem solving and production systems including:
- Problem characteristics like decomposability and recoverability impact the appropriate problem solving approach.
- Production systems consist of rules, databases, and a control strategy to apply rules.
- Well-designed heuristics can efficiently guide search toward solutions without exploring all possibilities.
- Different problem types like classification and design are suited to different control strategies like proposing and refining solutions.
The document discusses problem formulation for solving problems using search algorithms. It provides examples of formulating problems like route finding between cities and solving the 8-puzzle as state space problems. Key components of problem formulation are defined as the initial state, successor function, goal test, and path cost. Real-life applications that can be formulated as search problems are also presented, such as robot navigation, vehicle routing, and assembly sequencing.
The document discusses uninformed search techniques. It provides examples of representing problems as states and operators that transform states. This includes problems like the water jug problem, 8-puzzle, and 8-queens. It then describes common uninformed search algorithms like breadth-first search, depth-first search, iterative deepening, and uniform cost search. It analyzes the properties of these algorithms like completeness, time complexity, space complexity, and optimality.
The document discusses various concepts related to defining and solving problems using state space representation and search algorithms. It defines a problem as any task or goal that can be represented precisely using an initial state, goal state, and applicable rules. Search algorithms like breadth-first search and depth-first search are described for systematically exploring the problem space. Heuristic search methods are discussed which use domain knowledge to guide the search.
The document discusses problem solving techniques in artificial intelligence. It covers defining problems in a state space, using production systems to represent search spaces, and different search algorithms like breadth-first, depth-first, and heuristic search. It also discusses characteristics of problems that influence which techniques are best suited, such as decomposability, predictability, and interaction requirements.
The document discusses various concepts related to defining and solving problems using state space and search algorithms. It defines a problem as any task or goal that can be represented precisely using a state space with initial and goal states. Production systems use rules to search this space and find solutions. Common search algorithms like breadth-first, depth-first, and heuristic search are described. Characteristics of problems like decomposability, predictability, and interaction help determine the best approach.
Similar to State Space Representation and Search (20)
Virtualization: A Key to Efficient Cloud ComputingHitesh Mohapatra
The document discusses various types of virtualization used in cloud computing. It describes virtualization as a technique that allows sharing of physical resources among multiple customers. There are two main types of hypervisors - Type 1 hypervisors run directly on hardware while Type 2 hypervisors run on a host operating system. The document also summarizes different types of virtualization including hardware, software, memory, storage, network, and desktop virtualization. Benefits of virtualization include improved efficiency, outsourcing of hardware costs, testing software in isolated environments, and emulating machines beyond physical availability.
Automating the Cloud: A Deep Dive into Virtual Machine ProvisioningHitesh Mohapatra
Virtual machine provisioning allows users to quickly provision new virtual machines through a self-service interface in minutes, rather than the days it previously took to provision physical servers. Virtual machine migration also allows live migration of virtual machines between physical hosts in milliseconds for maintenance or upgrades. Standards like OVF and OCCI help ensure interoperability and portability of virtual machines across platforms. The virtual machine lifecycle includes provisioning, serving requests, and deprovisioning resources when the service is ended.
Harnessing the Power of Google Cloud Platform: Strategies and ApplicationsHitesh Mohapatra
The document discusses Google Cloud Platform (GCP), a suite of cloud computing services provided by Google. It provides infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). GCP allows users to access computing power, storage, databases, and other applications through remote servers on the internet. It offers advantages like scalability, security, redundancy, and cost-effectiveness compared to traditional data centers. Example applications of GCP include enabling collaborative document editing in real-time.
Scheduling refers to allocating computing resources like processor time and memory to processes. In cloud computing, scheduling maps jobs to virtual machines. There are two levels of scheduling - at the host level to distribute VMs, and at the VM level to distribute tasks. Common scheduling algorithms include first-come first-served (FCFS), shortest job first (SJF), round robin, and max-min. FCFS prioritizes older jobs but has high wait times. SJF prioritizes shorter jobs but can starve longer ones. Max-min prioritizes longer jobs to optimize resource use. The choice depends on goals like throughput, latency, and fairness.
This document provides a template for submitting case studies to a case study compendium on cloud computing solutions. The template requests information on the customer organization, industry, location, the cloud solution provider, area of application of the cloud solution, challenges addressed, objectives, timeline of implementation, solution approach, challenges during implementation, benefits to the customer, innovation enabled, partnerships involved, and a customer testimonial. It requests details on the cloud solution type (IaaS, PaaS, or SaaS), quantitative and qualitative benefits realized by the customer, and how the solution helped boost innovation. Contact details of the submitter are also requested. The focus is on how cloud platforms and solutions enabled customer enterprises to innovate and
RAID (Redundant Array of Independent Disks) uses multiple hard disks or solid-state drives to protect data by storing it across the drives in a way that if one drive fails, the data can still be accessed from the other drives. There are different RAID levels that provide varying levels of data protection and performance. A RAID controller manages the drives in an array, presenting them as a single logical drive and improving performance and reliability. Common RAID levels include RAID 0 for performance without redundancy, RAID 1 for disk mirroring, and RAID 5 for striping with parity data distributed across drives. [/SUMMARY]
Cloud load balancing distributes workloads and network traffic across computing resources in a cloud environment to improve performance and availability. It routes incoming traffic to multiple servers or other resources while balancing the load. Load balancing in the cloud is typically software-based and offers benefits like scalability, reliability, reduced costs, and flexibility compared to traditional hardware-based load balancing. Common cloud providers like AWS, Google Cloud, and Microsoft Azure offer multiple load balancing options that vary based on needs and network layers.
ITU-T requirement for cloud and cloud deployment modelHitesh Mohapatra
List and explain the functional requirements for networking as per the ITU-T technical report. List and explain cloud deployment models and list relative strengths and weaknesses of the deployment models with neat diagram.
The document contains descriptions of several LeetCode problems ranging from Medium to Hard difficulty. It provides details about the Maximum Level Sum of a Binary Tree, Jump Game III, Minesweeper, Binary Tree Level Order Traversal, Number of Operations to Make Network Connected, Open the Lock, Sliding Puzzle, and Trapping Rain Water II problems. It also includes pseudocode and explanations for solving the Number of Operations to Make Network Connected and Open the Lock problems.
The document discusses three problems: (1) finding the cheapest flight route between two cities with at most k stops using DFS and pruning; (2) merging k sorted linked lists into one sorted list using a priority queue; (3) using a sequence of acceleration (A) and reversing (R) instructions to reach a target position in the shortest number of steps for a car that can move to negative positions.
Trie Data Structure
LINK: https://leetcode.com/tag/trie/
Easy:
1. Longest Word in Dictionary
Medium:
1. Count Substrings That Differ by One Character
2. Replace Words
3. Top K Frequent Words
4. Maximum XOR of Two Numbers in an Array
5. Map Sum Pairs
Hard:
1. Concatenated Words
2. Word Search II
The document discusses the basics of relational databases. It defines what a database is, the advantages it provides over file-based data storage, and some disadvantages. It also covers relational database concepts like tables, records, fields, keys, and normalization. The document explains how to design a relational database by determining the purpose and entities, modeling relationships with E-R diagrams, and following steps to normalize the data.
The document discusses measures of query cost in database management systems. It explains that query cost can be measured by factors like the number of disk accesses, size of the table, and time taken by the CPU. It further breaks down disk access time into components like seek time, rotational latency, and sequential vs. random I/O. The document then provides an example formula to calculate estimated query cost based on these components.
This document discusses how wireless sensor networks (WSNs) can be used in smart city applications. It first defines WSNs as self-configured, infrastructure-less networks that use sensors to monitor conditions like temperature, sound, and pollution. It then discusses how WSNs can influence lifestyle by enabling applications in areas like healthcare, transportation, the environment and more. Finally, it discusses how WSNs are a primary strength for smart cities by allowing remote and cost-effective monitoring of infrastructure and resources across applications like smart water, smart grid, and smart transportation.
The document provides an overview and syllabus for a course on fundamentals of data structures. It covers topics such as linear and non-linear data structures including arrays, stacks, queues, linked lists, trees and graphs. It describes various data types in C like integers, floating-point numbers, characters and enumerated types. It also discusses operations on different data structures and analyzing algorithm complexity.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
State Space Representation and Search
1. Introduction
In this section we examine the concept of a state space and the different searches
that can be used to explore the search space in order to find a solution. Before an
AI problem can be solved it must be represented as a state space. The state space
is then searched to find a solution to the problem. A state space essentially consists
of a set of nodes representing each state of the problem, arcs between nodes
representing the legal moves from one state to another, an initial state and a goal
state. Each state space takes the form of a tree or a graph.
Factors that determine which search algorithm or technique will be used include the
type of the problem and the how the problem can be represented. Search
techniques that will be examined in the course include:
• Depth First Search
• Depth First Search with Iterative Deepening
• Breadth First Search
• Best First Search
• Hill Climbing
• Branch and Bound Techniques
• A* Algorithm
2. Classic AI Problems
Three classic AI problems that will be referred to in this section are the Traveling Salesman problem, the Towers of Hanoi problem and the 8-puzzle.
Traveling Salesman
A salesman has a list of cities, each of which he must visit exactly once. There are
direct roads between each pair of cities on the list. Find the route that the salesman
should follow for the shortest trip that both starts and finishes at any one of the cities.
Towers of Hanoi
In a monastery in deepest Tibet there are three crystal columns and 64 golden
rings. The rings are of different sizes and rest on the columns. At the beginning of
time all the rings rested on the leftmost column, and since then the monks have
toiled ceaselessly, moving the rings one by one between the columns. In moving the
rings, a larger ring must not be placed on a smaller ring. Furthermore, only one ring
at a time can be moved from one column to another. A simplified version of this
problem, which we will consider, involves only 2 or 3 rings instead of 64.
8-Puzzle
1 2 3
8 4
7 6 5
The 8-Puzzle involves moving the tiles on the board above into a particular
configuration. The black square on the board represents a space. The player can
move a tile into the space, freeing that position for another tile to be moved into and
so on.
For example, given the initial state above we may want the tiles to be moved so that
the following goal state may be attained.
1 8 3
2 6 4
7 5
3. Problem Representation
A number of factors need to be taken into consideration when developing a state
space representation. Factors that must be addressed are:
• What is the goal to be achieved?
• What are the legal moves or actions?
• What knowledge needs to be represented in the state description?
• Type of problem - There are basically three types of problems. Some
problems only need a representation, e.g. crossword puzzles. Other
problems require a yes or no response indicating whether a solution can be
found or not. Finally, the last type of problem requires a solution path as an
output, e.g. mathematical theorems, Towers of Hanoi. In these cases we
know the goal state and we need to know how to attain that state.
• Best solution vs. good enough solution - For some problems a good enough
solution is sufficient, e.g. theorem proving, eight squares. However, other
problems require a best or optimal solution, e.g. the traveling salesman
problem.
Figure 3.1: Towers of Hanoi state space representation
Towers of Hanoi
A possible state space representation of the Towers of Hanoi problem using a graph
is indicated in Figure 3.1.
The legal moves in this state space involve moving one ring from one pole to
another, moving one ring at a time, and ensuring that a larger ring is not placed on a
smaller ring.
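These move rules can be captured in a short Python sketch. The encoding is a hypothetical one, not from the original text: a state maps each ring (numbered 1 for the smallest) to the column it sits on, and `legal_moves` is an illustrative helper name.

```python
def legal_moves(state, n=3):
    """Return (ring, to_column) moves that obey the rules above."""
    moves = []
    for col in range(3):
        rings = [r for r in range(1, n + 1) if state[r] == col]
        if not rings:
            continue
        top = min(rings)                  # only the smallest ring on a column can move
        for to in range(3):
            if to == col:
                continue
            dest = [r for r in range(1, n + 1) if state[r] == to]
            if not dest or top < min(dest):   # never place a larger ring on a smaller one
                moves.append((top, to))
    return moves

# All three rings start on the leftmost column (column 0).
start = {1: 0, 2: 0, 3: 0}
print(legal_moves(start))   # only the smallest ring can move, to column 1 or 2
```

Generating moves like this, rather than storing the whole graph of Figure 3.1, is exactly the run-time generation of the state space discussed below.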
Figure 3.2: Eight-Puzzle Problem state space representation
8-Puzzle
Although a player moves the tiles around the board to change the configuration of
tiles, we will define the legal moves in terms of moving the space. The space can
be moved up, down, left and right.
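As an illustration, here is a minimal Python sketch of this successor function. The tuple encoding and the name `successors` are assumptions for the example: the board is read row by row, and 0 stands for the space.

```python
def successors(state):
    """Return (move, new_state) pairs from sliding the space up/down/left/right."""
    result = []
    i = state.index(0)                # position of the space
    row, col = divmod(i, 3)
    for name, dr, dc in (("up", -1, 0), ("down", 1, 0),
                         ("left", 0, -1), ("right", 0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            j = 3 * r + c
            board = list(state)
            board[i], board[j] = board[j], board[i]   # slide the tile into the space
            result.append((name, tuple(board)))
    return result

# The initial board from Section 2, read row by row; 0 is the space.
start = (1, 2, 3, 8, 0, 4, 7, 6, 5)
print(successors(start))
```

Note that a central space yields four moves while a corner space yields only two, which is why the branching factor depends on where the space is.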
Summary:
• A state space is a set of descriptions or states.
• Each search problem consists of:
  - One or more initial states.
  - A set of legal actions. Actions are represented by operators or moves
    applied to each state. For example, the operators in a state space
    representation of the 8-puzzle problem are left, right, up and down.
  - One or more goal states.
• The number of operators is problem dependent and specific to a particular
  state space representation. The more operators, the larger the branching
  factor of the state space. Thus, the number of operators should be kept to a
  minimum, e.g. in the 8-puzzle the operators are defined in terms of moving
  the space instead of the tiles.
Figure 4.1: Graph vs. Tree
• Why generate the state space at run-time, and not just have it built in
  advance? Because the full state space is usually far too large to generate
  and store in advance; instead, parts of it are generated as the search
  proceeds.
• A search algorithm is applied to a state space representation to find a solution
  path. Each search algorithm applies a particular search strategy.
4. Graphs versus Trees
If states in the solution space can be revisited more than once, a directed graph is
used to represent the solution space. In a graph more than one move sequence can
be used to get from one state to another, and moves can be undone. In a graph
there is more than one path to a goal, whereas in a tree a path to a goal is more
clearly distinguishable; a goal state may, however, need to appear more than once in
a tree. Search algorithms for graphs have to cater for possible loops and cycles in
the graph. Trees may be more "efficient" for representing such problems, as loops
and cycles do not have to be catered for. The entire tree or graph will not be
generated. Figure 4.1 illustrates both a graph and a tree representation of the same
states. Notice that there is more than one path connecting two particular nodes in
the graph, whereas this is not so in the tree.
Example 1:
Prove x + (y + z) = y + (z + x), given
L + (M + N) = (L + M) + N ........ (A)
M + N = N + M .................... (C)
Figure 4.2 represents the corresponding state space as a graph. The same state
space is represented as a tree in Figure 4.3. Notice that the goal state is
represented more than once in the tree.
Figure 4.2: Graph state space representation
Figure 4.3: Tree state space representation
5. Depth First Search
One of the searches commonly used to explore a state space is the depth first
search. The depth first search searches to the lowest depth or ply of the tree.
Consider the tree in Figure 5.1. In this case the tree has been generated prior to the
search.
Figure 5.1
Each node is not visited more than once. A depth first search of this tree
produces: A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, J, R.
In this example the tree was generated first and then searched. Often, however,
there is not enough space to generate the entire tree representing the state space
before searching it. The algorithm presented below conducts a depth first search
and generates parts of the tree as it goes along.
Depth First Search Algorithm
def dfs (in Start, out State)
   open = [Start];
   closed = [];
   State = failure;
   while (open <> []) AND (State <> success)
   begin
      remove the leftmost state from open, call it X;
      if X is the goal then
         State = success
      else begin
         generate children of X;
         put X on closed;
         eliminate the children of X already on open or closed;
         put remaining children on left end of open
      endif
   endwhile
   return State;
enddef
Example 1: Suppose that the letters A, B, etc. represent states in a problem. The
following moves are legal:
A to B and C
B to D and E
C to F and G
D to I and J
I to K and L
Start state: A Goal state: E , J
Let us now conduct a depth first search of the space. Remember that the tree will not be
generated beforehand but is generated as part of the search. The search, and hence
generation of the tree, will stop as soon as success is encountered or the open list is empty.
     OPEN             | CLOSED           | X | X's Children | State
 1.  A                | -                | - | -            | failure
 2.  -                | -                | A | B, C         | failure
 3.  B, C             | A                | - | -            | failure
 4.  C                | A                | B | D, E         | failure
 5.  D, E, C          | A, B             | - | -            | failure
 6.  E, C             | A, B             | D | I, J         | failure
 7.  I, J, E, C       | A, B, D          | - | -            | failure
 8.  J, E, C          | A, B, D          | I | K, L         | failure
 9.  K, L, J, E, C    | A, B, D, I       | - | -            | failure
10.  L, J, E, C       | A, B, D, I       | K | none         | failure
11.  J, E, C          | A, B, D, I, K    | L | none         | failure
12.  E, C             | A, B, D, I, K, L | J | -            | success
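The algorithm and trace above can be cross-checked with a short Python sketch; the list-based encoding of the moves and the function name are illustrative, not from the original text.

```python
def dfs(start, goals, children):
    """Depth first search; children already on open or closed are eliminated."""
    open_list = [start]
    closed = []
    while open_list:
        x = open_list.pop(0)              # remove the leftmost state from open
        if x in goals:
            return "success", closed + [x]
        closed.append(x)
        kids = [c for c in children.get(x, [])
                if c not in open_list and c not in closed]
        open_list = kids + open_list      # put children on the left end of open
    return "failure", closed

# The legal moves of Example 1.
moves = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["I", "J"], "I": ["K", "L"]}
state, visited = dfs("A", {"E", "J"}, moves)
print(state, visited)   # "success" and the states in the order they were removed from open
```

The states removed from open (A, B, D, I, K, L, J) match the X column of the trace, ending when goal J is reached.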
6. Depth First Search with Iterative Deepening
Iterative deepening involves performing the depth first search over a number of
iterations with different depth bounds or cutoffs. On each iteration the cutoff is
increased by one. The iterative process terminates when a solution path is found.
Iterative deepening is more advantageous in cases where the state space
representation has an infinite depth rather than a constant depth. The state space is
searched to a deeper level on each iteration.
DFID is guaranteed to find the shortest path. The algorithm does not retain
information between iterations, and thus the work of prior iterations is "wasted".
Depth First Iterative Deepening Algorithm
procedure DFID (initial_state, goal_states)
begin
   search_depth = 1
   while (solution path is not found)
   begin
      dfs(initial_state, goal_states) with a depth bound of search_depth
      increment search_depth by 1
   endwhile
end
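A minimal Python sketch of the procedure, with a recursive depth-limited search standing in for the bounded dfs call; the helper names `depth_limited` and `dfid` are illustrative.

```python
def depth_limited(state, goals, children, bound):
    """Depth first search that does not expand nodes below the depth bound."""
    if state in goals:
        return [state]
    if bound == 0:
        return None
    for c in children.get(state, []):
        path = depth_limited(c, goals, children, bound - 1)
        if path is not None:
            return [state] + path
    return None

def dfid(start, goals, children, max_depth=20):
    """Repeat the bounded search, increasing the cutoff by one each iteration."""
    for bound in range(1, max_depth + 1):
        path = depth_limited(start, goals, children, bound)
        if path is not None:
            return bound, path
    return None

# The legal moves of Example 1.
moves = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["I", "J"], "I": ["K", "L"]}
print(dfid("A", {"E", "J"}, moves))   # goal E is found on the second iteration
```

As in the trace below, the first iteration (bound 1) fails and the second iteration finds goal E at depth 2, which is also the shortest solution path.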
Example 1: Suppose that the letters A, B, etc. represent states in a problem. The
following moves are legal:
A to B and C
B to D and E
C to F and G
D to I and J
I to K and L
Start state: A Goal state: E , J
Iteration 1: search_depth = 1

     OPEN    | CLOSED  | X | X's Children | State
 1.  A       | -       | - | -            | failure
 2.  -       | -       | A | B, C         | failure
 3.  B, C    | A       | - | -            | failure
 4.  C       | A       | B | -            | failure
 5.  C       | A, B    | - | -            | failure
 6.  -       | A, B    | C | -            | failure
 7.  -       | A, B, C | - | -            | failure
Iteration 2: search_depth = 2
OPEN CLOSED X X’s Children State
1. A - failure
2. A B, C failure
3. B, C A failure
4. C A B D, E failure
5. D, E, C A, B failure
6. E, C A, B D none failure
7. E, C A, B, D failure
8. C A, B, D E success
7. Breadth First Search
The breadth first search visits nodes in a left to right manner at each level of the tree.
Each node is not visited more than once. A breadth first search of the tree in
Figure 5.1 produces A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U.
In this example the tree was generated first and then searched. Often, however,
there is not enough space to generate the entire tree representing the state space
and then search it. The algorithm presented below conducts a breadth first search
and generates parts of the tree as it goes along.
Breadth First Search Algorithm
def bfs (in Start, out State)
    open = [Start];
    closed = [];
    State = failure;
    while (open <> []) AND (State <> success)
    begin
        remove the leftmost state from open, call it X;
        if X is the goal, then
            State = success
        else begin
            generate children of X;
            put X on closed;
            discard any children of X already on open or closed;
            put remaining children on right end of open
        end
    endwhile
    return State;
enddef
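The pseudocode above can be sketched in Python; the only change from the depth-first version is that children join the right end of OPEN (a queue), for which `collections.deque` is the idiomatic choice:

```python
from collections import deque

def bfs(start, goals, moves):
    """Breadth-first search: children are appended to the RIGHT of OPEN."""
    open_list = deque([start])
    closed = []
    while open_list:
        x = open_list.popleft()               # leftmost state
        if x in goals:
            return "success", closed
        closed.append(x)
        for c in moves.get(x, []):
            if c not in open_list and c not in closed:
                open_list.append(c)           # breadth first: right end of OPEN
    return "failure", closed

# Legal moves from Example 1 of the previous section
moves = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["I", "J"], "I": ["K", "L"]}
```

`bfs("A", {"E", "J"}, moves)` succeeds at E after expanding A, B, C and D, as in the table below.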
Let us now apply the breadth first search to the problem in Example 1 in this section.
OPEN CLOSED X X’s Children State
1. A - failure
2. A B, C failure
3. B, C A failure
4. C A B D, E failure
5. C, D, E A, B failure
6. D, E A, B C F, G failure
7. D, E, F, G A, B, C failure
8. E, F, G A, B, C D I, J failure
9. E, F, G, I, J A, B, C, D failure
10. F, G, I, J A, B, C, D E success
8. Differences Between Depth First and Breadth First
Breadth first search is guaranteed to find the shortest path from the start to the goal.
DFS may get "lost" in the search and hence a depth bound may have to be imposed on a
depth first search. BFS suffers from combinatorial explosion when the branching
factor is high. The solution path found by DFS may be long and not optimal, but DFS
is more efficient in memory for search spaces with many branches, i.e. where the
branching factor is high.
The following factors need to be taken into consideration when deciding which
search to use:
• The importance of finding the shortest path to the goal.
• The branching of the state space.
• The available time and resources.
• The average length of paths to a goal node.
• Whether all solutions or only the first solution is required.
9. Heuristic Search
DFS and BFS may require too much memory to generate an entire state space - in
these cases heuristic search is used. Heuristics help us to reduce the size of the
search space. An evaluation function is applied to each node to assess how
promising it is in leading to the goal. Examples of heuristic searches:
• Best first search
• A* algorithm
• Hill climbing
Heuristic searches incorporate the use of domain-specific knowledge in the process
of choosing which node to visit next in the search process. Search methods that
include the use of domain knowledge in the form of heuristics are described as
“weak” search methods. The knowledge used is “weak” in that it usually helps but
does not always help to find a solution.
9.1 Calculating heuristics
Heuristics are rules of thumb that may find a solution but are not guaranteed to.
Heuristic functions have also been defined as evaluation functions that estimate the
cost from a node to the goal node. The incorporation of domain knowledge into the
search process by means of heuristics is meant to speed up the search process.
Heuristic functions are not guaranteed to be completely accurate - if they were, there
would be no need for the search process. Heuristic values are greater than or
equal to zero for all nodes. A heuristic value is seen as an approximate cost of
finding a solution. A heuristic value of zero indicates that the state is a goal state.
Figure 8.2: Heuristic functions for the 8-puzzle
problem
A heuristic that never overestimates the cost to the goal is referred to as an
admissible heuristic. Not all heuristics are necessarily admissible. A heuristic value
of infinity indicates that the state is a “deadend” and is not going to lead anywhere.
A good heuristic must not take long to compute. Heuristics are often defined on a
simplified or relaxed version of the problem, e.g. the number of tiles that are out of
place. A heuristic function h1 is better than some heuristic function h2 if fewer nodes
are expanded during the search when h1 is used than when h2 is used.
Example 1: The 8-puzzle problem
Each of the following could serve as a heuristic function for the 8-Puzzle problem:
• Number of tiles out of place - count the number of tiles out of place in each
state compared to the goal.
• Sum of distances - sum the distances by which the tiles are out of place.
• Tile reversals - multiply the number of tile reversals by 2.
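The first two heuristics can be sketched in a few lines of Python. The goal layout below is an assumption for illustration (substitute the goal configuration used in the figures); boards are 3x3 tuples with 0 marking the blank:

```python
# Assumed goal layout; 0 is the blank
GOAL = ((1, 2, 3),
        (8, 0, 4),
        (7, 6, 5))

def tiles_out_of_place(state, goal=GOAL):
    """First heuristic: count tiles (not the blank) that are misplaced."""
    return sum(1 for i in range(3) for j in range(3)
               if state[i][j] != 0 and state[i][j] != goal[i][j])

def manhattan(state, goal=GOAL):
    """Second heuristic: sum of each tile's distance from its goal square."""
    where = {goal[i][j]: (i, j) for i in range(3) for j in range(3)}
    total = 0
    for i in range(3):
        for j in range(3):
            t = state[i][j]
            if t != 0:
                gi, gj = where[t]
                total += abs(i - gi) + abs(j - gj)
    return total
```

Both functions return 0 exactly when the state is the goal state, in line with the remark above that a heuristic value of zero indicates a goal state.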
Exercise: Derive a heuristic function for the following problem.
Experience has shown that it is difficult to devise heuristic functions. Furthermore,
heuristics are fallible and by no means perfect.
Figure 10.1: Best First Search
10. Best First Search
Figure 10.1 illustrates the application of the best first search on a tree that had been
generated before the algorithm was applied.
The best-first search is described as a general search where the minimum cost
nodes are expanded first. The best-first search is not guaranteed to find the
shortest solution path. It is more efficient not to revisit nodes (unless you want to
find the shortest path); this can be achieved by restricting the legal moves so as not
to include nodes that have already been visited. The best-first search attempts to
minimize the cost of finding a solution, and is a combination of the depth-first and
breadth-first searches, guided by heuristics. Two lists are maintained in the implementation of
the best-first algorithm: OPEN and CLOSED. OPEN contains those nodes that have
been evaluated by the heuristic function but have not yet been expanded into
successors. CLOSED contains those nodes that have already been visited.
Application of the best first search: Internet spiders.
The algorithm for the best first search is described below. The algorithm generates
the tree/graph representing the state space.
The Best First Search
def bestfs (in Start, out State)
    open = [Start];
    closed = [];
    State = failure;
    while (open <> []) AND (State <> success)
    begin
        remove the leftmost state from open, call it X;
        if X is the goal, then
            State = success
        else begin
            generate children of X;
            for each child of X do
                case
                    the child is not on open or closed:
                    begin
                        assign the child a heuristic value;
                        add the child to open;
                    end;
                    the child is already on open:
                        if the child was reached by a shorter path then
                            give the state on open the shorter path;
                    the child is already on closed:
                        if the child was reached by a shorter path then
                        begin
                            remove the state from closed;
                            add the child to open;
                        end;
                endcase
            endfor
            put X on closed;
            re-order states on open by heuristic merit (best leftmost);
        end
    endwhile
    return State;
enddef
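A simplified Python sketch of the algorithm, keeping OPEN ordered by heuristic merit with a priority queue. The shorter-path bookkeeping in the pseudocode is omitted here, which is safe for the example below because each state is generated only once:

```python
import heapq

def best_first(start, goals, moves, h):
    """Expand the best (smallest heuristic value) state on OPEN first."""
    open_heap = [(h[start], start)]           # OPEN, ordered by heuristic
    closed = []
    while open_heap:
        hx, x = heapq.heappop(open_heap)      # best (leftmost) state
        if x in goals:
            return "success", closed
        closed.append(x)
        for c in moves.get(x, []):
            if c not in closed:
                heapq.heappush(open_heap, (h[c], c))
    return "failure", closed

# Example 1 of this section: letters are states, numbers heuristic values
moves = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["I", "J"], "I": ["K", "L"]}
h = {"A": 5, "B": 4, "C": 4, "D": 6, "E": 5, "F": 4, "G": 5,
     "I": 7, "J": 8, "K": 7, "L": 8}
```

Ties on heuristic value are broken alphabetically by the tuple comparison; the search expands A, B, C and F before succeeding at E.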
We will now apply the best first search to a problem.
Example 1:
Suppose that each letter represents a state in a solution space. The following
moves are legal:
A5 to B4 and C4
B4 to D6 and E5
C4 to F4 and G5
D6 to I7 and J8
I7 to K7 and L8
Start state: A
Goal state: E
The numbers next to the letters represent the heuristic value associated with that
particular state.
OPEN CLOSED X X’s Children State
1 A5 - failure
2 A5 B4, C4 failure
3 B4, C4 A5 failure
4 C4 A5 B4 D6, E5 failure
5 C4, E5, D6 A5, B4 failure
6 E5, D6 A5, B4 C4 F4, G5 failure
7 F4, E5, G5, D6 A5, B4, C4 failure
8 E5, G5, D6 A5, B4, C4 F4 none failure
9 E5, G5, D6 A5, B4, C4, F4 failure
10 G5, D6 A5, B4, C4, F4 E5 success
11. Hill-Climbing
Hill-climbing is similar to the best first search. While the best first search considers
states globally, hill-climbing considers only local states. The application of the hill-
climbing algorithm to a tree that has been generated prior to the search is illustrated
in Figure 11.1.
Figure 11.1
The hill-climbing algorithm is described below. The hill-climbing algorithm generates
a partial tree/graph.
Hill Climbing Algorithm
def hill_climbing (in Start, out State)
    open = [Start];
    closed = [];
    State = failure;
    while (open <> []) AND (State <> success)
    begin
        remove the leftmost state from open, call it X;
        if X is the goal, then
            State = success
        else begin
            generate children of X;
            for each child of X do
                case
                    the child is not on open or closed:
                        assign the child a heuristic value;
                    the child is already on open:
                        if the child was reached by a shorter path then
                            give the state on open the shorter path;
                    the child is already on closed:
                        if the child was reached by a shorter path then
                            remove the state from closed;
                endcase
            endfor
            put X on closed;
            re-order the children states by heuristic merit (best leftmost);
            place the reordered list on the leftmost side of open;
        end
    endwhile
    return State;
enddef
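The key difference from best-first search is visible in a Python sketch: only the newly generated children are sorted (a local decision) and pushed onto the left of OPEN; the rest of the list is never re-ordered:

```python
def hill_climbing(start, goals, moves, h):
    """Like best-first, but only the new children are ordered (locally)
    and placed on the left of OPEN - no global re-sort of the list."""
    open_list = [start]
    closed = []
    while open_list:
        x = open_list.pop(0)
        if x in goals:
            return "success", closed
        closed.append(x)
        children = [c for c in moves.get(x, [])
                    if c not in open_list and c not in closed]
        children.sort(key=lambda c: h[c])     # best sibling leftmost
        open_list = children + open_list
    return "failure", closed

# Example 1 of this section (A carries no heuristic value; it is never compared)
moves = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
         "D": ["H", "I"], "F": ["J"], "G": ["K", "L"]}
h = {"B": 3, "C": 2, "D": 2, "E": 3, "F": 2, "G": 4,
     "H": 1, "I": 99, "J": 99, "K": 99, "L": 3}
```

`hill_climbing("A", {"H", "L"}, moves, h)` succeeds at L after expanding A, C, F, J and G; note how the locally best but globally misleading branch through F and J is followed first.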
Example 1: Suppose that each of the letters represents a state in a state space
representation. Legal moves:
A to B3 and C2
B3 to D2 and E3
C2 to F2 and G4
D2 to H1 and I99
F2 to J99
G4 to K99 and L3
Start state: A
Goal state: H, L
The result of applying the hill-climbing algorithm to this problem is illustrated in the
table below.
OPEN CLOSED X X’s Children State
1 A - failure
2 A B3, C2 failure
3 C2, B3 A failure
4 B3 A C2 F2, G4 failure
5 F2, G4, B3 A, C2 failure
6 G4, B3 A, C2 F2 J99 failure
7 J99, G4, B3 A, C2, F2 failure
8 G4, B3 A, C2, F2 J99 none failure
9 G4, B3 A, C2, F2, J99 failure
10 B3 A, C2, F2, J99 G4 K99, L3 failure
11 L3, K99, B3 A, C2, F2, J99, G4 failure
12 K99, B3 A, C2, F2, J99, G4 L3 success
11.1 Greedy Hill Climbing
This is a form of hill climbing which chooses the node with the least estimated cost
at each state. It does not guarantee an optimal solution path. Greedy hill climbing is
susceptible to the problem of getting stuck at a local optimum.
The Greedy Hill Climbing Algorithm
1) Evaluate the initial state.
2) Select a new operator and apply it to generate a new state.
3) Evaluate the new state.
4) If it is closer to the goal state than the current state, make it the current state.
5) If it is no better than the current state, discard it.
6) If the current state is the goal state, or no new operators are available, quit.
Otherwise repeat steps 2 to 5.
Example 1: Greedy Hill Climbing with Backtracking
Example 2: Greedy Hill Climbing with Backtracking
Example 1: Greedy Hill Climbing without Backtracking
Example 2: Greedy Hill Climbing without Backtracking
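The steps above, without backtracking, can be sketched as follows. The `moves` and `h` tables are a hypothetical toy state space chosen to show both outcomes:

```python
def greedy_hill_climb(start, goals, moves, h):
    """Always move to the best child; stop if no child improves."""
    current = start
    path = [current]
    while current not in goals:
        children = moves.get(current, [])
        if not children:
            return None                        # dead end - no backtracking
        best = min(children, key=lambda c: h[c])
        if h[best] >= h[current]:
            return None                        # stuck at a local optimum
        current = best
        path.append(current)
    return path

# Hypothetical state space where greedy descent reaches the goal G from A
moves = {"A": ["B", "C"], "B": ["D", "E"], "D": ["G"]}
h = {"A": 3, "B": 2, "C": 4, "D": 1, "E": 2, "G": 0}
```

From A the heuristic decreases at every step (3, 2, 1, 0) and the goal is reached; from C there are no operators, so the search quits without a solution, illustrating why backtracking can matter.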
12. The A Algorithm
The A algorithm is essentially the best first search implemented with the following evaluation function:
f(n) = g(n) + h(n)
where g(n) - measures the cost of the path from the start state to state n
h(n) - is the heuristic estimate of the cost from state n to the goal state
The use of the evaluation function f(n) applied to the 8-puzzle problem, for the initial
and goal states that follow, is illustrated in Figure 12.1.
Figure 12.1
The heuristic used for h(n) is the number of tiles out of place. The numbers next to
the letters represent the heuristic value associated with that particular state.
13. The A* Algorithm
Search algorithms that are guaranteed to find the shortest path are called admissible
algorithms. The breadth first search is an example of an admissible algorithm. The
evaluation function we have considered with the best first algorithm is
f(n) = g(n) + h(n),
where
g(n) - is an estimate of the cost of the distance from the start node to some
node n, e.g. the depth at which the state is found in the graph
h(n) - is the heuristic estimate of the distance from n to a goal
An evaluation function used by admissible algorithms is:
f*(n) = g*(n) + h*(n)
where
g*(n) - is the cost of the shortest path from the start node to node n.
h*(n) - is the actual cost of the shortest path from node n to a goal node.
In practice f* is not known and we try to estimate f as closely as possible to f*. g(n)
is a reasonable estimate of g*(n); usually g(n) >= g*(n). It is not usually possible to
compute h*(n). Instead we try to find a heuristic h(n) which is bounded from above
by the actual cost of the shortest path to the goal, i.e. h(n) <= h*(n). The best first
search used with f(n) is called the A algorithm. If h(n) <= h*(n) in the A algorithm
then the A algorithm is called the A* algorithm.
The heuristics that we have developed for the 8-puzzle problem are bounded above
by the number of moves required to reach the goal position. The number of tiles
out of place and the sum of the distances of tiles from their correct positions are
both less than or equal to the number of moves required to reach the goal state.
Thus, the best first search applied to the 8-puzzle using these heuristics is in fact
an A* algorithm.
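An A* sketch over a small weighted graph can make the f(n) = g(n) + h(n) bookkeeping concrete. The edge costs and heuristic values below are hypothetical, with h chosen to be admissible (it never overestimates the remaining cost):

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: expand by f(n) = g(n) + h(n); h must not overestimate."""
    open_heap = [(h[start], 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while open_heap:
        f, g, x, path = heapq.heappop(open_heap)
        if x == goal:
            return path, g
        for child, cost in edges.get(x, []):
            g2 = g + cost
            if g2 < best_g.get(child, float("inf")):   # shorter path found
                best_g[child] = g2
                heapq.heappush(open_heap,
                               (g2 + h[child], g2, child, path + [child]))
    return None, float("inf")

# Hypothetical weighted graph; arc costs on the edges, h on the nodes
edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
         "C": [("D", 1)]}
h = {"A": 4, "B": 3, "C": 1, "D": 0}
```

Because h is admissible here, the first time D is removed from OPEN its path is guaranteed to be the cheapest: A-B-C-D with cost 4, rather than the direct but costlier A-C-D or A-B-D routes.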
Exercise 1: Consider the following state space. G is the goal state. Note that the
values of g(n) are given on the arcs:
(a) Is h(n) admissible?
(b) What is the optimal solution path?
(c) What is the cost of this path?
Note: Remember we cannot see the entire state space when we apply the
search algorithm.
Figure 14.1
Exercise 2: Consider the following state space. G1 and G2 are goal states:
(a) Is h(n) admissible?
(b) What is the optimal solution path?
(c) What is the cost of this path?
Note: Remember we cannot see the entire state space when we apply the
search algorithm.
14. Branch and Bound Techniques
Branch and bound techniques are used to find an optimal solution. Each solution path
has a cost associated with it. Branch and bound techniques keep track of the best solution
found and its cost and continue searching for a better solution. The entire search space is
usually searched. Each path, and hence each arc (rather than each node), has a cost
associated with it. The assumption made is that the cost increases with path length. Paths
that are estimated to have a higher cost than paths already found are not pursued. These
techniques are used in conjunction with depth first and breadth first searches as well as
iterative deepening.
Consider the tree in Figure 14.1.
In this example d, g, h and f represent goal states.
Figure 15.1: NIM state space
A branch and bound technique applied to this tree will first find the path from a to d,
which has a cost of 4. The technique will then compare the cost of each sub-path to
4 and will not pursue a path if the sub-path has a cost exceeding 4. Both the sub-paths
a to e and a to c have a cost exceeding 4, so these sub-paths are not pursued any
further.
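The pruning just described can be sketched as a depth-first search that carries the cheapest cost found so far. The tree and arc costs below are assumptions chosen so that a-d costs 4 and the a-c and a-e sub-paths are pruned:

```python
def branch_and_bound(node, goals, edges, cost=0, best=float("inf")):
    """Depth-first search that abandons any path already costing >= best."""
    if cost >= best:
        return best                  # bound: this path cannot improve
    if node in goals:
        return cost                  # new cheapest solution found
    for child, arc_cost in edges.get(node, []):
        best = branch_and_bound(child, goals, edges, cost + arc_cost, best)
    return best

# Assumed arc costs: a-b-d costs 4; the a-c and a-e arcs alone cost 5,
# so once the cost-4 solution is found those branches are cut off
edges = {"a": [("b", 1), ("c", 5), ("e", 5)],
         "b": [("d", 3), ("g", 4)]}
```

With goals {d, g}, the search first completes a-b-d at cost 4; the later a-b-g, a-c and a-e branches all start at or above that bound and are pruned immediately.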
15. Game Playing
One of the earliest areas in artificial intelligence is game playing. In this section we
examine search algorithms that are able to look ahead in order to decide which move
should be made in a two-person zero-sum game, i.e. a game between two players in
which there can be at most one winner. Firstly, we will look at games for which the state
space is small enough for the entire space to be generated. We will then look at
examples for which the entire state space cannot be generated. In the latter case
heuristics will be used to guide the search.
15.1 The Game Nim
The game begins with a pile of seven tokens. Two players can play the game at a time,
and there is one winner and one loser. Each player has a turn to divide the stack of
tokens. A stack can only be divided into two stacks of different size. The game tree for
the game is illustrated below.
If the game tree is small enough we can generate the entire tree and look ahead to
decide which move to make. In a game where there are two players and only one can
win, we want to maximize the score of one player, MAX, and minimize the score of the
other, MIN. The game tree with MAX playing first is depicted in Figure 15.2. The MAX
and MIN labels indicate which player's turn it is to divide the stack of tokens. A 0 at
a leaf node indicates a loss for MAX and a win for MIN, and a 1 represents a win for
MAX and a loss for MIN. Figure 15.3 depicts the game space if MIN plays first.
Figure 15.2: MAX plays first
Figure 15.3: MIN plays first
Figure 15.4: Minimax applied to NIM
15.3 The Minimax Search Algorithm
The players in the game are referred to as MAX and MIN. MAX represents the person
who is trying to win the game and hence maximize his or her score. MIN represents the
opponent who is trying to minimize MAX’s score. The technique assumes that the
opponent has and uses the same knowledge of the state space as MAX. Each level in
the search space contains a label MAX or MIN indicating whose move it is at that
particular point in the game.
This algorithm is used to look ahead and decide which move to make first. If the game
space is small enough the entire space can be generated and leaf nodes can be
allocated a win (1) or loss (0) value. These values can then be propagated back up the
tree to decide which move to make. In propagating the values back up the tree, a MAX
node is given the maximum value of all its children and a MIN node is given the minimum
value of all its children.
Algorithm 1: Minimax Algorithm
Repeat
1. If the limit of search has been reached, compute the static value of the
current position relative to the appropriate player. Report the result.
2. Otherwise, if the level is a minimizing level, use the minimax on the children
of the current position. Report the minimum value of the results.
3. Otherwise, if the level is a maximizing level, use the minimax on the
children of the current position. Report the maximum of the results.
Until the entire tree is traversed
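The algorithm has a compact recursive rendering. The small game tree and its leaf values below are hypothetical; levels alternate between MAX and MIN as the recursion descends:

```python
def minimax(node, tree, values, maximizing):
    """Propagate leaf values up: MAX levels take the max, MIN levels the min."""
    if node in values:                         # leaf: static value
        return values[node]
    child_values = [minimax(c, tree, values, not maximizing)
                    for c in tree[node]]
    return max(child_values) if maximizing else min(child_values)

# Hypothetical two-ply game tree with MAX to move at the root A
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": -5, "G": 10}
```

With MAX to move at A, the MIN nodes B and C return 3 and -5 respectively, so A's minimax value is 3: whatever MIN does under B, MAX can secure at least 3.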
Figure 15.4 illustrates the minimax algorithm applied to the NIM game tree.
Figure 15.5: Minimax applied to a partially generated game tree
The value propagated to the top of the tree is a 1. Thus, no matter which move MIN
makes, MAX still has an option that can lead to a win.
If the game space is large the game tree can be generated to a particular depth or ply.
However, win and loss values cannot be assigned to leaf nodes at the cut-off depth as
these are not final nodes. A heuristic value is used instead. The heuristic values are
an estimate of how promising the node is in leading to a win for MAX. The depth to
which a tree is searched is dependent on the time and computer resource limitations.
One of the problems associated with generating a state space to a certain depth is the
horizon effect. This refers to the problem of a look-ahead to a certain depth not
detecting a promising path and instead leading you to a worse situation. Figure 15.5
illustrates the minimax algorithm applied to a game tree that has been generated to a
particular depth. The integer values at the cutoff depth have been calculated using a
heuristic function.
The function used to calculate the heuristic values at the cutoff depth is problem
dependent and differs from one game to the next. For example, a heuristic function that
can be used for the game tic-tac-toe is: h(n) = x(n) - o(n), where x(n) is the total of MAX's
possible winning lines (we assume MAX is playing x); o(n) is the total of the opponent's,
i.e. MIN's, possible winning lines; and h(n) is the total evaluation for a state n. The
following examples illustrate the calculation of h(n):
Example 1:
[board diagram not reproduced]
X has 6 possible winning paths
O has 5 possible winning paths
h(n) = 6 - 5 = 1
Example 2:
[board diagram not reproduced]
X has 4 possible winning paths
O has 6 possible winning paths
h(n) = 4 - 6 = -2
Example 3:
[board diagram not reproduced]
X has 5 possible winning paths
O has 4 possible winning paths
h(n) = 5 - 4 = 1
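The line-counting heuristic can be computed by checking, for each of the eight winning lines, whether it is still open for a player. The board placements below are hypothetical, chosen to reproduce the 6 - 5 = 1 count of Example 1:

```python
# The eight winning lines of tic-tac-toe: rows, columns, diagonals
LINES = [[(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)],
         [(2, 0), (2, 1), (2, 2)], [(0, 0), (1, 0), (2, 0)],
         [(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)],
         [(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)]]

def open_lines(board, player):
    """A line is still a possible win if the opponent has no mark on it."""
    opponent = "O" if player == "X" else "X"
    return sum(1 for line in LINES
               if all(board[i][j] != opponent for i, j in line))

def h(board):
    """h(n) = x(n) - o(n): MAX's open lines minus MIN's open lines."""
    return open_lines(board, "X") - open_lines(board, "O")

# Hypothetical placements: X in a corner, O on an edge square off X's lines
board = [["X", "", ""],
         ["", "", "O"],
         ["", "", ""]]
```

Here O's mark blocks two of the eight lines (its row and column), leaving X with 6; X's corner blocks three lines (row, column and a diagonal), leaving O with 5, so h = 1.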
Exercise: Write a program that takes a file as input. The file will contain nodes
for a game tree together with the heuristic values for the leaf nodes. For
example, for the following tree:
the file should contain:
S: ABC
A: DEF
B: GH
C: IJ
D: 100
E: 3
F: -1
G: 6
H: 5
I: 2
J: 9
The program must implement the minimax algorithm. The program must output
the minimax value and the move that MAX must make. You may assume that
MAX makes the first move.
Figure 15.6: Game Tree
15.4 Alpha-Beta Pruning
The main disadvantage of the minimax algorithm is that all the nodes in the game tree
cut off to a certain depth are examined. Alpha-beta pruning helps reduce the number
of nodes explored. Consider the game tree in Figure 15.6.
The children of B have static values of 3 and 5 and B is at a MIN level, thus the value
returned to B is 3. If we look at F, it has a value of -5. Thus, the value returned to C will
be at most -5. However, A is at a MAX level, thus the value at A will be at least 3,
which is greater than -5. Thus, no matter what the value of G is, A will have a value of
3. Therefore, we do not need to explore and evaluate G as its value will have no
effect on the value returned to A. This is called an alpha cut. Similarly, we can get a
beta cut if the value of a subtree will always be bigger than the value returned by its
sibling to a node at a minimizing level.
Algorithm 2: Minimax Algorithm with Alpha-Beta Pruning
Set alpha to -infinity and set beta to infinity.
If the node is a leaf node, return its value.
If the node is a min node then
    For each of the children apply the minimax algorithm with alpha-beta pruning.
    If the value returned by a child is less than beta, set beta to this value.
    If at any stage beta is less than or equal to alpha, do not examine any more
    children.
    Return the value of beta.
If the node is a max node then
    For each of the children apply the minimax algorithm with alpha-beta pruning.
    If the value returned by a child is greater than alpha, set alpha to this value.
    If at any stage alpha is greater than or equal to beta, do not examine any more
    children.
    Return the value of alpha.
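The algorithm can be sketched recursively. The `evaluated` list records which leaves are actually examined, making the alpha cut visible; the tree mirrors the discussion above (B's children worth 3 and 5, F worth -5), with G's value an arbitrary assumption since it is never looked at:

```python
def alphabeta(node, tree, values, maximizing, evaluated,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning; evaluated leaves are recorded."""
    if node in values:                         # leaf: static value
        evaluated.append(node)
        return values[node]
    if maximizing:
        for c in tree[node]:
            alpha = max(alpha, alphabeta(c, tree, values, False,
                                         evaluated, alpha, beta))
            if alpha >= beta:
                break                          # beta cut
        return alpha
    else:
        for c in tree[node]:
            beta = min(beta, alphabeta(c, tree, values, True,
                                       evaluated, alpha, beta))
            if beta <= alpha:
                break                          # alpha cut
        return beta

# B's children are worth 3 and 5, F is worth -5; G's value (10) is assumed
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
values = {"D": 3, "E": 5, "F": -5, "G": 10}
evaluated = []
result = alphabeta("A", tree, values, True, evaluated)
```

Running this returns 3 for A and shows that only D, E and F were evaluated: after F returns -5, beta at C drops to -5, which is at most alpha (3), so G is cut exactly as the prose describes.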