1. The document discusses defining problems as state space search, which involves representing a problem as a graph whose nodes are states and whose edges are operators that transition between states.
2. It provides examples of representing chess and the water jug problem as state space searches, defining the initial states, goal states, and production rules for the possible state transitions.
3. Search algorithms like breadth-first search and depth-first search are described for systematically exploring the state space to find a solution path from start to goal.
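The water jug formulation mentioned above can be made concrete with a short breadth-first search over (a, b) states; the 4- and 3-litre capacities and the 2-litre goal are the usual textbook assumptions, not fixed by the document:

```python
from collections import deque

def water_jug_bfs(cap_a=4, cap_b=3, goal=2):
    """BFS over (a, b) jug states; returns a path of states that
    reaches `goal` litres in either jug, or None if unreachable."""
    start = (0, 0)
    parent = {start: None}       # also serves as the visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal or b == goal:
            # Reconstruct the solution path back to the start state.
            path, s = [], (a, b)
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)   # amount pourable from A into B
        pour_ba = min(b, cap_a - a)   # amount pourable from B into A
        successors = [
            (cap_a, b), (a, cap_b),             # fill A, fill B
            (0, b), (a, 0),                     # empty A, empty B
            (a - pour_ab, b + pour_ab),         # pour A into B
            (a + pour_ba, b - pour_ba),         # pour B into A
        ]
        for s in successors:
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None
```

Because BFS explores states level by level, the first goal state found lies on a shortest operator sequence.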
The document discusses state space search problems and techniques for solving them. It defines state space search as a process of considering successive configurations or states of a problem instance to find a goal state. Various search techniques like breadth-first search, depth-first search, and heuristic search are described. It also discusses problem characteristics that help determine the most appropriate search method, such as whether a problem can be decomposed or solution steps ignored/undone. Examples of search problems like the 8-puzzle, chess, and water jug problems are provided to illustrate state space formulation and solutions.
This document discusses various heuristic search techniques, including generate-and-test, hill climbing, best first search, and simulated annealing. Generate-and-test involves generating possible solutions and testing them until a solution is found. Hill climbing iteratively improves the current state by moving in the direction of increased heuristic value until no better state can be found or a goal is reached. Best first search expands the most promising node first based on heuristic evaluation. Simulated annealing is based on hill climbing but allows moves to worse states probabilistically to escape local maxima.
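The contrast between plain hill climbing and simulated annealing can be sketched in a few lines; the temperature, cooling rate, and toy objective below are illustrative assumptions, not values from the document:

```python
import math
import random

def simulated_annealing(value, neighbor, start, temp=10.0, cooling=0.95, steps=500):
    """Hill climbing that sometimes accepts a worse move: a downhill
    candidate is taken with probability exp(delta / T), so the hot
    early phase can escape local maxima; as T cools the behaviour
    approaches ordinary hill climbing. `value` is maximized."""
    current = start
    for _ in range(steps):
        candidate = neighbor(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling  # cooling schedule
    return current

random.seed(0)  # make the stochastic demo repeatable
best = simulated_annealing(lambda x: -x * x,              # maximize -x^2
                           lambda x: x + random.choice([-0.5, 0.5]),
                           start=5.0)
```

With the single-peak objective above the search settles near x = 0; the probabilistic acceptance only matters on landscapes with several peaks.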
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
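A standard illustration of this abandon-early behaviour (not taken from the document) is the N-queens problem, where a partial placement is dropped as soon as two queens attack each other:

```python
def n_queens(n):
    """Backtracking: extend a partial placement one row at a time and
    abandon it as soon as a queen conflicts -- the 'cannot possibly be
    completed' test. Returns all solutions as lists of column indices."""
    solutions = []

    def safe(cols, col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def extend(cols):
        if len(cols) == n:
            solutions.append(cols[:])
            return
        for col in range(n):
            if safe(cols, col):      # prune: only consistent extensions
                cols.append(col)
                extend(cols)
                cols.pop()           # backtrack
    extend([])
    return solutions
```

The pruning in `safe` is what distinguishes backtracking from brute-force enumeration of all n^n placements.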
1. Planning involves finding a sequence of actions that achieves a goal starting from an initial state. It uses a set of operators that define the possible actions and their effects.
2. A plan is a sequence of operator instances that transforms the initial state into a goal state. Classical planning assumes fully observable, deterministic environments.
3. Planning problems can be represented using a logical language that describes states, goals, actions and their preconditions and effects. This representation allows planning algorithms to operate over problems.
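A minimal sketch of such a logical representation, in the STRIPS style of preconditions, add lists, and delete lists; the blocks-world facts below are hypothetical examples:

```python
from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    """A STRIPS-style operator: applicable when `pre` holds; applying it
    removes `delete` facts and asserts `add` facts."""
    name: str
    pre: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

def applicable(state, action):
    return action.pre <= state        # all preconditions hold

def apply_action(state, action):
    assert applicable(state, action)
    return (state - action.delete) | action.add

# Hypothetical blocks-world operator: put block A down on the table.
putdown_a = Action(
    name="putdown(A)",
    pre=frozenset({"holding(A)"}),
    add=frozenset({"ontable(A)", "clear(A)", "handempty"}),
    delete=frozenset({"holding(A)"}),
)

state = frozenset({"holding(A)", "clear(B)"})
goal = {"ontable(A)"}
new_state = apply_action(state, putdown_a)
```

A plan is then just a sequence of such actions whose composed effects carry the initial state into a state satisfying the goal.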
Search techniques in AI. Uninformed search: Breadth First Search and Depth First Search. Informed search strategies: A*, Best First Search, and Constraint Satisfaction Problems: cryptarithmetic
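As a rough sketch of how a cryptarithmetic puzzle reduces to constraint satisfaction, the small puzzle TWO + TWO = FOUR (an illustrative choice, simpler than the usual SEND + MORE = MONEY) can be brute-forced over digit assignments:

```python
from itertools import permutations

def solve_two_two_four():
    """Assign distinct digits to the letters of TWO + TWO = FOUR so the
    arithmetic holds, with no leading zeros. Brute force over all
    injective digit assignments (a baseline; real CSP solvers prune)."""
    letters = "TWOFUR"
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["T"] == 0 or a["F"] == 0:
            continue  # constraint: leading digits are nonzero
        two = 100 * a["T"] + 10 * a["W"] + a["O"]
        four = 1000 * a["F"] + 100 * a["O"] + 10 * a["U"] + a["R"]
        if two + two == four:   # constraint: the column arithmetic holds
            return a
    return None

sol = solve_two_two_four()
```

Backtracking with constraint propagation (e.g. carry analysis per column) solves the same problem while visiting far fewer assignments.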
Knowledge representation and Predicate logic (Amey Kerkar)
1. The document discusses knowledge representation and predicate logic.
2. It explains that knowledge representation involves representing facts through internal representations that can then be manipulated to derive new knowledge. Predicate logic allows representing objects and relationships between them using predicates, quantifiers, and logical connectives.
3. Several examples are provided to demonstrate representing simple facts about individuals as predicates and using quantifiers like "forall" and "there exists" to represent generalized statements.
This document discusses various heuristic search algorithms including generate-and-test, hill climbing, best-first search, problem reduction, and constraint satisfaction. Generate-and-test involves generating possible solutions and testing if they are correct. Hill climbing involves moving in the direction that improves the state based on a heuristic evaluation function. Best-first search evaluates nodes and expands the most promising node first. Problem reduction breaks problems into subproblems. Constraint satisfaction views problems as sets of constraints and aims to constrain the problem space as much as possible.
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists
Hill climbing algorithm in artificial intelligence (sandeep54552)
The hill climbing algorithm is a local search technique used to find an optimal solution to a problem. It works by starting with an initial solution and iteratively moving to a neighboring solution with improved value until no better solution can be found. Simple hill climbing considers one neighbor at a time, while steepest ascent examines all neighbors and chooses the one with the best value. The algorithm can get stuck at local optima rather than finding the global optimum. Techniques like simulated annealing incorporate randomness to help escape local optima.
Hill Climbing Algorithm in Artificial Intelligence (Bharat Bhushan)
Hill Climbing Algorithm in Artificial Intelligence
The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing elevation or value to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it reaches a peak where no neighbor has a higher value.
Hill climbing is a technique used for optimizing mathematical problems. One widely discussed example is the Traveling Salesman Problem, in which we need to minimize the distance traveled by the salesman.
It is also called greedy local search, as it only looks at its immediate neighbor states and not beyond them.
A node in the hill climbing algorithm has two components: state and value.
Hill climbing is mostly used when a good heuristic is available.
The algorithm does not need to maintain a search tree or graph, as it keeps only a single current state.
Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:
Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback that helps decide which direction to move in the search space.
Greedy approach: The search moves in the direction that optimizes the cost.
No backtracking: It does not backtrack through the search space, as it does not remember previous states.
State-space Diagram for Hill Climbing:
The state-space landscape is a graphical representation of the hill climbing algorithm, plotting the algorithm's states against the objective function or cost.
The y-axis shows the function, which can be an objective function or a cost function, and the x-axis shows the state space. If the y-axis shows cost, the goal of the search is to find the global minimum; if it shows an objective function, the goal is to find the global maximum.
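The landscape picture above can be turned into a few lines of steepest-ascent hill climbing; the integer states and objective values below are an assumed toy landscape with a local maximum at x=2 and the global maximum at x=8:

```python
def steepest_ascent(value, neighbors, start):
    """Steepest-ascent hill climbing: examine all neighbors of the
    current state and move to the best one; stop when no neighbor
    improves the objective (a peak, possibly only a local maximum)."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current   # no uphill move remains
        current = best

# Toy landscape: states 0..9 with a local peak at x=2 (value 5)
# and the global peak at x=8 (value 9).
f = dict(enumerate([0, 3, 5, 4, 2, 1, 4, 7, 9, 6]))
step = lambda x: [n for n in (x - 1, x + 1) if n in f]

local = steepest_ascent(f.get, step, 0)   # climbs to x=2, a local maximum
best = steepest_ascent(f.get, step, 6)    # climbs to x=8, the global maximum
```

Starting position decides which peak is reached, which is exactly the local-optimum weakness the surrounding text describes.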
This document discusses production systems and different search strategies used in artificial intelligence. A production system consists of rules, knowledge databases, a control strategy, and a rule applier. Control strategies must cause motion and be systematic. Breadth-first search explores all neighbors of the initial node before moving to the next level, while depth-first search explores as far as possible along each branch before backtracking. Heuristic search uses heuristics or rules of thumb to guide the search towards the most promising paths.
The document provides an overview of constraint satisfaction problems (CSPs). It defines a CSP as consisting of variables with domains of possible values, and constraints specifying allowed value combinations. CSPs can represent many problems using variables and constraints rather than explicit state representations. Backtracking search is commonly used to solve CSPs by trying value assignments and backtracking when constraints are violated.
2. Forward chaining and backward chaining (monircse2)
This document provides an overview of forward chaining and backward chaining in artificial intelligence. Forward chaining applies inference rules to existing data to derive new facts until reaching a goal. It proceeds from known facts to a conclusion. Backward chaining starts from a goal and moves backward to determine what facts must be true to satisfy the goal. It uses modus ponens inference to break the goal into subgoals until reaching initial conditions. Examples are given to illustrate the difference between the two approaches.
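A minimal forward-chaining loop, with a hypothetical two-rule knowledge base, might look like this:

```python
def forward_chain(facts, rules):
    """Data-driven inference: repeatedly fire any rule whose premises
    are all known, adding its conclusion (modus ponens), until nothing
    new can be derived. Rules are (premises, conclusion) pairs."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

# Hypothetical rule base for illustration.
rules = [
    ({"croaks", "eats flies"}, "frog"),
    ({"frog"}, "green"),
]
derived = forward_chain({"croaks", "eats flies"}, rules)
```

Backward chaining would instead start from the query ("green?") and recurse on the premises of any rule concluding it, stopping when it bottoms out in known facts.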
Production systems provide a structure for modeling problem solving as a search process. A production system consists of rules, knowledge databases, a control strategy, and a rule applier. The rules take the form of condition-action pairs. The control strategy determines the order of rule application and resolves conflicts. Production systems can be classified based on whether rule application is monotonic or non-monotonic. They provide modularity and a natural representation but can suffer from opacity, inefficiency, and lack of learning abilities. Choosing the right production system depends on characteristics of the problem such as decomposability and predictability.
The document describes logical agents and knowledge representation. It contains the following key points:
- Logical agents use knowledge representation and reasoning to solve problems and generate new knowledge. This enables intelligent behavior in partially observable environments.
- A knowledge-based agent's central component is its knowledge base, which contains sentences in a formal language that can be queried or added to.
- Wumpus World is described as an example environment, where the agent must navigate, avoid dangers, and find gold using limited sensory information and logical reasoning.
- Propositional and predicate logic are introduced as knowledge representation languages. Forward and backward chaining are also described as techniques for logical inference.
Forward chaining is a data-driven reasoning method that applies rules to existing facts to deduce new facts, adding them to the knowledge base. It starts with known facts and uses inference rules to reach a goal or conclusion. Backward chaining is a goal-driven method that starts with a desired goal and works backwards to see if existing facts and rules can support reaching that goal. Both methods have tradeoffs in efficiency depending on whether the starting point is facts or a specific goal.
This document discusses several search strategies including uninformed search, breadth-first search, depth-first search, uniform cost search, iterative deepening search, and bi-directional search. It provides algorithms and examples to explain how each strategy works. Key points include: breadth-first search visits nodes level by level; depth-first search follows each branch to its greatest depth before backtracking; uniform cost search expands the lowest-cost node; and iterative deepening search avoids infinite depth by searching with an iteratively increasing depth limit.
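Iterative deepening can be sketched as a depth-limited DFS restarted with a growing limit; the binary-tree successor function below is an assumed toy state space, not one from the document:

```python
def iterative_deepening(start, goal, successors, max_depth=20):
    """Depth-first search restarted with an increasing depth limit,
    combining DFS's small memory footprint with BFS's guarantee of
    finding a shallowest goal."""
    def dls(node, depth):
        if node == goal:
            return [node]
        if depth == 0:
            return None          # depth limit reached: cut off
        for child in successors(node):
            path = dls(child, depth - 1)
            if path is not None:
                return [node] + path
        return None

    for limit in range(max_depth + 1):
        path = dls(start, limit)
        if path is not None:
            return path
    return None

# Hypothetical infinite binary tree: node n has children 2n and 2n+1.
children = lambda n: [2 * n, 2 * n + 1]
```

On this tree the state space is infinite, yet the depth limit keeps every restart finite; plain DFS would descend forever down the leftmost branch.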
Hill climbing is a heuristic search algorithm used to find optimal solutions to mathematical problems. It works by starting with an initial solution and iteratively moving to a neighboring solution that improves the value of an objective function until a local optimum is reached. However, hill climbing may not find the global optimum solution and can get stuck in local optima. Variants include simple hill climbing, steepest ascent hill climbing, and stochastic hill climbing.
Intro to AI STRIPS Planning & Applications in Video-games, Lecture 6 Part 1 (Stavros Vassos)
This short course aims to provide an introduction to the techniques currently used for the decision making of non-player characters (NPCs) in commercial video games, and to show how a simple deliberation technique from academic artificial intelligence research can be employed to advance the state of the art.
For more information and to download the supplementary material, please use the following links:
http://stavros.lostre.org/2012/05/19/video-games-sapienza-roma-2012/
http://tinyurl.com/AI-NPC-LaSapienza
A situated planning agent treats planning and acting as a single process rather than separate processes. It uses conditional planning to construct plans that account for possible contingencies by including sensing actions. The agent resolves any flaws in the conditional plan before executing actions when their conditions are met. When facing uncertainty, the agent must have preferences between outcomes to make decisions using utility theory and represent probabilities using a joint probability distribution over variables in the domain.
This document provides an overview of data mining techniques discussed in Chapter 3, including parametric and nonparametric models, statistical perspectives on point estimation and error measurement, Bayes' theorem, decision trees, neural networks, genetic algorithms, and similarity measures. Nonparametric techniques like neural networks, decision trees, and genetic algorithms are particularly suitable for data mining applications involving large, dynamically changing datasets.
The document discusses production systems and search techniques used to solve problems modeled as state spaces. A production system uses production rules to transform an initial state into a goal state. It consists of a knowledge base of rules, a rule applicator that matches rules to states, and a control strategy to determine the order of rule application. Control strategies can be uninformed/blind searches that explore all states or informed searches that use heuristics to guide the search. Examples of problems solved using this approach include the 8-puzzle, tic-tac-toe, and the water jug problem.
I. Hill climbing algorithm II. Steepest hill climbing algorithm (vikas dhakane)
This document summarizes key topics from a session on problem solving by search algorithms in artificial intelligence. It discusses uninformed search strategies like breadth-first search and depth-first search. It also covers informed, heuristic search strategies such as greedy best-first search and A* search which use heuristic functions to estimate distance to the goal. Examples are provided to illustrate best first search, and it describes how this algorithm expands nodes and uses priority queues to order nodes by estimated cost. The next session is slated to cover the A* search algorithm in more detail.
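The priority-queue mechanics of greedy best-first search described above can be sketched as follows; the grid world and Manhattan-distance heuristic are illustrative assumptions:

```python
import heapq

def best_first_search(start, goal, successors, h):
    """Greedy best-first search: a priority queue ordered by h(n)
    always expands the node that looks closest to the goal. Returns
    the path found (fast, but not guaranteed optimal)."""
    frontier = [(h(start), start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # most promising node
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            if child not in visited:
                heapq.heappush(frontier, (h(child), child, path + [child]))
    return None

# Hypothetical 5x5 grid: states are (x, y), moves are 4-neighbours,
# heuristic is Manhattan distance to the goal corner.
goal = (4, 4)
moves = lambda s: [(s[0] + dx, s[1] + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= s[0] + dx < 5 and 0 <= s[1] + dy < 5]
manhattan = lambda s: abs(s[0] - goal[0]) + abs(s[1] - goal[1])
path = best_first_search((0, 0), goal, moves, manhattan)
```

A* uses the same queue but orders it by g(n) + h(n), the cost so far plus the heuristic, which is what restores optimality under an admissible h.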
The document discusses knowledge representation in cognitive science and artificial intelligence. It describes several ways of representing knowledge, including predicate logic, semantic networks, frames, and conceptual dependency networks. Semantic networks represent knowledge through interconnected nodes and labeled arcs, allowing for inheritance of properties up hierarchical structures. They provide an intuitive way to represent taxonomically structured knowledge but have limitations representing logical statements.
The document discusses various aspects of problem solving and production systems including:
- Problem characteristics like decomposability and recoverability impact the appropriate problem solving approach.
- Production systems consist of rules, databases, and a control strategy to apply rules.
- Well-designed heuristics can efficiently guide search toward solutions without exploring all possibilities.
- Different problem types like classification and design are suited to different control strategies like proposing and refining solutions.
This document discusses problem solving as a state space search. It covers defining the problem as a state space, production systems, search space control strategies, heuristic search techniques like best-first search and branch-and-bound search, problem reduction, constraint satisfaction, and means-ends analysis. It uses chess and a water jug problem to illustrate representing problems as state spaces and defining the rules and operators to solve them through searching the problem space.
- Different problem types like classification and design are suited to different control strategies like proposing and refining solutions.
This document discusses problem solving as a state space search. It covers defining the problem as a state space, production systems, search space control strategies, heuristic search techniques like best-first search and branch-and-bound search, problem reduction, constraint satisfaction, and means-ends analysis. It uses chess and a water jug problem to illustrate representing problems as state spaces and defining the rules and operators to solve them through searching the problem space.
This document discusses state space search techniques and heuristic search algorithms. It begins by defining problems as state space searches and describing how to represent problems as state spaces. It then discusses uninformed search techniques like breadth-first search and depth-first search. Next, it covers heuristic search techniques and algorithms like hill climbing, best-first search, and A*. It provides examples to illustrate hill climbing and discusses issues that can arise like local maxima. Finally, it summarizes generate-and-test and steepest-ascent hill climbing algorithms. In summary, the document outlines different search strategies and algorithms that can be used to solve problems modeled as state space searches.
The document discusses state space search and problem solving techniques in artificial intelligence. It defines problems as state spaces and describes representing problems formally using states, initial states, goal states, and rules defining operators or actions. It provides examples of representing the chess and water jug problems as state space searches. Key techniques discussed include production systems, heuristic search, and representing domain knowledge in rules.
The document discusses various concepts related to defining and solving problems using state space representation and search algorithms. It defines a problem as any task or goal that can be represented precisely using an initial state, goal state, and applicable rules. Search algorithms like breadth-first search and depth-first search are described for systematically exploring the problem space. Heuristic search methods are discussed which use domain knowledge to guide the search.
The document discusses problem solving techniques in artificial intelligence. It covers defining problems in a state space, using production systems to represent search spaces, and different search algorithms like breadth-first, depth-first, and heuristic search. It also discusses characteristics of problems that influence which techniques are best suited, such as decomposability, predictability, and interaction requirements.
The document discusses various concepts related to defining and solving problems using state space and search algorithms. It defines a problem as any task or goal that can be represented precisely using a state space with initial and goal states. Production systems use rules to search this space and find solutions. Common search algorithms like breadth-first, depth-first, and heuristic search are described. Characteristics of problems like decomposability, predictability, and interaction help determine the best approach.
This document provides a summary of Lecture 3 on problem-solving by searching. It describes how problem-solving agents can formulate goals and problems, represent the problem as a state space, and find solutions using search algorithms like breadth-first search, uniform-cost search, depth-first search, and iterative deepening search. Examples of search problems discussed include the Romania pathfinding problem, vacuum world, and the 8-puzzle.
The document discusses various AI problem solving techniques. It begins by defining problems in AI as having a start state, goal state, and set of rules to transform between states. Common problems discussed include the water jug problem, tic-tac-toe, 8-puzzle, 8-queens, chess, and the missionaries and cannibals problem. It also discusses representing problems formally and using production systems with rules to systematically search the state space from start to goal states.
This document discusses rule-based systems and different approaches to reasoning with rules, including:
- Procedural vs declarative knowledge representations.
- Forward and backward reasoning approaches. Forward reasoning works from the initial states while backward reasoning works from the goal states.
- Backward-chaining rule systems like PROLOG are good for goal-directed problem solving. Forward-chaining rule systems match rules against the current state and add assertions to the state.
The document discusses various concepts related to state-space search problems and algorithms. It begins by introducing state-space representation and search trees, then describes concepts like search paths, costs, and strategies. It contrasts uninformed searches like breadth-first search which expand nodes by depth, with informed searches like A* that use heuristics. Breadth-first search is discussed in more detail, including that it expands the shallowest nodes first and adds generated states to the back of the queue.
The document discusses automatic problem solving, which involves generating sequences of actions (plans) to achieve desired goals by transforming one world model into another. It describes the basic capabilities needed for automatic problem solving systems, including managing state description models, deductive machinery to reason about states, and action models to describe how the world changes with actions. It also outlines common control strategies for plan generation, such as means-ends analysis and backtracking, and notes tactics can improve the efficiency of these strategies, such as hierarchical planning.
The document discusses problem solving and the problem solving cycle. It describes the problem solving cycle as having 7 steps: problem identification, definition and representation, strategy formulation, organization of information, resource allocation, monitoring, and evaluation. It also discusses different types of problems, distinguishing between well-structured and ill-structured problems. Finally, it provides an example of solving the water jug problem using a state space representation and production rules to systematically search for a solution.
The document provides an overview of problem spaces and problem solving through searching techniques used in artificial intelligence. It defines a problem space as a set of states and connections between states to represent a problem. Search strategies for finding solutions include breadth-first search, depth-first search, and heuristic search. Real-world problems discussed that can be solved through searching include route finding, layout problems, task scheduling, and the water jug problem is presented as a toy problem example.
This document contains answers to assignment questions on operations research. It defines operations research and describes types of operations research models including physical and mathematical models. It also outlines the phases of operations research including the judgment, research, and action phases. Additionally, it provides explanations and examples of linear programming problems and their graphical solution method, as well as addressing how to solve degeneracies in transportation problems and explaining the MODI optimality test procedure.
This document provides an overview of problem solving in artificial intelligence. It discusses problem solving as a systematic search to reach a goal or solution. It describes four general steps in problem solving: goal formulation, problem formulation, search, and execution. It also discusses state space representation and uses the vacuum world problem as an example. Constraint satisfaction problems are characterized as having variables, domains, and constraints. The water jug problem is presented as an example problem that can be solved using a production system approach.
This document discusses problem solving techniques in artificial intelligence, including state-space search and control strategies. It provides examples of representing problems as state spaces with start and goal states, such as the 8-puzzle and missionaries and cannibals problem. Search algorithms like breadth-first search are introduced for systematically exploring state spaces to find solutions. Production systems are also presented as a way to structure problem solving using rules and state transitions.
This document discusses search algorithms and problem solving through searching. It begins by defining search problems and representing them using graphs with states as nodes and actions as edges. It then covers uninformed search strategies like breadth-first and depth-first search. Informed search strategies use heuristics to guide the search toward more promising areas of the problem space. Examples of single agent pathfinding problems are given like the traveling salesman problem and Rubik's cube.
Algorithm and flowchart with pseudo codehamza javed
1. Initialize the biggest price to the first price in the list
2. Loop through the remaining 99 prices
3. Compare each price to the biggest and update biggest if greater
4. After the loop, reduce the biggest price by 10%
5. Output the reduced biggest price
Similar to 2.Problems Problem Spaces and Search.ppt (20)
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will come present about related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs.This meetup was formerly Milvus Meetup, and is sponsored by Zilliz maintainers of Milvus.
Predictably Improve Your B2B Tech Company's Performance by Leveraging DataKiwi Creative
Harness the power of AI-backed reports, benchmarking and data analysis to predict trends and detect anomalies in your marketing efforts.
Peter Caputa, CEO at Databox, reveals how you can discover the strategies and tools to increase your growth rate (and margins!).
From metrics to track to data habits to pick up, enhance your reporting for powerful insights to improve your B2B tech company's marketing.
- - -
This is the webinar recording from the June 2024 HubSpot User Group (HUG) for B2B Technology USA.
Watch the video recording at https://youtu.be/5vjwGfPN9lw
Sign up for future HUG events at https://events.hubspot.com/b2b-technology-usa/
Global Situational Awareness of A.I. and where its headedvikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be un-leashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
2. Problems, Problem Spaces and Search:
To build an AI system to solve a particular problem, we need to do the following four things:
1. Define the problem precisely.
2. Analyse the problem.
3. Isolate and represent the task knowledge that is necessary to solve the problem.
4. Choose the best problem-solving technique and apply it to the particular problem.
The state space representation forms the basis of most AI methods.
3. Defining the problem as State Space Search
Its structure corresponds to the structure of problem solving in two important ways:
It allows for a formal definition of a problem as the need to convert some given situation into some desired situation using a set of permissible operations.
It permits us to define the process of solving a particular problem as a combination of known techniques (each represented as a rule defining a single step in the space) and search, the general technique of exploring the space to try to find some path from the current state to a goal state.
Search is a very important process in the solution of hard problems for which no more direct techniques are available.
4. Defining the problem as State Space Search
Example: Playing Chess
To build a program that could "play chess", we would first have to specify the starting position of the chess board, the rules that define the legal moves, and the board positions that represent a win for one side or the other.
In addition, we must make explicit the previously implicit goal of not only playing a legal game of chess but also winning the game, if possible.
The starting position can be described as an 8x8 array where each position contains a symbol for the appropriate piece.
We can define our goal as any board position in which the opponent is checkmated.
5. Defining the problem as State Space Search
The legal moves provide the way of getting from the initial state to a goal state.
They can be described easily as a set of rules consisting of two parts:
A left side that serves as a pattern to be matched against the current board position.
A right side that describes the change to be made to reflect the move.
However, writing one rule per board position leads to an enormous number of rules: there are roughly 10^120 board positions!
6. Defining the problem as State Space Search
Using so many rules poses problems such as:
No person could ever supply a complete set of such rules.
No program could easily handle all those rules; just storing so many rules poses serious difficulties.
In order to minimize such problems, we need to write the rules describing the legal moves in as general a way as possible.
7. Defining the problem as State Space Search
The more generally we can describe the rules we need, the less work we will have to do to provide them, and the more efficient the program will be.
8. Water Jug Problem
The state space for this problem can be described as the set of ordered pairs of integers (x, y) such that x = 0, 1, 2, 3, or 4 and y = 0, 1, 2, or 3, where x represents the number of gallons of water in the 4-gallon jug and y represents the number of gallons in the 3-gallon jug.
The problem: how can we get exactly two gallons of water into the 4-gallon jug?
The start state is (0, 0) and the goal state is (2, n), where n may be any value from 0 to 3, since the 3-gallon jug can hold at most three gallons.
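As a sketch of how this state space can be searched mechanically, the following code applies the standard fill, empty, and pour operations (the production rules for this problem) and uses breadth-first search over (x, y) pairs to find a shortest sequence of states reaching (2, n). The function names and rule encoding are illustrative assumptions, not taken from the slides.

```python
from collections import deque

def successors(state):
    """Apply the standard water jug rules to (x, y):
    fill either jug, empty either jug, or pour one jug into the other."""
    x, y = state
    candidates = [
        (4, y),                              # fill the 4-gallon jug
        (x, 3),                              # fill the 3-gallon jug
        (0, y),                              # empty the 4-gallon jug
        (x, 0),                              # empty the 3-gallon jug
        (min(4, x + y), max(0, x + y - 4)),  # pour the 3-gallon jug into the 4-gallon jug
        (max(0, x + y - 3), min(3, x + y)),  # pour the 4-gallon jug into the 3-gallon jug
    ]
    return [s for s in candidates if s != state]

def solve(start=(0, 0)):
    """Breadth-first search for any state (2, n): two gallons in the 4-gallon jug."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == 2:                 # goal test: x == 2
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Because the search proceeds level by level, the first goal state found lies at the end of a shortest sequence of rule applications.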
11. To solve the water jug problem
We require a control structure that loops through a simple cycle in which some rule whose left side matches the current state is chosen, the appropriate change to the state is made as described in the corresponding right side, and the resulting state is checked to see if it corresponds to a goal state.
The desire to find the shortest such sequence of rule applications influences the choice of an appropriate mechanism to guide the search for a solution.
12. Summarizing the formal description of the problem
1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space which describe possible situations from which the problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal states).
4. Specify a set of rules that describe the actions (operations) available.
13. Production Systems
A production system is a model of computation that provides pattern-directed control of a problem-solving process and consists of a set of production rules, a working memory, and a recognize-act control cycle.
All of these systems provide the overall architecture of a production system and allow the programmer to write rules that define particular problems to be solved.
14. Production Systems
A production system consists of:
A set of rules, each consisting of a left side that determines the applicability of the rule and a right side that describes the operation to be performed if that rule is applied.
One or more knowledge bases/databases (KB) that contain whatever information is appropriate for the particular task. Some parts of the database may be permanent, while other parts of it may pertain only to the solution of the current problem.
A control strategy that specifies the order in which the rules will be compared to the database and a way of resolving the conflicts that arise when several rules match at once.
A rule applier, which checks the current state against the LHS of the rules in the KB and finds an appropriate rule to apply.
15. Production Systems
One can often represent the expertise that someone uses to do an expert task as rules.
A rule is a structure with an if component and a then component.
The statement, or set of statements, after the word if represents some pattern which you may observe.
The statement, or set of statements, after the word then represents some conclusion that you can draw, or some action that you should take.
IF some condition(s) exist THEN perform some action(s)
• IF-THEN
• Test-Action
16. Production Systems
Therefore a production system can be described as a set of production rules, together with software that can reason with them.
Production systems are sometimes known as rule-based expert systems.
A rule-based system consists of:
• a set of IF-THEN rules
• a set of facts
• an interpreter controlling the application of the rules, given the facts.
17. Production Systems
In order to solve a problem:
We must reduce it to one for which a precise statement can be given. This can be done by defining the problem's state space (start and goal states) and a set of operators for moving through that space.
The problem can then be solved by searching for a path through the state space from an initial state to a goal state.
The process of solving the problem can usefully be modelled as a production system.
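As an illustration of the rules/facts/interpreter structure, here is a minimal recognize-act loop. The rule format (a condition set paired with a conclusion set) and the first-match conflict-resolution policy are simplifying assumptions, and the frog rule base is a hypothetical example.

```python
def run_production_system(rules, working_memory, goal):
    """Recognize-act cycle: match each rule's left side against working
    memory, pick one applicable rule, and apply its right side."""
    while goal not in working_memory:
        # recognize: rules whose LHS matches and whose RHS would add something new
        applicable = [(lhs, rhs) for lhs, rhs in rules
                      if lhs <= working_memory and not rhs <= working_memory]
        if not applicable:
            return None                         # no rule fires and goal not reached
        lhs, rhs = applicable[0]                # conflict resolution: take the first match
        working_memory = working_memory | rhs   # act: assert the rule's conclusions
    return working_memory

# hypothetical rule base: IF croaks and eats flies THEN is a frog; IF is a frog THEN is green
frog_rules = [
    ({"croaks", "eats flies"}, {"is a frog"}),
    ({"is a frog"}, {"is green"}),
]
```

Running `run_production_system(frog_rules, {"croaks", "eats flies"}, "is green")` forward-chains through both rules and returns the enlarged working memory.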
18. Example: 8-Puzzle problem
The search process involves:
1. Expanding a node
2. Generating a node
3. Comparing
4. Tracking the path and path cost
20. Control Strategies
How do we decide which rule to apply next during the process of searching for a solution to a problem?
The two requirements of a good control strategy are:
It should cause motion.
It should be systematic.
21. Algorithm: Breadth First Search
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty, quit.
b. For each way that each rule can match the state described in E, do:
i. Apply the rule to generate a new state.
ii. If the new state is a goal state, quit and return this state.
iii. Otherwise, add the new state to the end of NODE-LIST.
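The algorithm above translates almost line for line into Python. A `seen` set and an initial goal check are added here (the slide omits both); `successors` and `is_goal` are assumed callables supplied by the problem.

```python
from collections import deque

def breadth_first_search(initial_state, successors, is_goal):
    """NODE-LIST is a FIFO queue; new states generated from E go to the end."""
    if is_goal(initial_state):               # added: the slide assumes a non-goal start
        return initial_state
    node_list = deque([initial_state])       # step 1
    seen = {initial_state}                   # added: avoid re-expanding repeated states
    while node_list:                         # step 2: until a goal is found or the list is empty
        e = node_list.popleft()              # 2a: remove the first element
        for new_state in successors(e):      # 2b: each rule application
            if is_goal(new_state):           # 2b-ii: goal check on generation
                return new_state
            if new_state not in seen:
                seen.add(new_state)
                node_list.append(new_state)  # 2b-iii: add to the end of NODE-LIST
    return None
```

For example, with `successors = lambda n: [n + 1, 2 * n]` the call `breadth_first_search(1, successors, lambda n: n == 10)` finds the goal state 10.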
22. Algorithm: Depth First Search
1. If the initial state is a goal state, quit and return success.
2. Otherwise, do the following until success or failure is signalled:
a. Generate a successor, E, of the initial state. If there are no more successors, signal failure.
b. Call Depth-First Search with E as the initial state.
c. If success is returned, signal success. Otherwise continue in this loop.
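A recursive translation of the algorithm above, returning the solution path rather than a bare success signal. A `visited` set is added (the slide omits it) so that a path producing a previously seen state is treated as a dead end, as the next slide describes.

```python
def depth_first_search(state, successors, is_goal, visited=None):
    """Pursue a single branch until it yields a solution or dead-ends."""
    if is_goal(state):                       # step 1
        return [state]
    visited = (visited or set()) | {state}
    for e in successors(state):              # step 2a: generate a successor E
        if e in visited:                     # dead end: previously visited state
            continue
        result = depth_first_search(e, successors, is_goal, visited)  # 2b: recurse
        if result is not None:               # 2c: success is returned
            return [state] + result
    return None                              # no more successors: signal failure
```

For example, on the small graph `{"A": ["B", "C"], "B": ["D"], "C": [], "D": []}` a search from "A" for "D" returns the path ["A", "B", "D"].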
23. Depth First Search (Cont.)
In this search, we pursue a single branch of the tree until it yields a solution or until a decision to terminate the path is made.
It makes sense to terminate a path if it reaches a dead end or produces a previously visited state. In such cases, backtracking occurs.
Chronological backtracking: the order in which steps are undone depends only on the temporal sequence in which the steps were originally made; specifically, the most recent step is always the first to be undone.
This is also called simple backtracking.
24. Advantages
Advantages of DFS:
DFS requires less memory, since only the nodes on the current path are stored.
By chance, DFS may find a solution without examining much of the search space at all.
Advantages of BFS:
BFS will not get trapped exploring a blind alley.
If there is a solution, BFS is guaranteed to find it.
If there are multiple solutions, then a minimal solution (one requiring the fewest steps) will be found.
25. Traveling Salesman Problem (TSP)
A salesman has a list of cities, each of which he must visit exactly once. There are direct roads between each pair of cities on the list. Find the route the salesman should follow for the shortest possible round trip that both starts and finishes at any one of the cities.
A simple motion-causing and systematic control structure could solve this problem: simply explore all possible paths in the tree and return the shortest one.
If there are N cities, then the number of different paths among them is 1·2·…·(N-1), or (N-1)!.
The time to examine a single path is proportional to N.
26. Traveling Salesman Problem (TSP)
So the total time required to perform this search is proportional to N!. Even for 10 cities, 10! = 3,628,800, which is a very large number, and a salesman might typically have to visit 25 cities.
This phenomenon is called combinatorial explosion.
Because of it, the exhaustive approach takes far too long to solve the TSP.
To beat the combinatorial explosion, a strategy known as the branch-and-bound technique came into existence.
27. Traveling Salesman Problem (TSP)
Branch and Bound:
Begin generating complete paths, keeping track of the shortest path found so far.
Give up exploring any path as soon as its partial length becomes greater than the shortest complete path found so far.
Using this algorithm, we are still guaranteed to find the shortest path.
However, it still requires exponential time; the time it saves depends on the order in which paths are explored, and it remains inadequate for solving large problems.
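A minimal sketch of the branch-and-bound idea above, assuming cities 0..n-1, a distance matrix `dist`, and a fixed start city 0 (any start city gives the same round-trip length). Partial tours are abandoned as soon as their length reaches the best complete tour found so far.

```python
def tsp_branch_and_bound(dist, n):
    """Grow partial tours depth-first; prune any branch whose partial
    length already equals or exceeds the shortest complete tour so far."""
    best = {"length": float("inf"), "tour": None}

    def extend(tour, length):
        if length >= best["length"]:          # bound: give up on this path
            return
        if len(tour) == n:                    # complete tour: close the loop
            total = length + dist[tour[-1]][tour[0]]
            if total < best["length"]:
                best["length"], best["tour"] = total, tour
            return
        for city in range(n):                 # branch: try each unvisited city
            if city not in tour:
                extend(tour + [city], length + dist[tour[-1]][city])

    extend([0], 0)                            # start (and finish) at city 0
    return best["length"], best["tour"]
```

On a small symmetric 4-city instance this explores the same tree as exhaustive search but skips branches the bound rules out; the worst case is still exponential, as the slide notes.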
28. Heuristic Search
A heuristic is a technique designed to solve a problem quickly; it improves the efficiency of a search process, possibly by sacrificing claims of completeness.
Heuristics are like tour guides:
They are good to the extent that they point in generally interesting directions;
They are bad to the extent that they may miss points of interest to particular individuals.
On average, they improve the quality of the paths that are explored.
29. Heuristic Search
Using heuristics, we can hope to get good (though possibly non-optimal) solutions to hard problems such as the TSP in less than exponential time.
There are good general-purpose heuristics that are useful in a wide variety of problem domains; special-purpose heuristics exploit domain-specific knowledge.
A heuristic gives a good solution; the solution may be optimal, but there is no guarantee.
The aim is to reduce the problem from non-polynomial to polynomial time complexity.
30. Heuristic Search
One good general-purpose heuristic that is useful for a variety of combinatorial problems is the nearest neighbour heuristic, which works by selecting the locally superior alternative at each step.
Applying it to the TSP:
1. Arbitrarily select a starting city.
2. To select the next city, look at all cities not yet visited and select the one closest to the current city. Go to it next.
3. Repeat step 2 until all cities have been visited.
This procedure executes in time proportional to N^2, and it is possible to prove an upper bound on the error it incurs. This provides reassurance that one is not paying too high a price in accuracy for speed.
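The three-step procedure above can be sketched as follows, assuming cities are given as 2-D coordinates and the tour starts (arbitrarily) at city 0.

```python
import math

def nearest_neighbour_tour(coords):
    """Nearest neighbour heuristic for the TSP: repeatedly move to the
    closest unvisited city. Runs in time proportional to N^2."""
    def d(a, b):
        return math.dist(coords[a], coords[b])

    unvisited = set(range(1, len(coords)))
    tour = [0]                                   # step 1: pick a starting city
    while unvisited:                             # steps 2-3: greedily extend
        nxt = min(unvisited, key=lambda c: d(tour[-1], c))
        unvisited.remove(nxt)
        tour.append(nxt)
    length = sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    return tour, length + d(tour[-1], tour[0])   # close the round trip
```

The result is a legal round trip but, as the slides note, not necessarily an optimal one.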
31. Heuristic Function
A heuristic function is a function that maps from problem state descriptions to measures of desirability, usually represented as numbers.
Which aspects of the problem state are considered, how those aspects are evaluated, and the weights given to individual aspects are chosen in such a way that the value of the heuristic function at a given node in the search process gives as good an estimate as possible of whether that node is on the desired path to a solution.
Well-designed heuristic functions can play an important part in efficiently guiding a search process toward a solution.
32. Examples: Simple Heuristic Functions
Chess: the material advantage of our side over the opponent.
TSP: the sum of the distances travelled so far.
Tic-Tac-Toe: 1 for each row in which we could win and in which we already have one piece, plus 2 for each such row in which we have two pieces.
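The tic-tac-toe evaluation above can be written as a small function. The board encoding (a 9-element list holding our symbol, the opponent's symbol, or None) is an assumption for illustration; "rows" here covers rows, columns, and diagonals, i.e. every winnable line.

```python
def tic_tac_toe_heuristic(board, me, opponent):
    """Score = +1 for every line we could still win that holds one of our
    pieces, +2 for every such line holding two of our pieces."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
             (0, 4, 8), (2, 4, 6)]               # diagonals
    score = 0
    for line in lines:
        cells = [board[i] for i in line]
        if opponent in cells:                    # blocked: we can no longer win it
            continue
        mine = cells.count(me)
        if mine == 1:
            score += 1
        elif mine == 2:
            score += 2
    return score
```

For instance, a lone piece in the centre lies on four open lines, so it scores 4.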
33. Problem Characteristics
In order to choose the most appropriate method for a particular problem, it is necessary to analyze the problem along several key dimensions:
Is the problem decomposable into a set of independent smaller or easier subproblems?
Can solution steps be ignored or at least undone if they prove unwise?
Is the problem's universe predictable?
Is a good solution to the problem obvious without comparison to all other possible solutions?
Is the desired solution a state of the world or a path to a state?
34. Problem Characteristics
Is a large amount of knowledge absolutely required to solve the problem, or is knowledge important only to constrain the search?
Can a computer that is simply given the problem return the solution, or will solving the problem require interaction between the computer and a person?
35. Is the problem Decomposable?
Can the problem be decomposed into smaller problems? Using the technique of problem decomposition, we can often solve very large problems easily.
Suppose, for example, that we want to compute the value of an expression; the expression can be broken into smaller sub-expressions, each of which can be solved independently.
36. Blocks World Problem
The blocks-world problem discussed here is known as the Sussman Anomaly. Noninterleaved planners of the early 1970s were unable to solve it, hence it is considered anomalous.
When two subgoals G1 and G2 are given, a noninterleaved planner produces either a plan for G1 concatenated with a plan for G2, or vice versa.
In the blocks-world problem, three blocks labeled 'A', 'B', and 'C' are allowed to rest on a flat surface (a table). The given condition is that only one block can be moved at a time to achieve the goal.
38. Can Solution Steps be ignored or undone?
Suppose we are trying to prove a mathematical theorem and we prove a lemma. If we find the lemma is of no help, we can simply ignore it and continue.
In the 8-puzzle, a move can be undone; in chess, a move cannot be taken back.
This suggests three important classes of problems:
Ignorable (e.g., theorem proving)
Recoverable (e.g., the 8-puzzle)
Irrecoverable (e.g., chess)
39. Can Solution Steps be ignored or undone?
The recoverability of a problem plays an important role in determining the complexity of the control structure necessary for the problem's solution:
Ignorable problems can be solved using a simple control structure that never backtracks.
Recoverable problems can be solved by a slightly more complicated control strategy that does sometimes make mistakes (and undoes them).
Irrecoverable problems must be solved by a system that expends a great deal of effort making each decision, since each decision must be final.
40. Is the universe Predictable?
Certain outcome (e.g., the 8-puzzle); uncertain outcome (e.g., bridge, controlling a robot arm).
For certain-outcome problems, an open-loop approach (without feedback) will work fine.
For uncertain-outcome problems, planning can at best generate a sequence of operators that has a good probability of leading to a solution; we need to allow for a process of plan revision to take place.
41. Is a good solution absolute or relative?
We distinguish any-path problems from best-path problems.
Any-path problems can often be solved in a reasonable amount of time by using heuristics that suggest good paths to explore.
Best-path problems are computationally harder than any-path problems.
42. Is a good solution absolute or relative?
For example: the salesman starts from Boston. (The accompanying slide figure showed the cities and distances.)
43. Is the solution a state or a path?
Consider the problem of finding a consistent interpretation for the
sentence:
“The bank president ate a dish of pasta salad with the fork.”
We need only the interpretation itself, not a record of the
processing that produced it.
Water jug problem: here it is not sufficient to report that we have
solved the problem; we must also report the path we found to the
state (2, 0). Thus a statement of a solution to this problem must be
a sequence of operations (a plan) that produces the final state.
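The water jug point can be made concrete with a breadth-first search that records the operator sequence, so the answer returned is a plan ending in the state (2, 0) rather than merely the state itself. The jug capacities (4 and 3 gallons) and operator names follow the standard formulation; the function names are illustrative.

```python
from collections import deque

# States are (x, y): gallons in the 4-gallon and 3-gallon jugs.

def successors(state):
    x, y = state
    return [
        ("fill 4-gal jug",        (4, y)),
        ("fill 3-gal jug",        (x, 3)),
        ("empty 4-gal jug",       (0, y)),
        ("empty 3-gal jug",       (x, 0)),
        ("pour 4-gal into 3-gal", (x - min(x, 3 - y), y + min(x, 3 - y))),
        ("pour 3-gal into 4-gal", (x + min(y, 4 - x), y - min(y, 4 - x))),
    ]

def solve(start=(0, 0), goal=(2, 0)):
    frontier = deque([(start, [])])   # (state, operator sequence so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan               # the solution is the path, not the state
        for op, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [op]))
    return None
```

Calling `solve()` returns a list of operator names, i.e., the plan the slide says a solution must consist of.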
44. What is the role of knowledge?
Two examples:
Chess: knowledge is required to constrain the search for a solution.
Newspaper story understanding: a lot of knowledge is required even
to be able to recognize a solution.
Consider the problem of scanning daily newspapers to decide which
are supporting the Democrats and which are supporting the
Republicans in some election. We need lots of knowledge to answer
such questions as:
The names of the candidates in each party.
The fact that if the major thing you want to see done is to have
taxes lowered, you are probably supporting the Republicans.
The fact that if the major thing you want to see done is improved
education for minority students, you are probably supporting the
Democrats.
45. Does the task require Interaction with a person?
Sometimes it is useful to program computers to solve problems
in ways that the majority of people would not be able to
understand.
That is fine if the level of interaction between the computer and its
human users is problem-in, solution-out.
But increasingly, we are building programs that require
intermediate interaction with people, both for additional inputs and
to provide reassurance to the user.
46. Does the task require Interaction with a person?
Thus we must distinguish between two types of programs:
• Solitary, in which the computer is given a problem description and
produces an answer with no intermediate communication and
with no demand for an explanation of the reasoning process.
• Conversational, in which there is intermediate communication
between the person and the computer, either to provide additional
assistance to the computer or to provide additional information
to the user, or both.
• The decision between these two approaches is important in the
choice of problem-solving method.
47. Problem Classification
There are several broad classes into which problems fall.
Each class can be associated with a generic control strategy
that is appropriate for solving its problems:
Classification: medical diagnosis, diagnosis of faults in
mechanical devices.
Propose and refine: design and planning.
48. Production System Characteristics
We have seen that production systems are a good way to
describe the operations that can be performed in a search for a
solution to a problem. Two questions we might reasonably ask
are:
1. Can production systems, like problems, be described by a set of
characteristics that shed some light on how easily they can be
implemented?
2. If so, what relationships are there between problem types and
the types of production systems best suited to solving those
problems?
The answer to the first question is yes.
49. Production System Characteristics
Consider the following definitions of classes of production systems:
Monotonic production system: the application of a rule never
prevents the later application of another rule that could also have
been applied at the time the first rule was selected; i.e., the rules
are independent.
Non-monotonic production system: one in which the above is not true.
Partially commutative production system: one with the property that
if the application of a particular sequence of rules transforms state x
into state y, then any allowable permutation of those rules also
transforms state x into state y.
Commutative production system: a production system that is both
monotonic and partially commutative.
50. Production System Characteristics
Partially commutative, monotonic production systems:
These production systems are useful for solving ignorable
problems. Example: theorem proving.
They can be implemented without the ability to backtrack to
previous states when it is discovered that an incorrect path has
been followed.
This often results in a considerable increase in efficiency,
particularly because the database never has to be restored, so it
is not necessary to keep track of where in the search process
every change was made.
They are good for problems where things do not change; new
things only get created.
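A minimal sketch of such a monotonic system, assuming a simple rule format (a set of premises implying one conclusion): applying a rule only ever adds a fact to the database, so no rule application disables another and no backtracking or database restoration is needed. The rule format and names here are illustrative.

```python
# Monotonic production system: facts are only added, never removed,
# so the database never has to be restored.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # monotonic: we only ever add
                changed = True
    return facts

# Illustrative rules: p -> q, and (q and r) -> s.
rules = [
    (frozenset({"p"}), "q"),
    (frozenset({"q", "r"}), "s"),
]
```

This mirrors how a theorem prover can keep every lemma it derives: a lemma that turns out to be useless is simply ignored, never retracted.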
51. Production System Characteristics
Non-monotonic, partially commutative production
systems:
Useful for problems in which changes occur but can be reversed,
and in which the order of operations is not critical.
Examples: robot navigation, the 8-puzzle, the blocks world.
Suppose the robot has the following operators: go North (N), go East (E),
go South (S), go West (W). To reach its goal, it does not matter
whether the robot executes N-N-E or N-E-N.
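The robot example can be checked directly: modelling each move as a displacement on a grid (an illustrative encoding, not from the slides), any permutation of the same moves lands on the same cell.

```python
# Grid moves as displacement vectors: order of independent moves
# does not affect the final state (partial commutativity).

MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def apply_moves(start, sequence):
    x, y = start
    for m in sequence:
        dx, dy = MOVES[m]
        x, y = x + dx, y + dy
    return (x, y)
```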
52. Production System Characteristics
Non-partially-commutative production systems:
Useful for problems in which irreversible changes occur. Example:
chemical synthesis.
The operators can be: add chemical x to the pot, change the
temperature to t degrees.
These operators may cause irreversible changes to the mixture
being brewed.
The order in which they are performed can be very important in
determining the final output:
(x + y) + z is not the same as (z + y) + x.
Non-partially-commutative production systems are less likely to
produce the same node many times in the search process.
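A toy model of the chemistry point, with made-up operators: "add a chemical" shifts the pot's contents and "heat" scales them, so the two operator orders yield different final states. The numbers and operator semantics are purely illustrative.

```python
# Two irreversible "pot" operators whose order changes the outcome,
# so rule sequences are not interchangeable (not partially commutative).

def add_chemical(amount):
    return lambda pot: pot + amount      # adding shifts the contents

def heat(factor):
    return lambda pot: pot * factor      # heating scales the contents

def brew(pot, ops):
    for op in ops:
        pot = op(pot)
    return pot
```

Starting from the same pot, `[add, heat]` and `[heat, add]` end in different states, unlike the robot's N-N-E versus N-E-N above.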
53. Production System Characteristics
Non-partially-commutative production systems:
When dealing with systems that describe irreversible processes, it is
particularly important to make correct decisions the first time,
although if the universe is predictable, planning can be used to
make that less important.
Four categories of production system:
54. Issues in the design of search programs
The direction in which to conduct the search (forward versus
backward reasoning).
How to select applicable rules (matching).
How to represent each node of the search process (the knowledge
representation problem).
55. Summary
Four steps for designing a program to solve a problem:
1. Define the problem precisely.
2. Analyse the problem.
3. Identify and represent the knowledge required by the task.
4. Choose one or more techniques for problem solving and apply
those techniques to the problem.