This document discusses problem solving through state space search. It explains that state space search involves representing a problem as an initial state, a goal state, a set of actions that transform one state into another, and the set of all possible states. The document provides examples of applying state space search to problems like the missionaries and cannibals problem and the 8-queens puzzle. It also discusses strategies for controlling the order of applying actions during the search.
Slides by Míriam Bellver at the UPC Reading group for the paper:
Liu, Wei, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, and Scott Reed. "SSD: Single Shot MultiBox Detector." ECCV 2016.
Full listing of papers at:
https://github.com/imatge-upc/readcv/blob/master/README.md
Local beam search is an algorithm that maintains a limited set (the beam) of the best candidate solutions found during its search. It generates the successors of the current candidates and selects the best k successors to proceed to the next iteration. Two examples are provided, with the beam width set to 2 and 3 respectively. The algorithm terminates when a solution state is found among the successors or when no improvement is seen after exploring all successors.
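The loop described above can be sketched in a few lines of Python. This is a minimal sketch, not code from the slides: the successor function, scoring function, and the toy "climb to 10" example are placeholders chosen for illustration.

```python
import heapq

def local_beam_search(start_states, successors, score, is_goal, k=2, max_iters=100):
    """Keep only the k best candidate states at each iteration."""
    beam = list(start_states)
    for _ in range(max_iters):
        candidates = []
        for state in beam:
            for succ in successors(state):
                if is_goal(succ):          # solution found among successors
                    return succ
                candidates.append(succ)
        if not candidates:
            return None
        # Keep the k highest-scoring successors for the next iteration.
        beam = heapq.nlargest(k, candidates, key=score)
    return max(beam, key=score)

# Toy example: climb toward the number 10 by +/-1 steps, beam width 2.
result = local_beam_search(
    start_states=[0, 3],
    successors=lambda s: [s - 1, s + 1],
    score=lambda s: -abs(10 - s),
    is_goal=lambda s: s == 10,
    k=2,
)
print(result)  # 10
```

With k = 1 this degenerates to hill climbing; larger beams trade memory for a lower chance of getting stuck.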
The document describes the "Monkey Banana Problem" which poses the scenario of a monkey trying to reach bananas hanging from the ceiling but being unable to do so directly. The monkey must use a box to stand on to reach the bananas. The document outlines the initial setup with locations for the monkey, bananas, and box. It then defines the possible actions like go, push, climb, and grasp. Finally, it provides the step-by-step solution for the monkey to get the bananas using those defined actions.
YOLO (You Only Look Once) is a real-time object detection system that frames object detection as a regression problem. It uses a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation. This approach allows YOLO to process images at over 45 frames per second while maintaining high accuracy compared to previous systems. YOLO was trained on natural images from PASCAL VOC and can generalize to new domains like artwork without significant degradation in performance, unlike other methods that struggle with domain shift.
The document describes the sequence-to-sequence (seq2seq) model with an encoder-decoder architecture. It explains that the seq2seq model uses two recurrent neural networks - an encoder RNN that processes the input sequence into a fixed-length context vector, and a decoder RNN that generates the output sequence from the context vector. It provides details on how the encoder, decoder, and training process work in the seq2seq model.
1) The document describes the steps of the resolution method for automated theorem proving, including converting facts to first-order logic, converting to conjunctive normal form, negating the statement to prove, and drawing a resolution graph.
2) It provides an example of using resolution to prove that "John likes peanuts" from given statements about food, eating, and likes. The example shows converting the statements to first-order logic and conjunctive normal form.
3) The document also mentions forward chaining as an alternative to resolution and provides a reference for further information on automated reasoning techniques.
Residual neural networks (ResNets) address the vanishing gradient problem through shortcut connections that allow gradients to flow directly through the network. The ResNet architecture consists of repeating blocks with convolutional layers and shortcut connections. These shortcuts perform identity mappings, adding the block's input to the output of its convolutional layers. This helps networks converge earlier and increases accuracy. Variants include basic blocks with two convolutional layers and bottleneck blocks with three layers. Parameters like the number of layers affect ResNet performance, with deeper networks showing improved accuracy. YOLO is a variant that replaces the softmax layer with a 1x1 convolutional layer and a logistic function for multi-label classification.
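The identity-shortcut idea can be illustrated with plain NumPy. This is a sketch only: dense layers stand in for the convolutions, and the weights, shapes, and random inputs are made up for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def basic_block(x, w1, w2):
    """A ResNet-style basic block (sketch): two transforms compute a
    residual F(x), and the identity shortcut adds the input back,
    so the block outputs relu(F(x) + x)."""
    out = relu(x @ w1)     # first "conv" layer + activation
    out = out @ w2         # second "conv" layer, pre-activation
    return relu(out + x)   # add the shortcut, then activate

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = basic_block(x, w1, w2)
print(y.shape)  # (1, 8)
```

Note the key property: if the weights drive F(x) to zero, the block reduces to relu(x), i.e. a near-identity mapping, which is why very deep stacks of such blocks remain trainable.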
The document discusses recurrences and the master theorem for finding asymptotic bounds of recursive equations. It covers the substitution method, recursive tree method, and master theorem. The master theorem provides bounds for recurrences of the form T(n) = aT(n/b) + f(n) based on comparing f(n) to n^(log_b a). It also discusses exceptions, gaps in the theorem, and proofs of the main results.
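The three cases of the master theorem can be encoded directly in the simplified setting where f(n) = Θ(n^d), so the classification reduces to comparing d with log_b(a). This helper is an illustrative sketch, not part of the original document.

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the simplified
    master theorem, comparing d with c = log_b(a)."""
    c = math.log(a, b)
    if d < c:
        return f"Theta(n^{c:.2f})"      # case 1: leaves dominate
    if d == c:
        return f"Theta(n^{d} log n)"    # case 2: every level contributes equally
    return f"Theta(n^{d})"              # case 3: root dominates

# Merge sort: T(n) = 2T(n/2) + Theta(n)  ->  Theta(n log n)
print(master_theorem(2, 2, 1))
# Tree of four subproblems of half size: T(n) = 4T(n/2) + Theta(n)  ->  Theta(n^2)
print(master_theorem(4, 2, 1))
```

The full theorem also needs a regularity condition in case 3 and leaves gaps (e.g. f(n) = n log n against c = 1), which is exactly the "exceptions and gaps" discussion the document mentions.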
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
The document discusses problem solving by searching. It describes problem solving agents and how they formulate goals and problems, search for solutions, and execute solutions. Tree search algorithms like breadth-first search, uniform-cost search, and depth-first search are described. Example problems discussed include the 8-puzzle, 8-queens, and route finding problems. The strategies of different uninformed search algorithms are explained.
This presentation provides an introduction to Particle Swarm Optimization. It covers the basic idea of PSO, its parameters, advantages, limitations, and related applications.
The document discusses reinforcement learning and its key concepts. It covers defining the reinforcement learning problem through reward maximization and Bellman's equation. It then discusses learning methods like Monte Carlo, temporal difference learning, and Q-learning. It also covers improvements like the importance of exploration versus exploitation and eligibility traces for accelerated learning.
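The Q-learning method mentioned above boils down to one update rule per observed transition. A tabular sketch follows; the two-state example is hypothetical and only meant to show the arithmetic of the Bellman-style update.

```python
def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Hypothetical two-state problem: acting in s0 yields reward 0.5 and
# lands in s1, whose best known action is already worth 1.0.
Q = {"s0": {"go": 0.0}, "s1": {"stay": 1.0}}
q_learning_update(Q, "s0", "go", r=0.5, s_next="s1")
print(round(Q["s0"]["go"], 3))  # 0.14, i.e. 0.1 * (0.5 + 0.9 * 1.0)
```

The max over next-state actions is what makes Q-learning off-policy; replacing it with the value of the action actually taken gives SARSA, a temporal-difference variant.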
1. The document describes an artificial intelligence implementation of the tic-tac-toe game using the minimax algorithm.
2. It provides details on the game rules, initial and goal states, and the state space tree and winning conditions.
3. The minimax approach is then explained as a recursive algorithm that evaluates all possible future moves from the current state and assumes the opponent will make the choice that results in the least preferred outcome.
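The recursive evaluation described in point 3 can be sketched generically. The tiny hand-built game tree below is illustrative only; it is not the tic-tac-toe state space from the slides, but the recursion is the same.

```python
def minimax(state, is_max, children, utility):
    """Recursive minimax: MAX picks the highest-valued child, while the
    opponent (MIN) is assumed to pick the lowest-valued one."""
    kids = children(state)
    if not kids:                       # terminal state: score it
        return utility(state)
    values = [minimax(c, not is_max, children, utility) for c in kids]
    return max(values) if is_max else min(values)

# Hand-built two-ply tree: MAX moves at A, MIN replies at B or C.
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
leaf_value = {"D": 3, "E": 5, "F": 2, "G": 9}
value = minimax("A", True,
                children=lambda s: tree.get(s, []),
                utility=lambda s: leaf_value[s])
print(value)  # 3: MIN would answer B with D (3) and C with F (2), so MAX picks B
```

For a real game like tic-tac-toe, `children` would generate legal moves from a board state and `utility` would score terminal boards (+1 win, 0 draw, -1 loss).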
The document discusses the Travelling Salesman Problem (TSP). TSP aims to find the shortest possible route for a salesman to visit each city in a list only once and return to the origin city. It describes the problem as finding the optimal or least cost Hamiltonian circuit in a graph where cities are nodes and distances between cities are edge costs. The document provides an example problem with 5 cities, calculates possible routes and costs, and illustrates the branch and bound algorithm to solve TSP by systematically eliminating suboptimal routes until the optimal route is found.
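As a baseline for the branch-and-bound idea, the exhaustive search over Hamiltonian circuits can be written directly. The 4-city distance matrix below is a made-up symmetric example; branch and bound explores this same search tree but prunes partial routes whose cost lower bound already exceeds the best complete tour found.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Check every Hamiltonian circuit starting and ending at city 0
    and return the cheapest tour and its cost (feasible only for small n)."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:           # branch and bound would prune here earlier
            best_cost, best_tour = cost, tour
    return best_tour, best_cost

# Hypothetical symmetric distances between 4 cities.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))  # ((0, 1, 3, 2, 0), 80)
```

The factorial blow-up of `permutations` is exactly why the document turns to branch and bound for anything beyond a handful of cities.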
This document provides an overview of artificial intelligence and various search strategies used in AI. It begins with a summary of the course which states that AI is an evolving field that builds and studies intelligent entities. The document then discusses the Turing test for evaluating machine intelligence and provides examples of graph representations for problems like the 8 puzzle and traveling salesman problem. It explains state space search and various search strategies like depth-first search, breadth-first search, iterative deepening search, and informed heuristic search methods.
Example of iterative deepening search & bidirectional search, by Abhijeet Agarwal
These slides give some examples of iterative deepening search and bidirectional search, with definitions and theory related to both searches. If you have any query, please ask in a comment or by mail and I will be happy to help you.
Adversarial search is a technique used in game playing to determine the best move when facing an opponent who is also trying to maximize their score. It involves searching through possible future game states called a game tree to evaluate the best outcome. The minimax algorithm searches the entire game tree to determine the optimal move by assuming the opponent will make the best counter-move. Alpha-beta pruning improves on minimax by pruning branches that cannot affect the choice of move. Modern game programs use techniques like precomputed databases, sophisticated evaluation functions, and extensive search to defeat human champions at games like checkers, chess, and Othello.
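Alpha-beta pruning adds two running bounds to plain minimax: alpha (the best value MAX can guarantee so far) and beta (the best MIN can guarantee). A sketch on a hypothetical game tree follows; in this example, leaf G is cut off without ever being evaluated.

```python
def alphabeta(state, is_max, children, utility,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: abandon a branch as soon as it
    cannot affect the final decision (alpha >= beta)."""
    kids = children(state)
    if not kids:
        return utility(state)
    if is_max:
        value = float("-inf")
        for c in kids:
            value = max(value, alphabeta(c, False, children, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                 # beta cutoff: MIN would never allow this
        return value
    value = float("inf")
    for c in kids:
        value = min(value, alphabeta(c, True, children, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break                     # alpha cutoff: MAX already has better
    return value

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"]}
leaves = {"D": 3, "E": 5, "F": 2, "G": 9}
result = alphabeta("A", True, lambda s: tree.get(s, []), lambda s: leaves[s])
print(result)  # 3, same answer as full minimax, with G pruned
```

With good move ordering, pruning roughly doubles the searchable depth for the same budget, which is a large part of how game programs reach championship strength.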
YOLO is an end-to-end, real-time object detection system that uses a single convolutional neural network to predict bounding boxes and class probabilities directly from full images. It uses a deeper Darknet-53 backbone network and multi-scale predictions to achieve state-of-the-art accuracy while running faster than other algorithms. YOLO is trained on a merged ImageNet and COCO dataset and predicts bounding boxes using predefined anchor boxes and associated class probabilities at three different scales to localize and classify objects in images with just one pass through the network.
The document discusses the greedy method algorithmic approach. It provides an overview of greedy algorithms including that they make locally optimal choices at each step to find a global optimal solution. The document also provides examples of problems that can be solved using greedy methods like job sequencing, the knapsack problem, finding minimum spanning trees, and single source shortest paths. It summarizes control flow and applications of greedy algorithms.
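The locally-optimal-choice idea is easiest to see in the fractional knapsack problem, one of the cases where the greedy rule is provably optimal. This sketch uses a standard textbook instance; the item values, weights, and capacity are illustrative.

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: repeatedly take the item with the best
    value/weight ratio, splitting the last item if it does not fit.
    For the fractional variant this local rule is globally optimal."""
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the remaining room
        total += value * take / weight
        capacity -= take
    return total

# (value, weight) pairs with capacity 50: take items 1 and 2 whole,
# then 20/30 of item 3, for 60 + 100 + 80 = 240.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Note the contrast with the 0/1 knapsack, also mentioned in the document, where items cannot be split and the same greedy rule can be arbitrarily far from optimal.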
This document provides an outline for a course on neural networks and fuzzy systems. The course is divided into two parts, with the first 11 weeks covering neural networks topics like multi-layer feedforward networks, backpropagation, and gradient descent. The document explains that multi-layer networks are needed to solve nonlinear problems by dividing the problem space into smaller linear regions. It also provides notation for multi-layer networks and shows how backpropagation works to calculate weight updates for each layer.
The document discusses various search techniques used in artificial intelligence including:
- Informed searches use heuristics to guide the solution process, while uninformed searches explore the space blindly.
- Common problems that use search techniques include pathfinding, constraint satisfaction, and two-player games.
- Depth-limited search avoids the failures of depth-first search in infinite state spaces by imposing a depth cutoff.
- Backtracking search is a modified depth-first search used for constraint satisfaction problems that prunes unpromising branches.
- Adversarial search models multi-agent systems and is useful for games, employing techniques like minimax to determine the best move.
Word embedding, Vector space model, language modelling, Neural language model, Word2Vec, GloVe, fastText, ELMo, BERT, DistilBERT, RoBERTa, SBERT, Transformer, Attention
John likes all foods, apples and chicken are foods, anything that does not kill someone who eats it is a food, Bill eats peanuts and is still alive so peanuts are food, and Sue eats everything that Bill eats. This document translates statements about people and foods into logical forms using predicates and quantifiers, and then expresses them in conjunctive normal form.
This document discusses various heuristic search algorithms including generate-and-test, hill climbing, best-first search, problem reduction, and constraint satisfaction. Generate-and-test involves generating possible solutions and testing if they are correct. Hill climbing involves moving in the direction that improves the state based on a heuristic evaluation function. Best-first search evaluates nodes and expands the most promising node first. Problem reduction breaks problems into subproblems. Constraint satisfaction views problems as sets of constraints and aims to constrain the problem space as much as possible.
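Steepest-ascent hill climbing, as described above, can be sketched in a few lines. The integer toy objective below is illustrative only; real uses plug in a problem-specific neighbor generator and heuristic evaluation function.

```python
def hill_climb(state, successors, score, max_steps=1000):
    """Steepest-ascent hill climbing: move to the best-scoring neighbor
    until no neighbor improves on the current state (a local maximum)."""
    for _ in range(max_steps):
        neighbors = successors(state)
        if not neighbors:
            break
        best = max(neighbors, key=score)
        if score(best) <= score(state):
            break                      # local maximum: stop climbing
        state = best
    return state

# Maximize f(x) = -(x - 7)^2 over integers, stepping by +/-1 from 0.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
print(peak)  # 7
```

On multimodal landscapes this gets stuck on the first local maximum it reaches, which motivates the restarts, beam widths, and constraint-based alternatives the document surveys.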
Introduction for seq2seq (sequence to sequence) and RNN, by Hye-min Ahn
These are my slides introducing the sequence-to-sequence model and the Recurrent Neural Network (RNN) to my laboratory colleagues.
Hyemin Ahn, @CPSLAB, Seoul National University (SNU)
This document discusses state space search problems and algorithms. It begins by outlining the key components of goal-based agents: the goal, actions, and state representation. Several example problems are then described in more detail, including the 8-puzzle, missionaries and cannibals, cryptarithmetic, and water jug problems. For each problem, the document specifies the goal, state representation, initial state, and possible actions/operators. It also discusses issues in knowledge representation and choosing an appropriate level of abstraction for problem states.
This document discusses state space search algorithms. It defines key concepts like the state representation, operators/actions, initial and goal states. Example problems are presented like the 8-puzzle, missionaries and cannibals, cryptarithmetic etc. Generic state space search is formalized using a graph of nodes and operators. Key procedures like expand, goal test and queueing functions are discussed. Bookkeeping, search tree issues and ways to evaluate strategies are also covered at a high level.
The document discusses state space search and problem solving techniques in artificial intelligence. It defines problems as state spaces and describes representing problems formally using states, initial states, goal states, and rules defining operators or actions. It provides examples of representing the chess and water jug problems as state space searches. Key techniques discussed include production systems, heuristic search, and representing domain knowledge in rules.
The document discusses production systems and search techniques used to solve problems modeled as state spaces. A production system uses production rules to transform an initial state into a goal state. It consists of a knowledge base of rules, a rule applicator that matches rules to states, and a control strategy to determine the order of rule application. Control strategies can be uninformed/blind searches that explore all states or informed searches that use heuristics to guide the search. Examples of problems solved using this approach include the 8-puzzle, tic-tac-toe, and the water jug problem.
This document discusses problem solving as a state space search. It covers defining the problem as a state space, production systems, search space control strategies, heuristic search techniques like best-first search and branch-and-bound search, problem reduction, constraint satisfaction, and means-ends analysis. It uses chess and a water jug problem to illustrate representing problems as state spaces and defining the rules and operators to solve them through searching the problem space.
The document discusses uninformed search techniques. It provides examples of representing problems as states and operators that transform states. This includes problems like the water jug problem, 8-puzzle, and 8-queens. It then describes common uninformed search algorithms like breadth-first search, depth-first search, iterative deepening, and uniform cost search. It analyzes the properties of these algorithms like completeness, time complexity, space complexity, and optimality.
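Breadth-first and depth-first search differ only in how the frontier is ordered, so one hedged sketch covers both: a FIFO frontier gives BFS, a LIFO frontier gives DFS. The toy graph is illustrative, not from the original slides.

```python
from collections import deque

def search(start, goal, successors, frontier="fifo"):
    """Generic uninformed search over paths. frontier='fifo' pops the
    oldest path (breadth-first); frontier='lifo' pops the newest
    (depth-first). BFS is complete and optimal for unit step costs."""
    paths = deque([[start]])
    visited = {start}
    while paths:
        path = paths.popleft() if frontier == "fifo" else paths.pop()
        state = path[-1]
        if state == goal:
            return path
        for succ in successors(state):
            if succ not in visited:    # bookkeeping: avoid revisiting states
                visited.add(succ)
                paths.append(path + [succ])
    return None

# Toy graph: both B and C lead to D; BFS returns a shortest path.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(search("A", "D", lambda s: graph[s]))  # ['A', 'B', 'D']
```

Uniform-cost search replaces the deque with a priority queue keyed on path cost, and iterative deepening reruns the depth-first variant with an increasing depth bound.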
1. The document discusses various AI techniques and problems. It defines an AI technique as a method that exploits knowledge represented so that it captures generalizations, can be understood by the people who provide it, can be easily modified, and can be used in many situations.
2. It provides examples of common AI problems like tic-tac-toe, the water jug problem, various puzzles, and language understanding.
3. It then discusses problem solving and representation, defining key concepts like states, state space, operators, initial and goal states. It outlines general problem solving steps and state space representation.
This document discusses various problem solving techniques through search. It begins with an introduction to problem representation, problem solving through search, and examples like the 8-puzzle and missionaries and cannibals problem. It then covers search methods and algorithms like breadth-first search, depth-first search, and A* search. Key concepts discussed include problem states, operators, initial states, goals, and search strategies. Real-world problems are abstracted and represented as states, operators, and paths for solving through search techniques.
1. The document discusses defining problems as state space searches which involves representing the problem as a graph with nodes as states and edges as operators to transition between states.
2. It provides examples of representing chess and the water jug problem as state space searches, defining the initial states, goal states, and production rules for the possible state transitions.
3. Search algorithms like breadth-first search and depth-first search are described for systematically exploring the state space to find a solution path from start to goal.
This document describes problem solving using production systems and state-space formulations. It provides examples of representing problems as state spaces, including the water jug problem and missionaries and cannibals problem. Production rules are used to define the actions that change states and move toward the goal state. Knowledge databases contain information needed to apply the rules. Different control strategies can be used to systematically search the state space. Production systems provide a modular way to model problem solving and intelligent behavior.
AI-04 Production System - Search Problem.pptx, by Pankaj Debbarma
Production Systems
A simple string rewriting production system example
Search Problem
Basic searching process
Algorithm’s performance and complexity
Computational complexity
‘Big-O’ notation
Tower of Hanoi
8 Puzzle
Water Jug Problem
Can Solution Steps be Ignored?
Is a Good Solution Absolute or Relative?
Issues in the Design of Search Programs
The document discusses problem solving and the problem solving cycle. It describes the problem solving cycle as having 7 steps: problem identification, definition and representation, strategy formulation, organization of information, resource allocation, monitoring, and evaluation. It also discusses different types of problems, distinguishing between well-structured and ill-structured problems. Finally, it provides an example of solving the water jug problem using a state space representation and production rules to systematically search for a solution.
The document describes the basic planning problem and representations used in early planning systems like STRIPS. The planning problem involves finding a sequence of actions or operators that will achieve a given goal state when starting from an initial state. STRIPS uses a state list to represent the current state and a goal stack to manage the planning search. It pops goals and subgoals off the stack and tries to achieve them by applying operators, updating the state list and solution plan along the way. Operators have preconditions that must be true for application and add and delete lists that modify the state.
This document discusses problem solving techniques in artificial intelligence, including state-space search and control strategies. It provides examples of representing problems as state spaces with start and goal states, such as the 8-puzzle and missionaries and cannibals problem. Search algorithms like breadth-first search are introduced for systematically exploring state spaces to find solutions. Production systems are also presented as a way to structure problem solving using rules and state transitions.
This document provides an overview of an artificial intelligence course syllabus and key concepts in AI. It discusses topics like what AI is, the foundations and history of AI, production systems, state space search techniques including informed and uninformed searches. It also covers knowledge representation, reasoning, machine learning, computer vision, robotics and common AI problems. Key problems in AI like perception, natural language understanding, commonsense reasoning and more are explained.
The document discusses various aspects of problem solving and production systems including:
- Problem characteristics like decomposability and recoverability impact the appropriate problem solving approach.
- Production systems consist of rules, databases, and a control strategy to apply rules.
- Well-designed heuristics can efficiently guide search toward solutions without exploring all possibilities.
- Different problem types like classification and design are suited to different control strategies like proposing and refining solutions.
This document discusses state space representation in artificial intelligence. It provides examples of how state space representation can be used to model problems. Specifically, it describes:
1) The water jug problem, where the goal is to fill a 4 gallon jug with 2 gallons using only a 3 gallon jug. The initial and goal states are defined along with the possible state transitions.
2) Production rules for solving the water jug problem by pouring water between the jugs or emptying jugs.
3) The step-by-step solution to the water jug problem by applying the production rules to reach the goal state of filling the 4 gallon jug with 2 gallons.
This document discusses state space representation in artificial intelligence. It provides examples of how state space representation can be used to model problems. Specifically, it describes:
1) The water jug problem, where the goal is to fill a 4 gallon jug with 2 gallons using only a 3 gallon and 4 gallon jug.
2) It defines the initial state, goal state, and production rules to model the problem as transitions between states.
3) It then shows the step-by-step application of the rules to reach the goal state of filling the 4 gallon jug with 2 gallons.
Similar to Ch 2 State Space Search - slides part 1.pdf (20)
Ch 2 State Space Search - slides part 1.pdf
1. 1
Ch 2 – Prob Solving by searching
To build a goal-based agent we need to answer the
following questions:
– What is the goal to be achieved?
– What are the actions?
– What relevant information is necessary to encode in
order to describe the state of the world, describe the
available transitions, and solve the problem?
Initial
state
Goal
state
Actions
2. 2
What is the goal to be achieved?
Could describe a situation we want to achieve, a
set of properties that we want to hold, etc.
Requires defining a “goal test” so that we know
what it means to have achieved/satisfied our
goal.
3. 3
What are the actions?
Characterize the primitive actions or events that are
available for making changes in the world in order to
achieve a goal.
Deterministic world: no uncertainty in an action’s
effects. Given an action (a.k.a. operator or move) and
a description of the current world state, the action
completely specifies
– whether that action can be applied to the current world
(i.e., is it applicable and legal), and
– what the exact state of the world will be after the action
is performed in the current world (i.e., no need for
“history” information to compute the new world).
4. 4
Representing actions
Note also that actions in this framework can all
be considered as discrete events that occur at
an instant of time.
–For example, if “Sita is in class” and then
performs the action “go home,” then in the
next situation she is “at home.” There is no
representation of a point in time where she is
neither in class nor at home (i.e., in the state
of “going home”).
5. 5
Representing actions
The number of actions / operators depends on the
representation used in describing a state.
– In the 8-puzzle, we could specify 4 possible moves for
each of the 8 tiles, resulting in a total of 4*8=32
operators.
– On the other hand, we could specify four moves for the
“blank” square and we would only need 4 operators.
Representational shift can greatly simplify a problem!
6. 6
Representing states
What information is necessary to encode about the world
to sufficiently describe all relevant aspects in solving the
goal? That is, what knowledge needs to be represented in
a state description to adequately describe the current state
or situation of the world?
The size of a problem is usually described in terms of
the number of states that are possible.
– Tic-Tac-Toe has about 3^9 states.
– Checkers has about 10^40 states.
– Rubik’s Cube has about 10^19 states.
– Chess has about 10^120 states in a typical game.
7. 7
Defining a Problem as a State Space
1. Define a state space that contains all the possible
configurations of the relevant objects.
2. Specify one (or more) state(s) as the initial state(s).
3. Specify one (or more) state(s) as the goal state(s).
4. Specify a set of rules that describe available
actions (operators), considering:
What assumptions are present in the informal
problem description?
How general should the rules be?
How much of the work required to solve the problem
should be precompiled and represented in the rules?
8. 8
2.2.1 Production Systems
A set of rules (Knowledge Base) :
– LHS → RHS (if-part then-part)
– Pattern → Action
– Antecedent → Consequent
Knowledge (temporary or permanent) required
to solve the current task (Working Memory)
A control strategy to specify the order of testing
patterns and resolving possible conflicts
(Inference Engine)
A rule applier.
9. 9
Production System Major
Components
knowledge base
– contains essential information about the
problem domain
– often represented as facts and rules
inference engine
– mechanism to derive new knowledge from
the knowledge base and the information
provided by the user
– often based on the use of rules
11. 11
Rule-Based System
knowledge is encoded as IF … THEN rules
– these rules can also be written as production rules
the inference engine determines which rule
antecedents are satisfied
– the left-hand side must “match” a fact in the working
memory
satisfied rules are placed on the agenda
rules on the agenda can be activated (“fired”)
– an activated rule may generate new facts through its
right-hand side
– the activation of one rule may subsequently cause the
activation of other rules
12. 12
Example Rules
IF … THEN Rules
Rule: Red_Light
IF the light is red
THEN stop
Rule: Green_Light
IF the light is green
THEN go
antecedent
(left-hand-side)
consequent
(right-hand-side)
Production Rules
the light is red ==> stop
the light is green ==> go
13. 13
Inference Engine Cycle
describes the execution of rules by the
inference engine
– match
• update the agenda
– add rules whose antecedents are satisfied to the agenda
– remove non-satisfied rules from agendas
– conflict resolution
• select the rule with the highest priority from the agenda
– execution
• perform the actions on the consequent of the selected rule
• remove the rule from the agenda
the cycle ends when
– no more rules are on the agenda, or
– an explicit stop command is encountered
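The match / conflict-resolution / execute cycle can be sketched in a few lines of Python using the traffic-light rules above. The rule representation and the function name are illustrative assumptions, not taken from the slides:

```python
# A minimal production-system sketch. Each rule is a (name, antecedent,
# consequent) triple; working memory is a set of facts (strings).

def run_production_system(rules, working_memory, max_cycles=10):
    """Repeat the match / conflict-resolution / execute cycle."""
    fired = []
    for _ in range(max_cycles):
        # match: the agenda holds rules whose antecedent is satisfied
        # and that have not fired yet (so the loop terminates)
        agenda = [r for r in rules
                  if r[1] in working_memory and r[0] not in fired]
        if not agenda:                 # no more rules on the agenda -> stop
            break
        # conflict resolution: here simply the first rule on the agenda
        name, antecedent, consequent = agenda[0]
        working_memory.add(consequent)  # execute: assert the consequent
        fired.append(name)
    return working_memory, fired

rules = [
    ("Red_Light",   "the light is red",   "stop"),
    ("Green_Light", "the light is green", "go"),
]
memory, fired = run_production_system(rules, {"the light is red"})
print(fired)             # -> ['Red_Light']
print("stop" in memory)  # -> True
```

Only the rule whose antecedent matches working memory fires; its consequent becomes a new fact that later rules could match in turn.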
14. 14
The Water Jugs Problem
2 jugs
– 5 gallon
– 3 gallon
How can you get exactly 4 gallons into the 5
gallon jug?
Possible operators:
– Empty jug
– Fill jug from tap
– Pour contents from one jug into another
• Empty contents of one jug into the other
• Transfer some of the contents of one jug to fill up the other
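The operators above can be searched breadth-first in a few lines of Python. The function name and the (a, b) state encoding are my own; the operators are exactly the ones listed:

```python
from collections import deque

def solve_jugs(cap_a=5, cap_b=3, goal=4):
    """BFS over (a, b) states: gallons in the 5- and 3-gallon jugs."""
    start = (0, 0)
    parent = {start: None}          # predecessor map, doubles as visited set
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == goal:               # goal: 4 gallons in the 5-gallon jug
            path, s = [], (a, b)
            while s is not None:    # walk back to the start state
                path.append(s)
                s = parent[s]
            return path[::-1]
        pour_ab = min(a, cap_b - b)           # amount transferable a -> b
        pour_ba = min(b, cap_a - a)           # amount transferable b -> a
        for s in [(cap_a, b), (a, cap_b),     # fill a jug from the tap
                  (0, b), (a, 0),             # empty a jug
                  (a - pour_ab, b + pour_ab), # pour a into b
                  (a + pour_ba, b - pour_ba)]:# pour b into a
            if s not in parent:
                parent[s] = (a, b)
                queue.append(s)
    return None

path = solve_jugs()
print(path)   # 7 states from (0, 0) to (4, 3), i.e. 6 operator applications
```

Because BFS explores states level by level, the returned sequence uses the fewest possible operator applications (six, for the 5/3-gallon instance).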
20. 20
2.2.2 State Space Search
State Space consists of 4 components
1. A set S of start (or initial) states
2. A set G of goal (or final) states
3. A set of nodes representing all possible
states
4. A set of arcs connecting nodes
representing possible actions in different
states.
21. Problem solving by search
Represent the problem as STATES and
OPERATORS that transform one state into
another state.
A solution to the problem is an OPERATOR
SEQUENCE that transforms the INITIAL
STATE into a GOAL STATE.
Finding the sequence requires SEARCHING
the STATE SPACE by GENERATING the
paths connecting the two.
21
22. 22
Missionaries and Cannibals: Initial
State and Actions
initial state:
– all missionaries, all
cannibals, and the
boat are on the left
bank
5 possible actions:
– one missionary
crossing
– one cannibal crossing
– two missionaries
crossing
– two cannibals crossing
– one missionary and
one cannibal crossing
24. 24
Missionaries and Cannibals: Goal
State and Path Cost
goal state:
– all missionaries, all
cannibals, and the
boat are on the
right bank.
path cost
– step cost: 1 for each
crossing
– path cost: number of
crossings = length of
path
solution path:
– 4 optimal solutions
– cost: 11
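The 11-crossing figure can be checked with a short breadth-first search. The state encoding (m, c, b) and helper names below are my own choices, not from the slides:

```python
from collections import deque

def solve_mc():
    """BFS for missionaries and cannibals; a state (m, c, b) counts the
    missionaries, cannibals and boats on the LEFT bank."""
    def safe(m, c):
        # missionaries must not be outnumbered on either bank
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    crossings = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]  # the 5 actions
    start, goal = (3, 3, 1), (0, 0, 0)
    parent = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        m, c, b = state
        sign = -1 if b == 1 else 1    # the boat carries people away from its bank
        for dm, dc in crossings:
            nm, nc = m + sign * dm, c + sign * dc
            nxt = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) \
                    and nxt not in parent:
                parent[nxt] = (m, c, b)
                queue.append(nxt)
    return None

path = solve_mc()
print(len(path) - 1)   # -> 11 crossings
```

BFS guarantees the shortest solution, so the printed count of 11 crossings confirms the optimal path cost quoted above.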
26. 26
Example: Measuring problem!
Three jugs a, b, c with capacities 3 l, 5 l and 9 l.
(1 possible) Solution:
a b c
0 0 0
3 0 0
0 0 3
3 0 3
0 0 6
3 0 6
0 3 6
3 3 6
1 5 6
0 5 7
Initial state: (0, 0, 0); Goal state: (0, 5, 7)
28. 28
Which solution do we prefer?
• Solution 1:
a b c
0 0 0
3 0 0
0 0 3
3 0 3
0 0 6
3 0 6
0 3 6
3 3 6
1 5 6
0 5 7
• Solution 2:
a b c
0 0 0
0 5 0
3 2 0
3 0 2
3 5 2
3 0 7
29. 29
8-queens
• State: any arrangement of up to 8 queens on the board
• Initial state: no queens on the board
• Operation: add a queen to any empty square
• Goal state: no queen is attacked (like the above board).
30. 30
8-queens… Improved states
• State: an arrangement of n (up to 8) queens, 1 each in the n
leftmost columns
• Operation: add a queen in leftmost empty column such that it is
not attacked by the other queens.
Improvement: Just 2057 possible states instead of P(64,8) ≈ 1.8 × 10^14
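The 2057 figure can be verified directly by counting every partial placement of non-attacking queens, one per leftmost column (a small sketch; the function names are my own):

```python
def count_states(n=8):
    """Count states of the improved formulation: k non-attacking queens,
    one per column, in the k leftmost columns (k = 0 .. n)."""
    states, solutions = 0, 0

    def attacked(rows, row):
        # would a queen at (row, len(rows)) be attacked by those in `rows`?
        col = len(rows)
        return any(r == row or abs(r - row) == abs(c - col)
                   for c, r in enumerate(rows))

    def extend(rows):
        nonlocal states, solutions
        states += 1                    # every partial placement is a state
        if len(rows) == n:
            solutions += 1             # 8 queens placed, none attacked
            return
        for row in range(n):
            if not attacked(rows, row):
                extend(rows + [row])   # add a queen in the next column

    extend([])                         # start from the empty board
    return states, solutions

print(count_states())   # -> (2057, 92)
```

The count includes the empty board and every legal partial placement; as a bonus, the same enumeration finds the well-known 92 complete solutions.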
31. 31
Control Strategies
The control strategy is one of the most
important components of an intelligent
system; it specifies the order in which
rules are applied in a given state.
A good control strategy should have the
following properties:
– Cause motion
– Be systematic
32. 32
Data/Goal-driven Strategies
Data-driven search = forward chaining.
– Start from an initial state and work towards a
goal state.
– Examples seen so far
Goal-driven search = backward chaining.
– Start from a goal state and work towards an
initial state.
– Prolog programming, theorem proving …
33. 33
Seven problem characteristics
1. Decomposability of a problem
Towers of Hanoi
2. Can solution steps be ignored or undone?
Ignorable : theorem proving
Some of the lemmas proved can be ignored
Recoverable : 8 tile puzzle
solution steps can be undone (backtracking)
Irrecoverable : chess
solution steps can not be undone
34. 34
3. Is the universe predictable?
• 8-puzzle (yes)
• Bridge/chess (no), but we can use the
probabilities of each possible outcome
4. Is a good solution absolute or relative?
• More than one solution?
• traveling salesman problem
Seven problem characteristics
35. 35
5. Is the solution a state or a path ?
- Given a sequence of formulae, does a statement
follow from them?
- water jug problem path / plan
6. What is the role of knowledge?
knowledge for perfect program of chess
(need knowledge to constrain the search)
newspaper story understanding
(need knowledge to recognize a solution)
7. Does the task require interaction with a
person? solitary/ conversational
Seven problem characteristics
36. 36
Toy Problems vs.
Real-World Problems
Toy Problems
– concise and exact
description
– used for illustration
purposes (e.g. here)
– used for performance
comparisons
– all the above
examples
Real-World Problems
– no single, agreed-upon description
– people care about the
solutions (useful)
38. 38
Touring in Romania:
Search Problem Definition
initial state:
– In(Arad)
possible Actions:
– DriveTo(Zerind), DriveTo(Sibiu),
DriveTo(Timisoara), etc.
goal state:
– In(Bucharest)
step cost:
– distances between cities
39. 39
Searching for solutions
An agent with several immediate options of
unknown value can decide what to do by first
examining different possible sequences of
actions that lead to states of known value, and
then choosing the best sequence.
search (through the state space) for
• a goal state
• a sequence of actions that leads to a goal state
• a sequence of actions with minimal path cost
that leads to a goal state
40. 40
Search Trees
search tree: tree structure defined by initial
state and successor function
Touring Romania (partial search tree):
In(Arad)
In(Zerind) In(Sibiu) In(Timisoara)
In(Arad) In(Oradea) In(Fagaras) In(Rimnicu Vilcea)
In(Sibiu) In(Bucharest)
41. 41
Search Nodes
search nodes: the nodes in the search tree
data structure:
– state: a state in the state space
– parent node: the immediate predecessor in the
search tree
– action: the action that, performed in the parent
node’s state, leads to this node’s state
– path cost: the total cost of the path leading to this
node
– depth: the depth of this node in the search tree
42. 42
Expanded Search Nodes
in Touring Romania Example
In(Arad)
In(Zerind) In(Sibiu) In(Timisoara)
In(Arad) In(Oradea) In(Fagaras) In(Rimnicu Vilcea)
In(Sibiu) In(Bucharest)
43. 43
Fringe Nodes
in Touring Romania Example
fringe nodes: nodes that have not been
expanded
In(Arad)
In(Zerind) In(Sibiu) In(Timisoara)
In(Arad) In(Oradea) In(Fagaras) In(Rimnicu Vilcea)
In(Sibiu) In(Bucharest)
44. 44
A General State-Space Search Algorithm
open := {S}; closed :={};
repeat
n := select(open); /* select one node from open for
expansion */
if n is a goal
then exit with success; /* delayed goal testing */
expand(n)
/* generate all children of n
put these newly generated nodes in open
(check duplicates)
put n in closed (to check duplicates) */
until open = {};
exit with failure
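The skeleton above translates almost line for line into Python. The names are illustrative; `select` is the pluggable control strategy (which node to pick decides whether the search behaves breadth-first, best-first, etc.):

```python
def general_search(start, is_goal, successors, select=lambda open_: open_[0]):
    """Direct transcription of the OPEN/CLOSED skeleton; `select` chooses
    one node from open for expansion (here: simply the first)."""
    open_, closed = [start], set()
    while open_:
        n = select(open_)                 # select one node from open
        open_.remove(n)
        if is_goal(n):                    # delayed goal testing
            return n                      # exit with success
        closed.add(n)
        for child in successors(n):       # expand n: generate all children
            if child not in open_ and child not in closed:  # check duplicates
                open_.append(child)
    return None                           # open is empty: exit with failure

graph = {"S": ["A", "B"], "A": ["G"], "B": [], "G": []}
print(general_search("S", lambda s: s == "G", graph.get))   # -> G
```

Swapping in a different `select` (last node, cheapest node, most promising node) yields DFS, uniform-cost or heuristic search without touching the rest of the loop.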
45. 45
Key Features of the General
Search Algorithm
systematic
– guaranteed to not generate the same state
infinitely often
– guaranteed to come across every state
eventually
incremental
– attempts to reach a goal state step by step
(rather than guessing it all at once)
46. 46
General Search Algorithm:
Touring Romania Example
(figure: the partial search tree rooted at In(Arad), with the fringe
nodes and the node selected for expansion marked)
47. 47
Evaluating Search Strategies
• Completeness: Is it guaranteed that a solution will be
found?
• Optimality: Is the best solution found when several
solutions exist?
• Time complexity: How long does it take to find a solution?
• Space complexity: How much memory is needed to
perform a search?
• Branching factor (b)
1 + b + b^2 + b^3 + ... + b^d nodes in a tree
with branching factor b and depth d
48. 48
Search Cost vs. Total Cost
search cost:
– time (and memory) used to find a solution
total cost:
– search cost + path cost of solution
optimal trade-off point:
– further computation to find a shorter path
becomes counterproductive
49. 49
Uninformed vs. Informed Search
uninformed search (blind search)
– no additional information about states beyond
problem definition
– only goal states and non-goal states can be
distinguished
– E.g., BFS, DFS, DFID, UCS,…
informed search (heuristic search)
– additional information about how “promising” a state
is available
– Greedy HDFS, BeFs, A*, …
50. 50
Breadth-First Search
strategy:
– expand root node
– expand successors of root node
– expand successors of successors of root
node
– etc.
implementation:
– use FIFO queue to store fringe nodes in
general tree search algorithm
51. 51
Breadth First Search Algorithm
open := [Start]; // Initialize
closed := [ ];
while open != [ ] // While states remain
{ remove the leftmost state from open, call it X;
if X is a goal return success; // Success
else
{ generate children of X;
put X on closed;
eliminate children of X
on open or closed; // loops
put remaining children at the
END of open; } // QUEUE
}
return failure;
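The algorithm above can be written as runnable Python; the example graph is an assumption for illustration. The queue holds whole paths so the solution path can be returned, and remaining children go at the END of open (a FIFO queue):

```python
from collections import deque

def bfs(start, goal, successors):
    """Breadth-first search: the open list is a FIFO queue of paths."""
    open_ = deque([[start]])
    closed = {start}
    while open_:
        path = open_.popleft()            # remove the leftmost state
        if path[-1] == goal:
            return path                   # success
        for child in successors(path[-1]):
            if child not in closed:       # eliminate duplicates (loops)
                closed.add(child)
                open_.append(path + [child])   # children at the END (QUEUE)
    return None                           # failure

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"], "G": []}
print(bfs("S", "G", graph.get))   # -> ['S', 'B', 'G']
```

Note that BFS finds the two-edge path through B even though A comes first in S's successor list, because all depth-1 nodes are expanded before any depth-2 node.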
54. 54
Time complexity of BFS
• If a goal node is found on depth d of the tree, all nodes
up till that depth are created.
(figure: tree with branching factor b, goal G at depth d)
Thus: O(b^(d+1))
55. 55
Space complexity of BFS
• Largest number of nodes in the QUEUE is reached on
the level d of the goal node.
In general: O(b^(d+1))
56. 56
Exponential Complexity:
Important Lessons
memory requirements are a bigger problem for
breadth-first search than is the execution time
time requirements are still a major factor
exponential-complexity search problems
cannot be solved by uninformed methods for
any but the smallest instances
57. 57
Depth-First Search
strategy:
– always expand the deepest node in the
current fringe first
– when a sub-tree has been completely
explored, delete it from memory and “back
up”
implementation:
– use LIFO queue (stack) to store fringe nodes
in general tree search algorithm
58. 58
Depth First Search Algorithm
open := [Start]; // Initialize
closed := [ ];
while open != [ ] // While states remain
{ remove the leftmost state from open, call it X;
if X is a goal return success; // Success
else
{ generate children of X;
put X on closed;
eliminate children of X
on open or closed; // loops
put remaining children at the
START of open; } // STACK
}
return failure;
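In Python the only change from the breadth-first version of this loop is where the children are placed: at the START of open, making it a stack. The example graph and the depth `limit` guard are my own additions:

```python
def dfs(start, goal, successors, limit=50):
    """Depth-first search: identical to BFS except that children are
    prepended to the open list (LIFO / stack discipline)."""
    open_ = [[start]]
    closed = set()
    while open_:
        path = open_.pop(0)               # remove the leftmost state
        state = path[-1]
        if state == goal:
            return path                   # success
        if state in closed or len(path) > limit:
            continue                      # duplicate, or too deep
        closed.add(state)
        children = [path + [c] for c in successors(state) if c not in closed]
        open_ = children + open_          # children at the START (STACK)
    return None                           # failure

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"], "G": []}
print(dfs("S", "G", graph.get))   # -> ['S', 'A', 'C', 'G']
```

On the same graph DFS dives down the A branch and returns the three-edge path, while BFS would have found the shorter one through B; this is the optimality difference discussed on the next slides.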
61. 61
Time complexity of DFS
• In the worst case:
• the goal node may be on the right-most branch,
(figure: tree with branching factor b, goal G at depth d)
Time complexity: O(b^(d+1))
62. 62
Space complexity of DFS
• Largest number of nodes in QUEUE is reached in
bottom left-most node.
Order: O(b·d)
63. 63
Evaluation of Depth-first & Breadth-first
• Completeness: Is it guaranteed that a solution will be found?
• Yes for BFS
• No for DFS
• Optimality: Is the best solution found when several solutions exist?
• No for both BFS and DFS if edges are of different length
• Yes for BFS and No for DFS if edges are of same length
• Time complexity: How long does it take to find a solution?
• Worst case: both exponential
• Average case: DFS is better than BFS
• Space complexity: How much memory is needed to perform a
search?
• Exponential for BFS
• Linear for DFS
64. 64
Depth-first vs Breadth-first
Use depth-first when
– Space is restricted
– High branching factor
– There are no solutions with short paths
– No infinite paths
Use breadth-first when
– Possible infinite paths
– Some solutions have short paths
– Low branching factor
65. 65
Depth-First Iterative Deepening (DFID)
BF and DF both have exponential time complexity O(b^d)
BF is complete but has exponential space complexity
DF has linear space complexity but is incomplete
Space is often a harder resource constraint than time
Can we have an algorithm that
– is complete and
– has linear space complexity ?
DFID by Korf in 1985
First do DFS to depth 0 (i.e., treat start node as having
no successors), then, if no solution found, do DFS to
depth 1, etc.
until solution found do
DFS with depth bound c
c = c+1
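The outline above is short enough to implement directly; the recursive depth-limited helper and the example graph are illustrative choices:

```python
def dfid(start, goal, successors, max_depth=20):
    """Depth-first iterative deepening: depth-limited DFS with
    bound c = 0, 1, 2, ... until a solution is found."""
    def depth_limited(path, bound):
        state = path[-1]
        if state == goal:
            return path
        if bound == 0:                       # depth bound reached
            return None
        for child in successors(state):
            if child not in path:            # avoid cycles on the current path
                result = depth_limited(path + [child], bound - 1)
                if result is not None:
                    return result
        return None

    for c in range(max_depth + 1):           # until solution found: c = c + 1
        result = depth_limited([start], c)
        if result is not None:
            return result
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"], "G": []}
print(dfid("S", "G", graph.get))   # -> ['S', 'B', 'G']
```

Each iteration uses only the linear space of one depth-first path, yet the increasing bound makes the search complete: here it returns the two-edge path through B, like BFS, even though plain DFS would have dived down the A branch first.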
67. 67
Depth-First Iterative Deepening (DFID)
Complete (iteratively generate all nodes up to
depth d)
Optimal if all operators have the same cost.
Otherwise, not optimal but does guarantee
finding solution of fewest edges (like BF).
Linear space complexity: O(b·d) (like DF)
Time complexity is exponential, O(b^d), but a
little worse than BFS or DFS because nodes
near the top of the search tree are generated
multiple times.
Worst case time complexity is exponential
for all blind search algorithms !
69. 69
Uniform/Lowest-Cost (UCS/LCFS)
BFS, DFS, DFID do not take path cost into account.
Let g(n) = cost of the path from the start node to an
open node n
Algorithm outline:
– Always select from the OPEN the node with the
least g(.) value for expansion, and put all newly
generated nodes into OPEN
– Nodes in OPEN are sorted by their g(.) values (in
ascending order)
– Terminate if a node selected for expansion is a goal
Called “Dijkstra's Algorithm” in the algorithms
literature and similar to “Branch and Bound
Algorithm” in operations research literature
70. 70
A Uniform-cost Search Algorithm
open := {S}; closed :={};
repeat
n := select(open); /* select the 1st node from open
for expansion */
if n is a goal
then exit with success; /* delayed goal testing */
expand(n)
/* generate all children of n
put these newly generated nodes in open
(check duplicates)
sort open (by path cost g(n))
put n in closed (check duplicates) */
until open = {};
exit with failure
71. 71
UCS example
A
D
B
E
C
F
G
S
3
4
4
4
5 5
4
3
2
AFTER open closed
ITERATION
0 [S(0)] [ ]
1 [A(3), D(4)] [S(0)]
2 [D(4), B(7)] [S(0), A(3)]
3 [E(6), B(7)] [S(0), A(3), D(4)]
4 [B(7), F(10)] [S(0), A(3), D(4), E(6)]
5 [F(10), C(11)] [S(0), A(3), D(4), E(6), B(7)]
6 [C(11), G(13)] [S(0), A(3), D(4), E(6), B(7), F(10)]
7 [G(13)] [S(0), A(3), D(4), E(6), B(7), F(10), C(11)]
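The trace can be reproduced with a heap-based sketch in Python. The edge costs below are reconstructed from the g(.) values in the table (an assumption, since the original figure is not legible here):

```python
import heapq

def ucs(start, goal, graph):
    """Uniform-cost search: always expand the open node with least g(n)."""
    open_ = [(0, start, [start])]   # heap of (g, state, path), sorted by g
    closed = set()
    order = []                      # expansion order, to compare with the trace
    while open_:
        g, state, path = heapq.heappop(open_)
        if state in closed:
            continue                # a cheaper copy was expanded earlier
        if state == goal:           # delayed goal test, on selection
            return g, path, order
        closed.add(state)
        order.append(state)
        for child, cost in graph[state]:
            if child not in closed:
                heapq.heappush(open_, (g + cost, child, path + [child]))
    return None

# Edge costs reconstructed from the trace above (assumed, not from the figure):
graph = {
    "S": [("A", 3), ("D", 4)],
    "A": [("B", 4), ("D", 5)],
    "B": [("C", 4), ("E", 5)],
    "C": [],
    "D": [("E", 2)],
    "E": [("B", 5), ("F", 4)],
    "F": [("G", 3)],
    "G": [],
}
cost, path, order = ucs("S", "G", graph)
print(cost, path)   # -> 13 ['S', 'D', 'E', 'F', 'G']
print(order)        # -> ['S', 'A', 'D', 'E', 'B', 'F', 'C']
```

With these costs the expansion order S, A, D, E, B, F, C matches the table row by row, and the goal G is selected with path cost 13.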
72. 72
Uniform-Cost -- analysis
Complete (if every edge cost is at least some positive constant ε)
– The total # of nodes n with g(n) ≤ g(goal) in the state
space is finite
– Goal node will eventually be generated (put in OPEN)
and selected for expansion (and passes the goal test)
Optimal
– When the first goal node is selected for expansion (and
passes the goal test), its path cost is less than or equal to
g(n) of every OPEN node n (and solutions entailed by n)
Exponential time and space complexity, O(b^d)
where d is the depth of the solution path of the least
cost solution
73. 73
Bidirectional Search
idea: run two simultaneous searches:
– one forward from the initial state,
– one backward from the goal state,
until the two fringes meet.
The solution path must cross the meeting
point.
Start Goal
74. 74
Bidirectional Search: Caveats
What search strategy for forward/backward
search?
– breadth-first search
How to check whether a node is in the other
fringe?
– hash table
must know goal state for backward search
must be able to compute predecessors and
successors for a given state
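A sketch of the idea in Python, interleaving one breadth-first expansion from each end and using dicts as the hash tables mentioned above (the example graph and its precomputed predecessor map are assumptions):

```python
from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    """Two interleaved breadth-first searches; stop when the fringes meet."""
    if start == goal:
        return [start]
    fwd, bwd = {start: [start]}, {goal: [goal]}  # state -> path from its end
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        s = qf.popleft()                         # one forward expansion
        for n in successors(s):
            if n in bwd:                         # fringes meet at n
                return fwd[s] + bwd[n][::-1]
            if n not in fwd:
                fwd[n] = fwd[s] + [n]
                qf.append(n)
        s = qb.popleft()                         # one backward expansion
        for n in predecessors(s):
            if n in fwd:                         # fringes meet at s
                return fwd[n] + bwd[s][::-1]
            if n not in bwd:
                bwd[n] = bwd[s] + [n]
                qb.append(n)
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["G"], "G": []}
pred  = {"S": [], "A": ["S"], "B": ["S"], "C": ["A", "B"], "G": ["C"]}
print(bidirectional_search("S", "G", graph.get, pred.get))
# -> ['S', 'A', 'C', 'G']
```

Note how the backward search needs the `pred` map: computing predecessors is exactly the requirement flagged in the caveats above.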
77. 77
When to use what
Depth-First Search:
– Space is limited
– High branching factor
– No infinite branches
Breadth-First Search:
– Some solutions are known to be shallow
Iterative-Deepening Search:
– Space is limited and the shortest solution path is required
Uniform-Cost Search:
– Actions have varying costs
– Least-cost solution is required
The only uninformed search that worries about costs.
78. 78
Repeated states
Repeated states can be a source of great inefficiency:
identical subtrees will be explored many times!
– Failure to detect repeated states can turn a linear problem
into an exponential one !
How much effort to invest in detecting repetitions?
79. 79
Strategies for repeated states
Do not expand the state that was just generated
– constant time, prevents cycles of length one, i.e.,
A,B,A,B…
Do not expand states that appear in the path
– time linear in the depth of node, prevents some cycles
of the type A,B,C,D,A
Do not expand states that were expanded before
– can be expensive! Use hash table to avoid looking at
all nodes every time.
80. 80
Summary: uninformed search
Problem formulation and representation is key!
– State formulation with care (8-queens)
– Action formulation with care (8-puzzle)
Implementation as expanding directed graph of
states and transitions
Appropriate for problems where no solution is
known and many combinations must be tried
Problem space is of exponential size in the number
of world states -- NP-hard problems
Fails due to lack of space and/or time.
81. Homework
1. Explain the seven characteristics of AI problems.
2. Define state space search.
3. Compare the search strategies BFS, DFS, DFID and UCS
based on time complexity, space complexity, optimality and
completeness.
4. A farmer is stranded on an island with 3 of his belongings:
cabbage, goat and tiger. He has a small boat capable of
carrying him and one of his belongings. He cannot leave
cabbage and goat unattended (the cabbage will be eaten
by the goat) and cannot leave tiger and goat unattended
(the tiger will eat the goat). Design a state-space
representation for this search problem and draw the
state space showing the first 3 levels.
81
82. Homework
5. In the rabbit leap problem, three east-bound rabbits
stand in a line blocked by three west-bound rabbits as
shown below. They are crossing a stream with stones
placed in a line. A rabbit can only move forward or
jump over one rabbit to get to an unoccupied stone.
Design a state-space representation for this search
problem and draw the state-space showing the first 3
levels. Find a solution path in the state space.
82