Breadth-first search expands nodes in order of their distance from the root, searching shallow nodes before deeper ones. It uses a queue to store nodes at each level, processing the shallowest unexpanded node first. This continues until the goal is found or the entire search space is explored. Breadth-first search is complete but can require large amounts of memory and time for problems with large state spaces.
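The queue-based expansion just described can be sketched in a few lines of Python (the graph, state names, and helper function are invented for illustration, not taken from the original slides):

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Return a shallowest path from start to goal, or None.

    `neighbors` maps a state to its successor states; the graph below
    is a made-up example.
    """
    frontier = deque([[start]])    # FIFO queue of paths, shallowest first
    visited = {start}
    while frontier:
        path = frontier.popleft()  # expand the shallowest unexpanded node
        node = path[-1]
        if node == goal:
            return path
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                    # entire search space explored, no goal

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs("A", "E", graph))  # ['A', 'B', 'D', 'E']
```

Because the queue is first-in first-out, every node at depth d is expanded before any node at depth d+1, which is what makes BFS complete; the price is that the frontier can hold an entire level of the tree at once.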
This document discusses problem solving agents in artificial intelligence. It explains that problem solving agents focus on satisfying goals by formulating the goal based on the current situation, then formulating the problem by determining the actions needed to achieve the goal. Key components of problem formulation include the initial state, possible actions, transition model describing how actions change the state, a goal test, and path cost function. Two examples of well-defined problems are given: the 8-puzzle problem and the 8-queens problem.
This document discusses search algorithms and problem solving through searching. It begins by defining search problems and representing them using graphs with states as nodes and actions as edges. It then covers uninformed search strategies like breadth-first and depth-first search. Informed search strategies use heuristics to guide the search toward more promising areas of the problem space. Examples of single agent pathfinding problems are given like the traveling salesman problem and Rubik's cube.
Self-adaptation is a prominent property for developing complex distributed software systems. Notable approaches to self-adaptation use runtime goal models as artifacts. Goals are generally invariant over the system lifecycle but contain points of variability that allow the system to decide among many alternative behaviors.
This work investigates how to provide goal models at run-time that do not contain tasks, i.e. descriptions of how to address goals, thus breaking the design-time tie between tasks and goals that is usually the outcome of a means-end analysis. In this vision it is up to the system to decide how to combine its available capabilities: the Proactive Means-End Reasoning.
The aim of this research line is to enable a goal-oriented form of self-adaptation in which goal models can be injected at runtime. The paper also introduces MUSA, a Middleware for User-driven Service self-Adaptation.
16890 unit 2 heuristic search techniques (Jais Balta)
The document discusses heuristic search techniques for artificial intelligence. It covers greedy search which uses a heuristic function f(n) = h(n) to choose the successor node with the lowest estimated cost to reach the goal. An example of the travelling salesman problem is provided to illustrate greedy search.
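The greedy rule f(n) = h(n) can be sketched as follows; the graph and heuristic values below are invented for illustration, not taken from the slides:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy search: always expand the node with the lowest h(n).

    f(n) = h(n), the estimated cost from n to the goal.
    """
    frontier = [(h[start], start, [start])]  # priority queue keyed on h
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

h = {"S": 7, "A": 6, "B": 2, "G": 0}
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(greedy_best_first("S", "G", graph, h))  # ['S', 'B', 'G']
```

Note that greedy search ignores the cost already paid to reach a node, so it can be fast but is neither complete (in graphs with loops, without a visited set) nor optimal.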
This document discusses different search algorithms for traversing tree structures:
- Depth-first search (DFS) explores the deepest paths first, using a stack data structure. It is not optimal, and it is complete only when the state space is finite.
- Breadth-first search (BFS) explores all nodes at each depth level first, before deeper levels, using a queue. It finds the minimum depth goal node.
- Uniform cost search prioritizes exploring the lowest cost path first, using a priority queue ordered by path cost. It is optimal, finding the least cost goal node.
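The three strategies differ mainly in how the frontier is ordered. Uniform-cost search, for example, can be sketched with a priority queue keyed on path cost (the weighted graph below is an invented example):

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """Expand the lowest-cost path first; returns (cost, path) or None.

    `edges[u]` lists (neighbor, step_cost) pairs.
    """
    frontier = [(0, start, [start])]  # priority queue ordered by path cost
    best = {}                         # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path          # least-cost goal node, hence optimal
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nxt, step in edges.get(node, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

edges = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)]}
print(uniform_cost_search("S", "G", edges))  # (3, ['S', 'A', 'B', 'G'])
```

Swapping the priority queue for a plain FIFO queue recovers BFS, and for a LIFO stack recovers DFS, which is why the three are often presented together.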
The document describes problem solving by search. It defines problem solving as finding a sequence of actions and states that lead from an initial state to a goal state. A problem is defined by the initial state, possible actions/successor states, goal test, and path costs. Search is the process of systematically examining states to find a path from the start to goal state. An example problem discussed is getting an agent from Arad to Bucharest in Romania by traveling between cities. The document also discusses well-defined problems, solutions, and toy problems like the vacuum world.
This document provides a summary of Lecture 3 on problem-solving by searching. It describes how problem-solving agents can formulate goals and problems, represent the problem as a state space, and find solutions using search algorithms like breadth-first search, uniform-cost search, depth-first search, and iterative deepening search. Examples of search problems discussed include the Romania pathfinding problem, vacuum world, and the 8-puzzle.
Problem Solving Agents decide what to do by finding a sequence of actions tha... (KrishnaVeni451953)
1) Problem solving agents formulate goals based on the current situation, then formulate problems by deciding which actions and states to consider to achieve the goal. Agents search for solutions in the form of action sequences. Once a solution is found, it is recommended for execution.
2) Problems can involve single or multiple states. Well-defined problems specify the initial state, available actions, how actions change states, the goal state, and path costs. Search algorithms take problem definitions as input and output solutions.
3) Evaluating problem-solving performance considers search cost for finding solutions and total cost including solution path and search costs. Abstraction removes detail from problem representations.
A lecture related to the topic of Artificial Intelligence (mohsinwaseer1)
The document discusses different types of problem-solving agents, including reflex agents which directly map states to actions, and goal-based agents which solve problems by searching for sequences of actions that lead to desirable goal states. It provides examples of well-defined problems like the vacuum world and 8-puzzle that involve specifying an initial state, possible actions, transition models, a goal test, and a path cost function. The document also discusses how real-world problems like route planning and airline travel can be modeled as search problems by defining states, actions, transitions between states, and optimal solutions.
1) Means-ends analysis is a problem-solving technique that breaks down goals into sub-goals and determines actions to achieve each sub-goal step-by-step to ultimately reach the end goal.
2) It works by first evaluating the current state, defining the target goal, and then splitting the target goal into sub-goals linked to executable actions.
3) The actions are applied to reduce the differences between the current and target states, tracking changes made until the end goal is achieved.
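The three steps above can be sketched as a recursive procedure that sub-goals on operator preconditions; the operator names, facts, and representation below are invented for illustration:

```python
def achieve(goal, state, operators, plan):
    """Means-ends analysis sketch: for each unmet goal fact, pick an
    operator that adds it, first achieving that operator's
    preconditions as sub-goals.

    `state` is a set of facts; `operators` is a list of
    (name, preconditions, added_facts) tuples.
    """
    for fact in goal:
        if fact in state:
            continue                       # no difference to reduce
        for name, pre, adds in operators:
            if fact in adds:
                achieve(pre, state, operators, plan)  # sub-goal first
                state |= adds              # apply the action
                plan.append(name)
                break
        else:
            raise ValueError(f"cannot achieve {fact}")
    return plan

ops = [
    ("walk_to_door", set(), {"at_door"}),
    ("pick_up_key", {"at_door"}, {"has_key"}),
    ("open_door", {"has_key", "at_door"}, {"door_open"}),
]
print(achieve({"door_open"}, set(), ops, []))
# ['walk_to_door', 'pick_up_key', 'open_door']
```

This toy version omits delete-effects and loop detection, but it shows the core idea: each action is chosen because it reduces the difference between the current state and the target state.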
The document discusses problem formulation for solving problems using search algorithms. It provides examples of formulating problems like route finding between cities and solving the 8-puzzle as state space problems. Key components of problem formulation are defined as the initial state, successor function, goal test, and path cost. Real-life applications that can be formulated as search problems are also presented, such as robot navigation, vehicle routing, and assembly sequencing.
Artificial Intelligence involves representing problems as state spaces and using algorithms to search the state space to solve the problem. The document discusses key concepts in problem solving using search including representing the problem as states, defining state transitions with successor functions, and exploring the resulting state space to find a solution. It provides examples of representing common problems like the 8-puzzle and n-queens as state spaces. The document also summarizes uninformed search strategies like breadth-first, depth-first, and iterative deepening search that use the problem definition to search the state space without using heuristics.
The slide covers the information about agent and environment. It explains the way to define problems as a state space along with the constraint satisfaction problem.
Problem solving
Problem formulation
Search Techniques for Artificial Intelligence
Classification of AI searching Strategies
What is a Search Strategy?
Defining a Search Problem
State Space Graph versus Search Trees
Graph vs. Tree
Problem Solving by Search
The document discusses search techniques in artificial intelligence. It defines search as finding a sequence of actions to achieve a goal state. Common problems that use search include problem solving, natural language processing, computer vision, and machine learning. Search involves defining a search space with states, operators to transition between states, an initial state, and a goal test. Popular uninformed search techniques like breadth-first search and depth-first search are explained. The document also introduces informed search techniques like uniform cost search that use cost information to guide the search towards optimal solutions.
State Space Search and Control Strategies in Artificial Intelligence.pptx (RSAISHANKAR)
This PowerPoint presentation covers State Space Search and Control Strategies in Artificial Intelligence.
BFS and DFS are two algorithms for traversing or searching tree and graph data structures. BFS uses a queue to visit all nodes level-by-level starting from the root node, while DFS uses a stack to visit nodes by going deeper first and prioritizing depth over breadth. For a binary tree, BFS traverses the tree level-by-level from left to right and results in the order 1 2 3 4 5. DFS can traverse the tree in different orders like inorder (left-root-right), preorder (root-left-right), and postorder (left-right-root) by using a stack.
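The five-node binary tree from that summary (1 at the root, 2 and 3 as its children, 4 and 5 under 2) can be traversed directly to reproduce the stated orders:

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):               # BFS: queue, level by level
    out, q = [], deque([root])
    while q:
        n = q.popleft()
        out.append(n.val)
        q.extend(c for c in (n.left, n.right) if c)
    return out

def preorder(root):                  # DFS: root-left-right
    if root is None:
        return []
    return [root.val] + preorder(root.left) + preorder(root.right)

def inorder(root):                   # DFS: left-root-right
    if root is None:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

tree = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(level_order(tree))  # [1, 2, 3, 4, 5]
print(preorder(tree))     # [1, 2, 4, 5, 3]
print(inorder(tree))      # [4, 2, 5, 1, 3]
```

The recursive DFS variants use the call stack implicitly; an explicit stack gives the same orders iteratively.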
Searching is a technique used in AI to solve problems by exploring possible states or solutions. The document discusses various search algorithms used in single-agent pathfinding problems like sliding tile puzzles. It describes brute force search strategies like breadth-first search and depth-first search, and informed search strategies like A* search, greedy best-first search, hill-climbing search and simulated annealing that use heuristic functions. Local search algorithms are also summarized.
In which we see how an agent can find a sequence of actions that achieves its goals, when no single action will do.
Solving a problem through AI involves defining the search space, deciding the start and goal states, and then finding a path from the start state to the goal state through the search space.
State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the goal of finding a goal state with a desired property.
The document discusses various problem solving techniques in artificial intelligence, including different types of problems, components of well-defined problems, measuring problem solving performance, and different search strategies. It describes single-state and multiple-state problems, and defines the key components of a problem including the data type, operators, goal test, and path cost. It also explains different search strategies such as breadth-first search, uniform cost search, depth-first search, depth-limited search, iterative deepening search, and bidirectional search.
An agent is anything that can perceive its environment and take actions to affect that environment. Agents can be human, robotic, or software based. The PEAS framework describes an agent's Performance measures, Environment, Actuators, and Sensors. Problem solving agents use a goal-based approach, representing problems as an initial state, possible actions or operators, and a goal test to determine if a state is a solution. Search algorithms systematically examine states to find a path from the start to a goal state. The performance of search algorithms depends on their completeness, optimality, time and space complexity which are measured based on the branching factor and depth of solutions in the search space.
This document discusses various search techniques used in artificial intelligence problem solving. It defines a search problem as consisting of an initial state, goal state, set of all possible states, and operators to transform between states. Uninformed searches like breadth-first search explore the state space without any heuristics, while informed searches use heuristics to guide the search. The performance of different search strategies is evaluated based on completeness, optimality, time complexity and space complexity.
This document provides an overview of problem solving through searching. It defines key concepts like agents, sensors, actuators, and effectors. It explains that an intelligent agent perceives its environment, thinks, and acts to achieve goals. Search algorithms take problems as input and return solutions as sequences of actions. Problems are formulated by defining the search space, start state, and goal test. Search techniques explore the state space using actions and transition models to find optimal solutions. Common examples like the 8-puzzle and n-queens problems are presented. Tree search algorithms simulate state space exploration by expanding already explored states. A general search algorithm is outlined using open and closed lists to iteratively find solutions.
The document discusses various search algorithms used in artificial intelligence problem solving. It defines key search terminology like problem space, states, actions, and goals. It then explains different types of search problems and provides examples like the 8-puzzle and vacuuming world problems. Finally, it summarizes uninformed search strategies like breadth-first search, depth-first search, and iterative deepening search as well as informed strategies like greedy best-first search and A* search which use heuristics to guide the search.
The document discusses problem solving agents and how to formulate problems for agents to solve. It explains that problem solving involves defining a goal, formulating the initial state, possible actions, and transition model between states. A search algorithm can then find a solution path through the state space from the initial to goal states. The performance of search algorithms depends on factors like completeness, optimality, and time and space complexity which are determined by properties of the state space like branching factor and solution depth. Examples of problems discussed include the vacuuming agent, 8-puzzle, and traveling salesman problems.
The document discusses various aspects of problem solving and production systems including:
- Problem characteristics like decomposability and recoverability impact the appropriate problem solving approach.
- Production systems consist of rules, databases, and a control strategy to apply rules.
- Well-designed heuristics can efficiently guide search toward solutions without exploring all possibilities.
- Different problem types like classification and design are suited to different control strategies like proposing and refining solutions.
"Problem-Solving Strategies in Artificial Intelligence" delves into the core techniques and methods employed by AI systems to address complex problems. This exploration covers the two main categories of search strategies: uninformed and informed, revealing how they navigate the solution space. It also investigates the use of heuristics, which provide a shortcut for guiding the search, and local search algorithms' role in tackling optimization problems. The description offers insights into the critical concepts and strategies that power AI's ability to find solutions efficiently and effectively in various domains.
In "Problem-Solving Strategies in Artificial Intelligence," we dive deeper into the foundational techniques and methodologies that AI systems rely on to tackle challenging problems. This comprehensive exploration begins with an in-depth examination of search strategies. Uninformed search strategies, often referred to as blind searches, are dissected, along with informed search strategies that harness domain-specific knowledge and heuristics to guide the search process more intelligently.
The role of heuristics in AI problem-solving is thoroughly investigated. These problem-solving techniques employ domain-specific rules of thumb to estimate the quality of potential solutions, aiding in decision-making and prioritization. The famous A* search algorithm, which combines actual cost and heuristic estimation, is highlighted as a prime example of informed search.
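A minimal sketch of A* as described, ordering the frontier by f(n) = g(n) + h(n), where g(n) is the actual cost so far and h(n) the heuristic estimate (the graph, step costs, and heuristic values are invented, chosen to be admissible for this graph):

```python
import heapq

def a_star(start, goal, edges, h):
    """A* search: returns (cost, path) to the goal, or None.

    `edges[u]` lists (neighbor, step_cost) pairs; `h` maps a node to its
    heuristic estimate of the remaining cost.
    """
    frontier = [(h[start], 0, start, [start])]  # keyed on f = g + h
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue
        best_g[node] = g
        for nxt, step in edges.get(node, []):
            g2 = g + step
            heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

edges = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 5, "B": 1, "G": 0}
print(a_star("S", "G", edges, h))  # (5, ['S', 'B', 'G'])
```

With an admissible h (one that never overestimates), A* returns an optimal solution; with h = 0 everywhere it degenerates to uniform-cost search.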
Local search algorithms, another critical component, are discussed in the context of optimization problems. These algorithms excel in finding the best solution within a local neighborhood of the current solution and are particularly valuable for various optimization challenges. You'll explore methods like hill climbing and simulated annealing, which are vital for optimizing solutions in constrained problem spaces.
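Hill climbing, for instance, reduces to a short loop over neighboring solutions; the one-dimensional objective and neighbor function below are an invented toy:

```python
def hill_climb(initial, neighbors, score, max_steps=1000):
    """Hill climbing: repeatedly move to the best-scoring neighbor;
    stop when no neighbor improves on the current solution
    (a local maximum)."""
    current = initial
    for _ in range(max_steps):
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current            # local maximum reached
        current = best
    return current

# toy example: maximize -(x - 3)^2 over the integers, neighbors x +/- 1
score = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbors, score))  # 3
```

Simulated annealing differs in that it sometimes accepts a worse neighbor with a temperature-dependent probability, which lets it escape the local maxima where plain hill climbing gets stuck.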
This insightful exploration provides a comprehensive understanding of the problem-solving strategies employed in AI, offering a solid foundation for those seeking to apply AI techniques to real-world challenges and further the field of artificial intelligence.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a high class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
In "Problem-Solving Strategies in Artificial Intelligence," we dive deeper into the foundational techniques and methodologies that AI systems rely on to tackle challenging problems. This comprehensive exploration begins with an in-depth examination of search strategies. Uninformed search strategies, often referred to as blind searches, are dissected, along with informed search strategies that harness domain-specific knowledge and heuristics to guide the search process more intelligently.
The role of heuristics in AI problem-solving is thoroughly investigated. These problem-solving techniques employ domain-specific rules of thumb to estimate the quality of potential solutions, aiding in decision-making and prioritization. The famous A* search algorithm, which combines actual cost and heuristic estimation, is highlighted as a prime example of informed search.
Local search algorithms, another critical component, are discussed in the context of optimization problems. These algorithms excel in finding the best solution within a local neighborhood of the current solution and are particularly valuable for various optimization challenges. You'll explore methods like hill climbing and simulated annealing, which are vital for optimizing solutions in constrained problem spaces.
This insightful exploration provides a comprehensive understanding of the problem-solving strategies employed in AI, offering a solid foundation for those seeking to apply AI techniques to real-world challenges and further the field of artificial intelligence.
4. Introduction
Goal-based agents can succeed by considering future
actions and the desirability of their outcomes.
A problem-solving agent is a goal-based agent that
decides what to do by finding sequences of actions that
lead to desirable states.
5. Problem solving
We want:
To automatically solve a problem
We need:
A representation of the problem
Algorithms that use some strategy to solve the problem
defined in that representation
6. Problem representation
General:
– State space: a problem is divided into a set of
resolution steps from the initial state to the goal state
– Reduction to sub-problems: a problem is arranged
into a hierarchy of sub-problems
7. States
A problem is defined by its elements and their
relations.
In each instant of the resolution of a problem,
those elements have specific descriptors (How to
select them?) and relations.
A state is a representation of those elements in a
given moment.
Two special states are defined:
– Initial state (starting point)
– Final state (goal state)
8. State modification: successor function
A successor function is needed to move between
different states.
A successor function is a description of the possible
actions, a set of operators. It is a transformation
function on a state representation, which converts it
into another state.
The successor function defines a relation of
accessibility among states.
Representation of the successor function:
Conditions of applicability
Transformation function
9. State space
The state space is the set of all states reachable
from the initial state.
It forms a graph (or map) in which the nodes are
states and the arcs between nodes are actions.
A path in the state space is a sequence of states
connected by a sequence of actions.
The solution of the problem is part of the map
formed by the state space.
10. Problem solution
A solution in the state space is a path from the
initial state to a goal state.
Path/solution cost: function that assigns a
numeric cost to each path, the cost of applying the
operators to the states.
Solution quality is measured by the path cost
function, and an optimal solution has the lowest
path cost among all solutions.
Solutions: any, an optimal one, all. Cost is
important depending on the problem and the type
of solution sought.
11. Problem description
Components:
State space (explicitly or implicitly defined)
Initial state
Goal state (or the conditions it has to fulfill)
Available actions (operators to change state)
Restrictions (e.g., cost)
Elements of the domain which are relevant to the
problem (e.g., incomplete knowledge of the starting
point)
Type of solution:
Sequence of operators or goal state
Any, an optimal one (cost definition needed), all
12. Problem solving agents
Intelligent agents are supposed to maximize their performance measure.
This can be simplified if the agent can adopt a goal and aim at
satisfying it.
Goal formulation, based on the current situation and the agent’s
performance measure, is the first step in problem solving
A goal is a set of states. The agent’s task is to find out which sequence of
actions will get it to a goal state
Problem formulation is the process of deciding what sorts of actions
and states to consider, given a goal
13. Contd..
An agent with several immediate options of unknown value can decide
what to do by first examining different possible sequences of actions
that lead to states of known value, and then choosing the best sequence,
Looking for such a sequence is called search,
A search algorithm takes a problem as input and returns a solution in
the form of action sequence,
Once a solution is found, the actions it recommends can be carried
out: the execution phase.
14. Contd..
“formulate, search, execute” design for the agent,
After formulating a goal and a problem to solve the agent calls a search
procedure to solve it,
It then uses the solution to guide its actions, doing whatever the
solution recommends as the next thing to do (typically the first action in
the sequence),
Then removing that step from the sequence,
Once the solution has been executed, the agent will formulate a new
goal.
16. Problem types
Deterministic, fully observable single-state problem
Agent knows exactly which state it will be in;
Non-observable sensorless problem (conformant
problem)
Agent may have no idea where it is;
Nondeterministic and/or partially observable
contingency problem
percepts provide new information about current state
Unknown state space exploration problem
17. Example: Romania
On holiday in Romania; currently in Arad.
Flight leaves tomorrow from Bucharest
Formulate goal:
be in Bucharest
Formulate problem:
states: various cities
actions: drive between cities
Find solution:
sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
19. Well-defined problems and solutions
A problem can be defined formally by four components
Initial state that the agent starts in – e.g. In(Arad)
A description of the possible actions available to the agent
– Successor function – returns a set of <action, successor> pairs
– e.g. {<Go(Sibiu),In(Sibiu)>, <Go(Timisoara),In(Timisoara)>, <Go(Zerind), In(Zerind)>}
– Initial state and the successor function define the state space ( a graph in which the nodes are
states and the arcs between nodes are actions). A path in state space is a sequence of states
connected by a sequence of actions
Goal test determines whether a given state is a goal state – e.g.{In(Bucharest)}
Path cost function that assigns a numeric cost to each path. The cost of a path can be described
as the sum of the costs of the individual actions along the path (step cost), e.g., the time to reach
Bucharest
20. Single-state problem formulation
A problem is defined by four items:
1. initial state e.g., "at Arad“
2. actions or successor function S(x) = set of action–state pairs
e.g., S(Arad) = {<Arad → Zerind, Zerind>, … }
3. goal test, can be
explicit, e.g., x = "at Bucharest“
4. path cost (additive)
e.g., sum of distances, number of actions executed, etc.
step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
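The four items above can be sketched in Python. This is a minimal sketch, not a definitive implementation: the map fragment and the action names are a hypothetical subset of the Romania example, and unit step costs are assumed.

```python
# Hypothetical fragment of the Romania map: state -> {action: successor}.
ROMANIA = {
    "Arad": {"Go(Sibiu)": "Sibiu", "Go(Timisoara)": "Timisoara", "Go(Zerind)": "Zerind"},
    "Sibiu": {"Go(Fagaras)": "Fagaras"},
    "Fagaras": {"Go(Bucharest)": "Bucharest"},
}

class Problem:
    """The four components: initial state, successor function, goal test, path cost."""
    def __init__(self, initial, successors, goal):
        self.initial, self.successors, self.goal = initial, successors, goal

    def actions(self, state):
        # Successor function: returns the {action: successor} pairs for a state.
        return self.successors.get(state, {})

    def goal_test(self, state):
        # Explicit goal test, e.g., x = "at Bucharest".
        return state == self.goal

    def step_cost(self, state, action, result):
        return 1  # unit cost assumed here; a real map would use road distances

problem = Problem("Arad", ROMANIA, "Bucharest")
```

A solution is then any action sequence whose successive results lead from `problem.initial` to a state passing `problem.goal_test`.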
22. Example: The 8-puzzle
states? locations of tiles
actions? move blank left, right, up, down
goal test? = goal state (given)
path cost? 1 per move
23. Tree search algorithms
Basic idea:
offline, simulated exploration of state space by
generating successors of already-explored states
(a.k.a. expanding states)
27. Search strategies
A search strategy is defined by picking the order of node
expansion
Strategies are evaluated along the following dimensions:
completeness: does it always find a solution if one exists?
time complexity: number of nodes generated
space complexity: maximum number of nodes in memory
optimality: does it always find a least-cost solution?
Time and space complexity are measured in terms of
b: maximum branching factor of the search tree
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
29. Uninformed search strategies
Uninformed search strategies use only the information
available in the problem definition
Breadth-first search
Uniform-cost search
Depth-first search
Depth-limited search
Iterative deepening search
30. Breadth-first search
The root node is expanded first, then all the successors of the root
node, and their successors and so on
In general, all the nodes are expanded at a given depth in the search tree
before any nodes at the next level are expanded
Expand shallowest unexpanded node
Implementation:
– fringe is a FIFO queue,
– the nodes that are visited first will be expanded first
–All newly generated successors will be put at the end of the queue
– Shallow nodes are expanded before deeper nodes
31-44. Breadth-first Search (trace)
The fringe is the data structure we use to store all of the
nodes that have been generated.
Search tree used in the trace: A (initial state) has successors
B, C, D; B has E, F; C has G, H; D has I, J; E has K, L; F has M;
G has N; H has O; I has P, Q; J has R; K has S; L has T.
N is the goal state.
Step 1: expand A; successors B,C,D; fringe (FIFO): B,C,D; visited: A
Step 2: expand B; successors E,F; fringe: C,D,E,F; visited: A,B
Step 3: expand C; successors G,H; fringe: D,E,F,G,H; visited: A,B,C
Step 4: expand D; successors I,J; fringe: E,F,G,H,I,J; visited: A-D
Step 5: expand E; successors K,L; fringe: F,G,H,I,J,K,L; visited: A-E
Step 6: expand F; successor M; fringe: G,H,I,J,K,L,M; visited: A-F
Step 7: expand G; successor N; fringe: H,I,J,K,L,M,N; visited: A-G
Step 8: expand H; successor O; fringe: I,J,K,L,M,N,O; visited: A-H
Step 9: expand I; successors P,Q; fringe: J,K,L,M,N,O,P,Q; visited: A-I
Step 10: expand J; successor R; fringe: K,L,M,N,O,P,Q,R; visited: A-J
Step 11: expand K; successor S; fringe: L,M,N,O,P,Q,R,S; visited: A-K
Step 12: expand L; successor T; fringe: M,N,O,P,Q,R,S,T; visited: A-L
Step 13: expand M; no successors; fringe: N,O,P,Q,R,S,T; visited: A-M
Step 14: remove N from the fringe: goal state achieved
45. Breadth-first Search
Algorithm BREADTH: Breadth first search in state space
Let fringe be a list containing the initial state
Loop: if fringe is empty, return failure
Node ← remove-first(fringe)
if Node is a goal
then return the path from the initial state to Node
else generate all successors of Node, and
merge the newly generated nodes into the fringe,
adding them to the back (FIFO)
End Loop
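Algorithm BREADTH can be sketched directly in Python. The tree and the goal node N are taken from the trace slides above; the fringe stores whole paths (an implementation choice, not in the slide) so the solution path can be returned.

```python
from collections import deque

def breadth_first_search(initial, successors, is_goal):
    """Sketch of Algorithm BREADTH: the fringe is a FIFO queue and
    newly generated successors go to the back."""
    fringe = deque([[initial]])           # store paths so we can return one
    while fringe:                         # Loop: if fringe is empty, fail
        path = fringe.popleft()           # Node <- remove-first(fringe)
        node = path[-1]
        if is_goal(node):
            return path                   # path from initial state to Node
        for succ in successors.get(node, []):
            fringe.append(path + [succ])  # add to the back of the fringe
    return None                           # failure

# The search tree from the trace slides (N is the goal):
TREE = {"A": ["B","C","D"], "B": ["E","F"], "C": ["G","H"], "D": ["I","J"],
        "E": ["K","L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P","Q"],
        "J": ["R"], "K": ["S"], "L": ["T"]}
print(breadth_first_search("A", TREE, lambda s: s == "N"))  # ['A', 'C', 'G', 'N']
```

As in the trace, every node at depths 0-2 (A through M) is removed from the fringe before the goal N is reached.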
46. Properties of breadth-first search
Complete? Yes (if b is finite)
Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
Space? O(b^(d+1)) (keeps every node in memory)
Optimal? Yes (if cost = 1 per step)
Space is the bigger problem (more than time)
47. Uniform-cost search (UCS)
Uniform cost search is a search algorithm used to traverse
and find the shortest path in weighted trees and graphs.
Uniform Cost Search or UCS begins at a root node and will
continually expand nodes, taking the node with the smallest
total cost from the root until it reaches the goal state.
Uniform cost search doesn't care about how many steps a
path has, only about the total cost of the path.
UCS with all step costs equal to one is identical to
breadth-first search.
49-52. Uniform Cost Search (trace)
Weighted graph used in the trace: S is the start node; the edges are
S→A (cost 1), S→B (cost 5), S→C (cost 15), A→D (cost 10), B→D (cost 5);
D is the goal.
Step 1: Fringe = [S0]. Head of fringe = S; S is not the goal.
Successor(S) = {A,B,C}; sort the queue according to path cost.
Updated fringe = [A1, B5, C15]
Step 2: Head of fringe = A; A is not the goal.
Successor(A) = {D}, reached with path cost 1 + 10 = 11.
Updated fringe = [B5, D11, C15]
Step 3: Head of fringe = B; B is not the goal.
Successor(B) = {D}, reached with path cost 5 + 5 = 10.
Updated fringe = [D10, D11, C15]
Step 4: Head of fringe = D with cost 10 = 5 + 5: D is a GOAL.
Solution path: S, B, D. UCS always finds the cheapest solution.
56. Uniform-cost search
Expand least-cost unexpanded node
Implementation:
fringe = queue ordered by path cost
Equivalent to breadth-first if step costs all equal
Complete? Yes, if step cost ≥ ε
Time? # of nodes with g ≤ cost of optimal solution, O(b^(1+⌈C*/ε⌉)), where
C* is the cost of the optimal solution
Space? # of nodes with g ≤ cost of optimal solution, O(b^(1+⌈C*/ε⌉))
Optimal? Yes – nodes expanded in increasing order of g(n)
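A minimal UCS sketch with the fringe as a priority queue ordered by path cost g, using the weighted graph from the trace slides:

```python
import heapq

def uniform_cost_search(start, graph, goal):
    """Expand the least-cost unexpanded node; fringe ordered by path cost g."""
    fringe = [(0, start, [start])]        # entries are (g, state, path)
    explored = set()
    while fringe:
        g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path                # cheapest path to the goal
        if node in explored:
            continue                      # skip costlier duplicates (e.g., D11)
        explored.add(node)
        for succ, step in graph.get(node, {}).items():
            heapq.heappush(fringe, (g + step, succ, path + [succ]))
    return None

# Weighted graph from the trace: S->A(1), S->B(5), S->C(15), A->D(10), B->D(5)
GRAPH = {"S": {"A": 1, "B": 5, "C": 15}, "A": {"D": 10}, "B": {"D": 5}}
print(uniform_cost_search("S", GRAPH, "D"))  # (10, ['S', 'B', 'D'])
```

The pops reproduce the trace: S, then A (g=1), then B (g=5), then D at g=10, so the costlier D11 entry is never used.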
57. Depth First Search - Method
Expand Root Node First
Explore one branch of the tree before exploring another
branch
If a leaf node does not represent a goal state, the search
backtracks up to the next highest node that has an
unexplored path
58. DFS
Depth-first search (DFS) is an algorithm for
traversing or searching a tree or graph.
One starts at the root (selecting some node as the root
in the graph case) and explores as far as possible along
each branch before backtracking.
59. DFS
DFS is an uninformed search that starts from the root node
of the search tree and goes deeper and deeper until a
goal node is found, or until it hits a node that has no
children. Then the search backtracks, returning to the
most recent node it hasn't finished exploring.
60-71. Depth-First Search (trace)
Same search tree as the breadth-first trace: A (initial state) has
successors B, C, D; B has E, F; C has G, H; D has I, J; E has K, L;
F has M; G has N; H has O; I has P, Q; J has R; K has S; L has T.
N is the goal state.
Step 1: expand A; successors B,C,D; fringe (LIFO): B,C,D; visited: A
Step 2: expand B; successors E,F; fringe: E,F,C,D; visited: A,B
Step 3: expand E; successors K,L; fringe: K,L,F,C,D; visited: A,B,E
Step 4: expand K; successor S; fringe: S,L,F,C,D; visited: A,B,E,K
Step 5: expand S; no successors; backtrack; fringe: L,F,C,D; visited: A,B,E,K,S
Step 6: expand L; successor T; fringe: T,F,C,D; visited: A,B,E,K,S,L
Step 7: expand T; no successors; backtrack; fringe: F,C,D; visited: A,B,E,K,S,L,T
Step 8: expand F; successor M; fringe: M,C,D; visited: A,B,E,K,S,L,T,F
Step 9: expand M; no successors; backtrack; fringe: C,D; visited: A,B,E,K,S,L,T,F,M
Step 10: expand C; successors G,H; fringe: G,H,D; visited: A,B,E,K,S,L,T,F,M,C
Step 11: expand G; successor N; fringe: N,H,D; visited: A,B,E,K,S,L,T,F,M,C,G
Step 12: remove N from the fringe: goal state achieved; search finished
72. Depth First Search
Let fringe be a list containing the initial state
Loop
if fringe is empty, return failure
Node ← remove-first(fringe)
if Node is a goal
then return the path from the initial state to Node
else generate all successors of Node, and
merge the newly generated nodes into the fringe,
adding them to the front (LIFO)
End Loop
Depth-First Search
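The DFS loop is the BREADTH loop with one change: generated nodes go to the front of the fringe. A sketch on the same tree as the trace slides (fringe entries are whole paths, an implementation choice so the solution path can be returned):

```python
def depth_first_search(initial, successors, is_goal):
    """Same loop as BREADTH, but successors go to the FRONT (LIFO)."""
    fringe = [[initial]]                  # list of paths, used as a stack
    while fringe:
        path = fringe.pop(0)              # Node <- remove-first(fringe)
        node = path[-1]
        if is_goal(node):
            return path
        children = [path + [s] for s in successors.get(node, [])]
        fringe = children + fringe        # add to the front of the fringe
    return None

# The search tree from the trace slides (N is the goal):
TREE = {"A": ["B","C","D"], "B": ["E","F"], "C": ["G","H"], "D": ["I","J"],
        "E": ["K","L"], "F": ["M"], "G": ["N"], "H": ["O"], "I": ["P","Q"],
        "J": ["R"], "K": ["S"], "L": ["T"]}
print(depth_first_search("A", TREE, lambda s: s == "N"))  # ['A', 'C', 'G', 'N']
```

The visit order matches the trace (A, B, E, K, S, L, T, F, M, C, G, then the goal N), showing the dive-and-backtrack behaviour.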
73. Properties of depth-first search
Complete? No: fails in infinite-depth spaces,
spaces with loops
Modify to avoid repeated states along path
complete in finite spaces
Time? O(b^m): terrible if m is much larger than d
Space? O(b·m), i.e., linear space!
Optimal? No
74. Depth limited search
Like Depth first search, but the search is limited to a
predefined depth.
The depth of each state is recorded as it is generated. When
picking the next state to expand, only those with depth less or
equal than the current depth are expanded.
Once all the nodes of a given depth are explored, the current
depth is incremented.
75. Depth-limited search
= depth-first search with depth limit l,
i.e., nodes at depth l have no successors
Recursive implementation:
Determine the vertex where the search should start and assign the maximum
search depth
Check if the current vertex is the goal state
If not: Do nothing
If yes: return
Check if the current vertex is within the maximum search depth
If not: Do nothing
If yes:
Expand the vertex and save all of its successors in a stack
Call DLS recursively for all vertices of the stack and go back to Step 2
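The recursive steps above can be sketched as follows. The small tree is an invented example (a fragment of the earlier trace tree) where the goal N sits at depth 3, so a limit of 2 cuts the search off.

```python
def depth_limited_search(node, successors, is_goal, limit):
    """Recursive DLS sketch: nodes at depth `limit` get no successors."""
    if is_goal(node):                     # Step 2: is this vertex the goal?
        return [node]
    if limit == 0:
        return None                       # Step 3: beyond the depth limit, cut off
    for succ in successors.get(node, []): # expand and recurse on successors
        result = depth_limited_search(succ, successors, is_goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

TREE = {"A": ["B", "C", "D"], "C": ["G", "H"], "G": ["N"]}
print(depth_limited_search("A", TREE, lambda s: s == "N", 2))  # None (N is at depth 3)
print(depth_limited_search("A", TREE, lambda s: s == "N", 3))  # ['A', 'C', 'G', 'N']
```

Wrapping this in a loop that increments the limit (0, 1, 2, …) gives iterative deepening search.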
86. Best-first search
Idea: use an evaluation function f(n) for each node
f(n) provides an estimate for the total cost.
Expand the node n with smallest f(n).
Implementation:
Order the nodes in the fringe in increasing order of cost.
Special cases:
greedy best-first search
A* search
88. Greedy best-first search
f(n) = h(n) = estimate of cost from n to goal
e.g., hSLD(n) = straight-line distance from n to
Bucharest
Greedy best-first search expands the node that appears
to be closest to goal.
93. Properties of greedy best-first search
Complete? No – can get stuck in loops.
Time? O(b^m), but a good heuristic can give dramatic
improvement
Space? O(b^m): keeps all nodes in memory
Optimal? No
94. A* search
Idea: avoid expanding paths that are already expensive
Evaluation function f(n) = g(n) + h(n)
g(n) = cost so far to reach n
h(n) = estimated cost from n to goal
f(n) = estimated total cost of path through n to goal
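A sketch of A* under these definitions, using the weighted graph from the UCS trace. The heuristic values are invented for illustration but admissible: each h(n) is at most the true cost from n to the goal D.

```python
import heapq

def a_star(start, graph, h, goal):
    """A* sketch: f(n) = g(n) + h(n); expand the node with smallest f."""
    fringe = [(h(start), 0, start, [start])]   # entries are (f, g, state, path)
    closed = set()
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return g, path
        if node in closed:
            continue                           # skip costlier duplicates
        closed.add(node)
        for succ, step in graph.get(node, {}).items():
            g2 = g + step                      # cost so far to reach succ
            heapq.heappush(fringe, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Same weighted graph as the UCS trace, plus an invented admissible heuristic:
GRAPH = {"S": {"A": 1, "B": 5, "C": 15}, "A": {"D": 10}, "B": {"D": 5}}
H = {"S": 7, "A": 8, "B": 4, "C": 2, "D": 0}   # h(n) <= true cost from n to D
print(a_star("S", GRAPH, H.get, "D"))  # (10, ['S', 'B', 'D'])
```

With an admissible h, the first time D is popped its path is optimal, just as in the UCS trace.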
101. Admissible heuristics
A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal
state from n.
An admissible heuristic never overestimates the cost to
reach the goal, i.e., it is optimistic
Example: hSLD(n) (never overestimates the actual road
distance)
Theorem: If h(n) is admissible, A* using TREE-SEARCH
is optimal
102. Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the
fringe. Let n be an unexpanded node in the fringe such that n is on a
shortest path to an optimal goal G.
We want to prove:
f(n) < f(G2)
(then A* will prefer n over G2)
f(G2) = g(G2) since h(G2) = 0
f(G) = g(G) since h(G) = 0
g(G2) > g(G) since G2 is suboptimal
f(G2) > f(G) from above
103. Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe.
Let n be an unexpanded node in the fringe such that n is on a shortest path
to an optimal goal G.
f(G2) > f(G) copied from last slide
h(n) ≤ h*(n) since h is admissible (under-estimate)
g(n) + h(n) ≤ g(n) + h*(n) from above
f(n) ≤ f(G) since g(n)+h(n)=f(n) & g(n)+h*(n)=f(G)
f(n) < f(G2) from top line.
Hence: n is preferred over G2
104. Consistent heuristics
A heuristic is consistent if for every node n, every successor n' of n
generated by any action a,
h(n) ≤ c(n,a,n') + h(n')
If h is consistent, we have
f(n') = g(n') + h(n')
= g(n) + c(n,a,n') + h(n')
≥ g(n) + h(n) = f(n)
f(n’) ≥ f(n)
Theorem:
If h(n) is consistent, A* using GRAPH-SEARCH is optimal
It’s the triangle
inequality !
105. Optimality of A*
A* expands nodes in order of increasing f value.
Gradually adds "f-contours" of nodes
Contour i contains all nodes with f ≤ fi where fi < fi+1
106. Properties of A*
Complete? Yes (unless there are infinitely many nodes with
f ≤ f(G); guaranteed when every step cost ≥ ε)
Time/Space? Exponential in the worst case,
except if |h(n) − h*(n)| ≤ O(log h*(n))
Optimal? Yes
Optimally Efficient: Yes (no algorithm with the same
heuristic is guaranteed to expand fewer nodes)
107. Memory Bounded Heuristic Search:
Recursive BFS
How can we solve the memory problem for A* search?
Idea: Try something like depth first search, but let’s not
forget everything about the branches we have partially
explored.
We remember the best f-value we have found so far in the
branch we are deleting.
108. RBFS:
RBFS changes its mind
very often in practice.
This is because the
f = g + h values become more
accurate (less optimistic)
as we approach the goal.
Hence, higher level nodes
have smaller f-values and
will be explored first.
Problem: We should keep
in memory whatever we can.
(Figure note: RBFS keeps the best alternative f-value over fringe
nodes that are not children, and uses it to decide whether to back up.)
109. Simple Memory Bounded A*
This is like A*, but when memory is full we delete the worst
node (largest f-value).
Like RBFS, we remember the best descendent in the branch
we delete.
If there is a tie (equal f-values), we delete the oldest nodes
first.
simple-MBA* finds the optimal reachable solution given the
memory constraint.
Time can still be exponential.
110. Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
h1(S) = ?
h2(S) = ?
111. Admissible heuristics
E.g., for the 8-puzzle:
h1(n) = number of misplaced tiles
h2(n) = total Manhattan distance
(i.e., no. of squares from desired location of each tile)
h1(S) = ? 8
h2(S) = ? 3+1+2+2+2+3+3+2 = 18
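Both heuristics can be computed directly. The state S below is assumed: it is the standard 8-puzzle example whose values match the slide's answers (h1 = 8, h2 = 18); states are 9-tuples read row by row with 0 marking the blank.

```python
# Goal configuration and the assumed example state S, read row by row.
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)
S = (7, 2, 4, 5, 0, 6, 8, 3, 1)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def h2(state):
    """Total Manhattan distance of each tile from its goal square."""
    dist = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        g = GOAL.index(t)                       # goal position of tile t
        dist += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
    return dist

print(h1(S), h2(S))  # 8 18
```

Both are admissible: every misplaced tile needs at least one move (h1), and at least its Manhattan distance in moves (h2).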
112. Dominance
If h2(n) ≥ h1(n) for all n (both admissible)
then h2 dominates h1
h2 is better for search: it is guaranteed to expand
fewer nodes.
Typical search costs (average number of nodes
expanded):
d=12 IDS = 3,644,035 nodes
A*(h1) = 227 nodes
A*(h2) = 73 nodes
d=24 IDS = too many nodes
A*(h1) = 39,135 nodes
A*(h2) = 1,641 nodes
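The dominance claim can be checked empirically with a small A* implementation. The sketch below (start state assumed to be the AIMA example, blank encoded as 0) counts node expansions under h1 and h2; the exact counts depend on tie-breaking, but h2 should expand no more nodes than h1:

```python
import heapq

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def neighbours(state):
    """States reachable by sliding a tile into the blank (0)."""
    b = state.index(0)
    r, c = divmod(b, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[b], s[j] = s[j], s[b]
            yield tuple(s)

def h1(s):
    """Misplaced tiles (blank excluded)."""
    return sum(1 for i, t in enumerate(s) if t != 0 and t != GOAL[i])

def h2(s):
    """Total Manhattan distance (tile t's goal square has index t)."""
    return sum(abs(i // 3 - t // 3) + abs(i % 3 - t % 3)
               for i, t in enumerate(s) if t != 0)

def astar_expansions(start, h):
    """A* with unit step costs; returns the number of node expansions."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, g, s = heapq.heappop(frontier)
        if g > best_g.get(s, float("inf")):
            continue  # stale queue entry
        expanded += 1
        if s == GOAL:
            return expanded
        for n in neighbours(s):
            ng = g + 1
            if ng < best_g.get(n, float("inf")):
                best_g[n] = ng
                heapq.heappush(frontier, (ng + h(n), ng, n))

start = (7, 2, 4, 5, 0, 6, 8, 3, 1)
e1, e2 = astar_expansions(start, h1), astar_expansions(start, h2)
print(e1, e2)  # h2 (dominant) expands no more nodes than h1
```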
113. Relaxed problems
A problem with fewer restrictions on the actions is
called a relaxed problem
The cost of an optimal solution to a relaxed problem
is an admissible heuristic for the original problem
If the rules of the 8-puzzle are relaxed so that a tile
can move anywhere, then h1(n) gives the shortest
solution
If the rules are relaxed so that a tile can move to any
adjacent square, then h2(n) gives the shortest solution
114. Local search algorithms
State space = set of "complete" configurations
Keep only current node in memory
Local search is useful for solving optimization
problems:
Often it is easy to find a solution
But hard to find the best solution
115. Example: n-queens
Put n queens on an n × n board with no two queens on
the same row, column, or diagonal
116. Hill Climbing
Generate-and-test + direction to move.
Heuristic function to estimate how close a given state is to a
goal state.
117. Hill Climbing
Hill climbing is an optimization technique for solving
computationally hard problems.
Used in problems with “the property that the state description
itself contains all the information”
The algorithm is memory efficient since it does not maintain a
search tree
Hill climbing attempts to iteratively improve the current state by
means of an evaluation function
Searching for a goal state = Climbing to the top of a hill
118. Simple Hill Climbing
Algorithm
1. determine successors of current state
2. choose successor of maximum goodness
3. if goodness of best successor is less than current state's
goodness, stop
4. otherwise make best successor the current state and go
to step 1
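The four steps above can be sketched generically; the integer "landscape" at the end is a hypothetical example, not from the slides:

```python
def hill_climb(state, successors, value):
    """Steepest-ascent hill climbing following the four steps above
    (also stopping on ties, to avoid looping on plateaus)."""
    current = state
    while True:
        neighbours = successors(current)          # step 1
        if not neighbours:
            return current
        best = max(neighbours, key=value)         # step 2
        if value(best) <= value(current):         # step 3
            return current
        current = best                            # step 4

# Hypothetical landscape: maximize -(x-3)^2 over integers, moving ±1.
value = lambda x: -(x - 3) ** 2
succ = lambda x: [x - 1, x + 1]
print(hill_climb(0, succ, value))  # 3
```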
119. Hill Climbing (Gradient Search)
Considers all the moves from the current state.
Selects the best one as the next state.
120. Hill Climbing: Disadvantages
Local maximum
A state that is better than all of its neighbours, but not
better than some other states far away.
123. Hill Climbing: Conclusion
Can be very inefficient in a large, rough problem
space.
A global heuristic can help, but at the cost of added
computational complexity.
Often useful when combined with other methods that
get it started in the right general neighbourhood.
124. Simulated annealing search
function SIM-ANNEALING(problem, schedule)
current = INITIAL-STATE(problem)
for t = 1 to ∞ do
temperature = schedule(t)
if temperature = 0 then return current
next = randomly selected successor of current
diff = VALUE(next) - VALUE(current)
if diff > 0
then current = next
else current = next only with probability e^(diff/temperature)
end
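A runnable sketch of this loop, with the acceptance probability e^(diff/T) made explicit; the linear cooling schedule and the toy objective below are assumed examples:

```python
import math
import random

def simulated_annealing(value, successor, initial, schedule):
    """Accept improving moves always; accept worsening moves
    with probability e^(diff/T), where T = schedule(t)."""
    current = initial
    t = 1
    while True:
        T = schedule(t)
        if T <= 0:
            return current
        nxt = successor(current)
        diff = value(nxt) - value(current)
        if diff > 0 or random.random() < math.exp(diff / T):
            current = nxt
        t += 1

# Hypothetical example: maximize -(x-5)^2 with linear cooling.
random.seed(0)
sched = lambda t: max(0.0, 2.0 - 0.002 * t)   # reaches 0 at t = 1000
result = simulated_annealing(lambda x: -(x - 5) ** 2,
                             lambda x: x + random.choice([-1, 1]),
                             0, sched)
print(result)  # ends at or next to the optimum x = 5
```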
125. Local beam search
Keep track of k states rather than just one.
Start with k randomly generated states.
At each iteration, all the successors of all k states
are generated.
If any one is a goal state, stop; else select the k best
successors from the complete list and repeat.
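A minimal sketch of the loop just described; the ±1 successor function and the goal value 10 are hypothetical:

```python
import heapq

def beam_search(k, initial_states, successors, value, is_goal,
                max_iters=100):
    """Local beam search: stop on a goal, else keep the k best
    successors of the current k states and repeat."""
    beam = list(initial_states)
    for _ in range(max_iters):
        for s in beam:
            if is_goal(s):
                return s
        pool = [c for s in beam for c in successors(s)]
        if not pool:
            break
        beam = heapq.nlargest(k, pool, key=value)
    return max(beam, key=value)

# Hypothetical task: reach 10 from several starts, stepping ±1.
result = beam_search(3, [0, 2, 7],
                     lambda s: [s - 1, s + 1],
                     lambda s: -abs(10 - s),
                     lambda s: s == 10)
print(result)  # 10
```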
126. Genetic algorithms
A successor state is generated by combining two parent
states
Start with k randomly generated states (population)
A state is represented as a string over a finite alphabet
(often a string of 0s and 1s)
Evaluation function (fitness function). Higher values for
better states.
Produce the next generation of states by selection,
crossover, and mutation
127. Fitness function: number of non-attacking pairs of queens
(min = 0, max = 8 × 7/2 = 28)
The probability of a state being selected to reproduce
into the next generation is proportional to its fitness, e.g.:
24/(24+23+20+11) = 31%
23/(24+23+20+11) = 29% etc.
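A sketch of this fitness function, checked against the four example boards in the AIMA 8-queens figure (the board strings below are assumptions based on that figure; digit i gives the row of the queen in column i):

```python
from itertools import combinations

def fitness(state):
    """Number of non-attacking queen pairs (max 8*7/2 = 28).
    state[i] = row of the queen in column i."""
    attacking = sum(1 for (i, ri), (j, rj) in
                    combinations(enumerate(state), 2)
                    if ri == rj or abs(ri - rj) == abs(i - j))
    return 28 - attacking

population = ["24748552", "32752411", "24415124", "32543213"]
scores = [fitness([int(c) for c in s]) for s in population]
print(scores)                                    # [24, 23, 20, 11]
total = sum(scores)
print([round(100 * f / total) for f in scores])  # [31, 29, 26, 14]
```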