The document discusses local search algorithms as an alternative to classical search algorithms when the path to the goal state is irrelevant. It describes hill-climbing search, which iteratively moves to a neighboring state with improved value. Hill-climbing can get stuck at local optima. Variations like simulated annealing and stochastic hill-climbing incorporate randomness to avoid local optima. Genetic algorithms use techniques inspired by evolution like selection, crossover and mutation to search the state space. The document uses examples like the 8-queens and 8-puzzle problems to illustrate local search concepts.
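The hill-climbing idea summarized above can be sketched in a few lines of Python for the 8-queens example. This is a minimal illustration; the column-wise state encoding and the conflict-counting objective are my own assumptions, not taken from the document:

```python
import random

def conflicts(state):
    """Count attacking queen pairs; state[c] is the row of the queen in column c."""
    n = len(state)
    count = 0
    for c1 in range(n):
        for c2 in range(c1 + 1, n):
            # same row, or same diagonal (row gap equals column gap)
            if state[c1] == state[c2] or abs(state[c1] - state[c2]) == c2 - c1:
                count += 1
    return count

def hill_climb(state):
    """Steepest-ascent hill climbing: move one queen within its column to the
    neighbor with the fewest conflicts; stop at a local optimum."""
    while True:
        best, best_val = state, conflicts(state)
        for col in range(len(state)):
            for row in range(len(state)):
                if row != state[col]:
                    neighbor = state[:col] + [row] + state[col + 1:]
                    v = conflicts(neighbor)
                    if v < best_val:
                        best, best_val = neighbor, v
        if best == state:          # no improving neighbor: local optimum
            return state
        state = best

random.seed(0)
start = [random.randrange(8) for _ in range(8)]
result = hill_climb(start)
print(conflicts(start), '->', conflicts(result))
```

As the summary notes, this always terminates but may stop at a local optimum with conflicts remaining; random restarts or simulated annealing are common remedies.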
This document discusses different types of intelligent agents and problem solving techniques. It describes five types of agent programs: simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, and learning agents. Goal-based agents use problem formulation, search, and execution to solve problems by finding a sequence of actions to reach a goal state. Several examples of well-defined problems are provided, including traveling in Romania, the vacuum world problem, and the 8-puzzle problem.
The document discusses various search strategies for solving problems like the 8-puzzle game. It defines key concepts like search, state space graphs, search trees, step costs, path costs, and solutions. It explains uninformed and informed search strategies. Specifically it covers breadth-first search, uniform-cost search, and depth-first search algorithms. It provides examples of applying these algorithms to sample state space graphs and discusses their time and space complexities.
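As a small illustration of breadth-first search over a state-space graph of the kind described above, here is a Python sketch; the example graph is hypothetical:

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: explores the state-space graph level by level,
    returning a path with the fewest edges (not the lowest weighted cost)."""
    frontier = deque([start])
    parent = {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:      # reconstruct path via parent links
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph.get(node, []):
            if nbr not in parent:        # skip already-reached states
                parent[nbr] = node
                frontier.append(nbr)
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
print(bfs_path(graph, 'A', 'E'))  # ['A', 'B', 'D', 'E']
```

Uniform-cost search follows the same pattern with a priority queue ordered by path cost instead of a FIFO queue.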
Lecture 14: Heuristic Search - A* Algorithm (Hema Kashyap)
A* is a search algorithm that finds the shortest path through a graph to a goal state. It combines the best aspects of Dijkstra's algorithm and best-first search. A* uses a heuristic function to evaluate the cost of a path passing through each state to guide the search towards the lowest cost goal state. The algorithm initializes the start state, then iteratively selects the lowest cost node from its open list to expand, adding successors to the open list until it finds the goal state. A* is admissible, complete, and optimal under certain conditions relating to the heuristic function and graph structure.
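The open-list loop described here can be sketched as follows. The weighted graph and heuristic values are hypothetical; the heuristic is chosen to be admissible (it never overestimates the remaining cost), which is one of the conditions for optimality mentioned above:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search with f(n) = g(n) + h(n): g is the cost so far, h estimates
    the remaining cost. graph maps node -> [(neighbor, step_cost)]."""
    open_list = [(h(start), 0, start, [start])]   # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # lowest-f node first
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in closed:
                g2 = g + cost
                heapq.heappush(open_list, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None, float('inf')

graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 6)], 'B': [('G', 2)]}
h = {'S': 4, 'A': 3, 'B': 2, 'G': 0}.get   # admissible heuristic
path, cost = a_star(graph, h, 'S', 'G')
print(path, cost)  # ['S', 'A', 'B', 'G'] 5
```

With h = 0 everywhere this degenerates to Dijkstra's algorithm, which is the connection the summary draws.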
The document provides an introduction to artificial intelligence (AI). It discusses the goals of understanding intelligent behavior and building intelligent agents. It then examines four perspectives on defining AI: (1) acting humanly vs thinking rationally, and (2) focusing on thought processes vs behavior. Each perspective is associated with different approaches to AI like the Turing Test, cognitive modeling, laws of thought, and rational agents. The document also outlines some current capabilities of AI like robotic vehicles, speech recognition, game playing, and machine translation. It introduces the concepts of agents, percepts, and rational agents that aim to maximize performance. Finally, it categorizes different types of environments that agents can operate in.
This presentation discusses the following topics:
What is A-Star (A*) Algorithm in Artificial Intelligence?
A* Algorithm Steps
Why is A* Search Algorithm Preferred?
A* and Its Basic Concepts
What is a Heuristic Function?
Admissibility of the Heuristic Function
Consistency of the Heuristic Function
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentations for upcoming exams.
We connect Students who have an understanding of course material with Students who need help.
Benefits:
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer-developed notes that break down lecture and study material in a way that they can understand.
# Students can earn better grades, save time and study effectively
Our Vision & Mission – Simplifying Students' Lives
Our Belief – “The great breakthrough in your life comes when you realize it, that you can learn anything you need to learn; to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
This document discusses various heuristic search algorithms including generate-and-test, hill climbing, best-first search, problem reduction, and constraint satisfaction. Generate-and-test involves generating possible solutions and testing if they are correct. Hill climbing involves moving in the direction that improves the state based on a heuristic evaluation function. Best-first search evaluates nodes and expands the most promising node first. Problem reduction breaks problems into subproblems. Constraint satisfaction views problems as sets of constraints and aims to constrain the problem space as much as possible.
This document discusses various heuristic search techniques, including generate-and-test, hill climbing, best first search, and simulated annealing. Generate-and-test involves generating possible solutions and testing them until a solution is found. Hill climbing iteratively improves the current state by moving in the direction of increased heuristic value until no better state can be found or a goal is reached. Best first search expands the most promising node first based on heuristic evaluation. Simulated annealing is based on hill climbing but allows moves to worse states probabilistically to escape local maxima.
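The simulated-annealing behavior described here, accepting worse moves with a probability that shrinks as the temperature cools, can be sketched as follows. The objective, neighborhood, and cooling schedule are illustrative assumptions:

```python
import math
import random

def simulated_annealing(f, x, neighbors, T0=10.0, cooling=0.95, steps=500):
    """Maximize f: always accept improving moves; accept worsening moves with
    probability exp(delta / T), so the search can escape local maxima."""
    best = x
    T = T0
    for _ in range(steps):
        x2 = random.choice(neighbors(x))
        delta = f(x2) - f(x)
        if delta > 0 or random.random() < math.exp(delta / T):
            x = x2                       # move (possibly downhill)
        if f(x) > f(best):
            best = x                     # remember the best state seen
        T = max(T * cooling, 1e-6)       # cool the temperature
    return best

# Toy landscape (hypothetical): a local maximum at index 2, global at index 8.
values = [0, 1, 3, 1, 0, 2, 4, 6, 9, 5]
f = lambda i: values[i]
neighbors = lambda i: [j for j in (i - 1, i + 1) if 0 <= j < len(values)]

random.seed(1)
best_i = simulated_annealing(f, 0, neighbors)
print(values[best_i])
```

With enough steps and a slow enough cooling schedule, the search typically escapes the local maximum at index 2 and finds the global maximum at index 8, whereas plain hill climbing started at index 0 would stop at index 2.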
The document discusses various problem solving techniques in artificial intelligence, including different types of problems, components of well-defined problems, measuring problem solving performance, and different search strategies. It describes single-state and multiple-state problems, and defines the key components of a problem including the data type, operators, goal test, and path cost. It also explains different search strategies such as breadth-first search, uniform cost search, depth-first search, depth-limited search, iterative deepening search, and bidirectional search.
What is artificial intelligence, Hill Climbing Procedure, State Space Representation and Search, classify problems in AI, AO* Algorithm
Hill climbing is a heuristic search algorithm used to find optimal solutions to mathematical problems. It works by starting with an initial solution and iteratively moving to a neighboring solution that improves the value of an objective function until a local optimum is reached. However, hill climbing may not find the global optimum solution and can get stuck in local optima. Variants include simple hill climbing, steepest ascent hill climbing, and stochastic hill climbing.
This document discusses several search strategies including uninformed search, breadth-first search, depth-first search, uniform cost search, iterative deepening search, and bi-directional search. It provides algorithms and examples to explain how each strategy works. Key points include: breadth-first search visits nodes by level of depth; depth-first search generates nodes along the largest depth first before moving up; uniform cost search expands the lowest cost node; and iterative deepening search avoids infinite depth by searching each level iteratively and increasing the depth limit.
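The iterative-deepening idea summarized above, running depth-limited DFS with growing limits to get DFS's low memory use and BFS's shallowest-solution guarantee, can be sketched like this (the example graph is hypothetical):

```python
def depth_limited(graph, node, goal, limit, path):
    """DFS that stops expanding below the given depth limit."""
    if node == goal:
        return path
    if limit == 0:
        return None
    for nbr in graph.get(node, []):
        if nbr not in path:                       # avoid cycles along the path
            found = depth_limited(graph, nbr, goal, limit - 1, path + [nbr])
            if found:
                return found
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Run depth-limited DFS with limits 0, 1, 2, ... — shallow levels are
    re-expanded, but the cost is dominated by the deepest iteration."""
    for limit in range(max_depth + 1):
        found = depth_limited(graph, start, goal, limit, [start])
        if found:
            return found
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': ['F'], 'E': ['F']}
print(iterative_deepening(graph, 'A', 'F'))  # ['A', 'B', 'D', 'F']
```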
This document discusses intelligent agents and their environments. It covers:
1) Intelligent agents are entities that perceive their environment through sensors and act upon the environment through actuators. They map percept sequences to actions.
2) A rational agent should select actions that are expected to maximize its performance measure given the percept sequence and its prior knowledge. Performance measures evaluate how well the agent solves its task.
3) Agent environments can have different properties such as being fully or partially observable, deterministic or stochastic, episodic or sequential, static or dynamic, discrete or continuous, and single-agent or multi-agent. The simplest is fully observable, deterministic, etc. but most real environments are more complex.
I. Hill Climbing Algorithm II. Steepest Hill Climbing Algorithm (vikas dhakane)
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
UNIT - I PROBLEM SOLVING AGENTS and EXAMPLES.pptx.pdf (JenishaR1)
Replicate human intelligence
Solve Knowledge-intensive tasks
An intelligent connection of perception and action
Building a machine which can perform tasks that require human intelligence, such as:
Proving a theorem
Playing chess
Planning a surgical operation
Driving a car in traffic
Creating systems that can exhibit intelligent behavior, learn new things by themselves, demonstrate, explain, and advise their users.
What Comprises Artificial Intelligence?
Artificial Intelligence is not just a branch of computer science; it is a vast field that draws on many other disciplines. To create AI, we first need to understand how intelligence is composed: intelligence is an intangible property of the brain, a combination of reasoning, learning, problem solving, perception, language understanding, and more.
To achieve these capabilities in a machine or software system, Artificial Intelligence draws on the following disciplines:
Mathematics
Biology
Psychology
Sociology
Computer Science
Neuroscience
Statistics
Advantages of Artificial Intelligence
Following are some main advantages of Artificial Intelligence:
High accuracy with fewer errors: AI systems make fewer errors and achieve high accuracy because they take decisions based on prior experience and information.
High speed: AI systems can make decisions very quickly, which is why an AI system can beat a chess champion at chess.
High reliability: AI machines are highly reliable and can perform the same action many times with consistent accuracy.
Useful in risky areas: AI machines can help in situations such as defusing a bomb or exploring the ocean floor, where employing a human would be risky.
Digital assistants: AI is very useful for providing digital assistants to users; for example, many e-commerce websites use AI to show products that match customer requirements.
Useful as a public utility: AI can serve public needs, such as self-driving cars that make journeys safer and hassle-free, facial recognition for security, and natural language processing for communicating with humans in their own language.
Hill Climbing Algorithm in Artificial Intelligence (sandeep54552)
The hill climbing algorithm is a local search technique used to find the optimal solution to a problem. It works by starting with an initial solution and iteratively moving to a neighboring solution that has improved value until no better solutions can be found. Simple hill climbing only considers one neighbor at a time, while steepest ascent examines all neighbors and chooses the one closest to the optimal solution. The algorithm can get stuck at local optima rather than finding the global optimum. Techniques like simulated annealing incorporate randomness to help escape local optima.
Local search algorithms operate by examining the current node and its neighbors. They are suitable for problems where the solution is the goal state itself rather than the path to get there. Hill-climbing and simulated annealing are examples of local search algorithms. Hill-climbing continuously moves to higher value neighbors until a local peak is reached. Simulated annealing also examines random moves and can accept moves to worse states based on probability. Both aim to find an optimal or near-optimal solution but can get stuck in local optima.
Means End Analysis (MEA) in Artificial.pptx (suchita74)
Means end analysis (MEA) is a technique used in AI programs to solve problems by defining goals and establishing action plans to reach those goals. MEA evaluates the differences between the current state and target goal state, then decides the best actions to undertake to reach the end goal through a combination of forward and backward strategies. It works by first evaluating the current state, defining a target goal and splitting it into sub-goals linked to executable actions, then undertaking intermediate steps by applying operators to reduce differences between states until the target is achieved.
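The difference-reduction loop described above can be sketched as a small recursive planner: find the missing goal conditions, pick an operator whose effects supply one of them, and recursively achieve that operator's preconditions first. The set-of-facts state representation and the household operators are hypothetical:

```python
def means_end(state, goal, operators, depth=10):
    """Return (final_state, plan) reaching goal from state, or None.
    Operators are (name, preconditions, additions) over sets of facts."""
    if goal <= state:
        return state, []
    if depth == 0:
        return None                               # give up: depth bound hit
    for name, pre, add in operators:
        if add & (goal - state):                  # operator reduces the difference
            sub = means_end(state, pre, operators, depth - 1)   # achieve preconditions
            if sub is not None:
                mid_state, sub_plan = sub
                new_state = mid_state | add       # apply the operator's effects
                rest = means_end(new_state, goal, operators, depth - 1)
                if rest is not None:
                    final_state, rest_plan = rest
                    return final_state, sub_plan + [name] + rest_plan
    return None

# Hypothetical household-robot domain.
operators = [
    ('walk-to-door', {'at-desk'},   {'at-door'}),
    ('open-door',    {'at-door'},   {'door-open'}),
    ('walk-outside', {'door-open'}, {'outside'}),
]
state, plan = means_end({'at-desk'}, {'outside'}, operators)
print(plan)  # ['walk-to-door', 'open-door', 'walk-outside']
```

This sketch omits delete effects and operator-choice heuristics, but it shows the core MEA pattern: the backward chaining on preconditions is the "backward strategy" and applying operators is the "forward strategy" mentioned in the summary.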
The document discusses problem solving by searching. It describes problem solving agents and how they formulate goals and problems, search for solutions, and execute solutions. Tree search algorithms like breadth-first search, uniform-cost search, and depth-first search are described. Example problems discussed include the 8-puzzle, 8-queens, and route finding problems. The strategies of different uninformed search algorithms are explained.
The document discusses informed search techniques that use heuristic information to guide the search for a solution more efficiently. It describes how heuristic information about the problem domain can help constrain the search space. Hill climbing and best-first search are two informed search strategies discussed. Hill climbing iteratively moves to successor states with improved heuristic values until a local optimum is reached. Best-first search maintains an open list of promising nodes to explore and prioritizes expanding nodes with the best heuristic values to avoid getting stuck in local optima.
This document discusses production systems and different search strategies used in artificial intelligence. A production system consists of rules, knowledge databases, a control strategy, and a rule applier. Control strategies must cause motion and be systematic. Breadth-first search explores all neighbors of the initial node before moving to the next level, while depth-first search explores as far as possible along each branch before backtracking. Heuristic search uses heuristics or rules of thumb to guide the search towards the most promising paths.
Heuristic search algorithms use heuristics, or problem-specific knowledge, to guide the search for a solution. Some heuristics guarantee completeness while others may sacrifice completeness to improve efficiency. A heuristic function estimates the cost to reach the goal state from the current state. For example, in the 8-puzzle problem the Manhattan distance heuristic estimates this cost as the sum of the distances each misplaced tile would need to move to reach its goal position. The example shows applying the Manhattan distance heuristic to guide the search for a solution to instances of the 8-puzzle problem.
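The Manhattan-distance heuristic for the 8-puzzle mentioned here is easy to state in code. The tuple-based board encoding is an assumption for illustration:

```python
def manhattan(state, goal):
    """Manhattan-distance heuristic for the 8-puzzle: for every tile (blank
    excluded), sum the horizontal plus vertical moves needed to reach its
    goal position. States are 9-tuples read row by row; 0 is the blank."""
    total = 0
    for tile in range(1, 9):
        i, j = divmod(state.index(tile), 3)   # current (row, col)
        gi, gj = divmod(goal.index(tile), 3)  # goal (row, col)
        total += abs(i - gi) + abs(j - gj)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 7, 0, 8)   # one move away: slide tile 8 left
print(manhattan(state, goal))  # 1
```

Because each move slides one tile one square, the heuristic never overestimates the true number of moves, so it is admissible for use with A*.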
A star algorithm | A* Algorithm in Artificial Intelligence | Edureka (Edureka!)
YouTube Link: https://youtu.be/amlkE0g-YFU
This Edureka PPT on 'A Star Algorithm' teaches you all about the A* algorithm: its uses, advantages and disadvantages, and much more. It also shows how the algorithm can be implemented in practice and includes a comparison between Dijkstra's algorithm and A*.
Problem solving
Problem formulation
Search Techniques for Artificial Intelligence
Classification of AI Search Strategies
What is a Search Strategy?
Defining a Search Problem
State Space Graph versus Search Trees
Graph vs. Tree
Problem Solving by Search
This document discusses adversarial search in artificial intelligence. It provides an overview of games and introduces the minimax algorithm. The minimax algorithm is used to determine optimal strategies in two-player adversarial games by recursively considering all possible moves by both players. Tic-tac-toe is given as an example game where minimax can be applied to choose the best first move. The properties and limitations of the minimax algorithm are also summarized.
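A minimal minimax for tic-tac-toe, as described in this summary, might look like the following sketch; the board encoding and the +1/-1/0 scoring convention are my own choices:

```python
def winner(b):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for i, j, k in lines:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(board, player):
    """Return (value, move) for the player to act: +1 if X can force a win,
    -1 if O can, 0 for a forced draw. X maximizes, O minimizes."""
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, c in enumerate(board) if c is None]
    if not moves:
        return 0, None                       # board full: draw
    results = []
    for m in moves:
        board[m] = player                    # try the move...
        v, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None                      # ...and undo it
        results.append((v, m))
    return max(results) if player == 'X' else min(results)

# From an empty board, perfect play by both sides is a draw.
value, move = minimax([None] * 9, 'X')
print(value)  # 0
```

This brute-force version searches the full game tree; the alpha-beta pruning usually discussed alongside minimax cuts that work down without changing the result.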
This is a PPT on heuristic search techniques, also known as informed search techniques.
This presentation discusses only three search techniques; there are many more, but the most important ones are covered here.
This document discusses various classical and advanced optimization techniques. It begins with an overview of classical techniques like single/multivariable optimization and methods using Lagrange multipliers or Kuhn-Tucker conditions. It then describes numerical methods including linear programming, integer programming, and nonlinear programming. Finally, it outlines advanced techniques such as hill climbing, simulated annealing, genetic algorithms, ant colony optimization, and how they draw inspiration from natural processes to solve optimization problems.
This document discusses various classical and advanced optimization techniques. It begins with an overview of classical techniques like single/multivariable optimization and methods using Lagrange multipliers or Kuhn-Tucker conditions. Numerical methods are then introduced, including linear programming, integer programming, and nonlinear programming. Advanced techniques like hill climbing, simulated annealing, genetic algorithms, and ant colony optimization are also summarized. These optimization methods are inspired by natural processes and use techniques such as local search, positive feedback, and path pheromones to find approximate solutions.
Traveling Salesman Problem (TSP) is a kind of NPHard problem which cant be solved in polynomial time for
asymptotically large values of n. In this paper a balanced combination of Genetic algorithm and Simulated Annealing is used. To
improve the performance of finding optimal solution from huge
search space, we have incorporated the use of tournament and
rank as selection operator. And Inver-over operator Mechanism
for crossover and mutation . To illustrate it more clearly an
implementation in C++ (4.9.9.2) has been done.
Index Terms—Genetic Algorithm (GA) , Simulated Annealing
(SA) , Inver-over operator , Lin-Kernighan algorithm , selection
operator , crossover operator , mutation operator.
- The document discusses various problem solving techniques in artificial intelligence including search strategies like BFS, DFS, A*, heuristic search, and beyond classical search methods.
- It describes local search algorithms like hill climbing, simulated annealing, and genetic algorithms that are used for large search spaces and optimization problems.
- Various hill climbing techniques - simple, steepest ascent, stochastic, and random restart hill climbing - are explained along with state space diagrams and concepts like local maxima.
- The next session will cover local search in continuous spaces.
The document discusses various optimization techniques and algorithms including genetic algorithms, artificial neural networks, and data analytics. Specifically, it covers genetic algorithms in more detail including the basic concepts of populations of chromosomes evolving over generations using processes like crossover, mutation, and selection to optimize an objective function. It also discusses other metaheuristic algorithms like simulated annealing, particle swarm optimization, and ant colony optimization which are inspired by natural processes and use stochastic components to find robust solutions.
The document discusses binary genetic algorithms. It begins by motivating GAs as able to find good enough solutions fast enough compared to traditional methods. It then describes GAs as optimization techniques based on genetics and natural selection. The key steps of a binary GA are described as population initialization, fitness calculation, selection, crossover, mutation, and termination. Example applications like the travelling salesman problem are provided. Advantages include not requiring derivatives, always finding an answer, and being faster than traditional methods. Limitations include high computational cost of fitness calculations and potential lack of convergence to the optimal solution.
Local search algorithms aim to find configurations that satisfy constraints by starting with a single state and iteratively moving to neighboring states. Hill climbing is a basic local search technique where the algorithm always chooses the neighbor with the best value according to a heuristic function. However, hill climbing can get stuck in local optima. Variants like simulated annealing, tabu search, and genetic algorithms incorporate randomness to help escape local optima.
1) Hill climbing is a local search algorithm that continuously moves in the direction of increasing value to find the optimal solution. It terminates when no neighbor has a higher value.
2) It has a linear time complexity but constant space complexity. It is used to optimize mathematical problems like the traveling salesman problem.
3) There are different types of hill climbing algorithms like simple hill climbing, steepest ascent hill climbing, and stochastic hill climbing that vary in how they evaluate neighbor states.
Reinforcement learning algorithms like Q-learning, SARSA, DQN, and A3C help agents learn optimal behaviors through trial-and-error interactions with an environment. Q-learning uses a model-free approach to estimate state-action values without a transition model. SARSA is similar to Q-learning but is on-policy, learning the value function from the current policy. DQN approximates Q-values using a neural network to handle large state spaces. A3C uses multiple asynchronous agents interacting with individual environments to learn diversified policies through an actor-critic framework.
The pure heuristic search algorithm maintains an open list of generated nodes that have not been expanded and a closed list of nodes that have. It begins with the initial state on the open list and at each cycle expands the node with the minimum heuristic value, generating its children and placing them on the open list in heuristic order. This continues until a goal state is expanded. Heuristic search sacrifices completeness for efficiency by using heuristics to guide the search towards the goal. Examples given include the 15-puzzle, maze navigation, and the missionaries and cannibals river crossing problem.
The pure heuristic search algorithm maintains an open list of generated nodes that have not been expanded and a closed list of nodes that have. It begins with the initial state on the open list and at each cycle expands the node with the minimum heuristic value, generating its children and placing them on the open list in heuristic order. This continues until a goal state is expanded. Heuristic search sacrifices completeness for efficiency by using heuristics to guide the search towards the goal. Examples given include the 15-puzzle, maze navigation, and the missionaries and cannibals river crossing problem.
Hill climbing is a local search algorithm that starts with a random solution and iteratively makes small changes to improve the solution. It terminates when no further improvements can be made. Hill climbing can get stuck at local optima rather than finding the global optimum. Simulated annealing is similar to hill climbing but allows occasional "downhill moves" that worsen the solution based on a probability function involving the change in solution quality and temperature parameter. The temperature is gradually decreased, reducing the probability of downhill moves over time. This helps simulated annealing avoid local optima and find better solutions than hill climbing.
This document summarizes a presentation on natural computing. It begins by defining natural computing as a field that investigates computational systems and algorithms inspired by nature. It then discusses various types of natural computing, including evolutionary computing, neural computing, swarm computing, DNA computing, artificial immune systems, and artificial life. For each type, it provides an overview of the inspiration from nature, basic principles, and examples of applications. The document concludes by discussing the philosophy of natural computing as a multidisciplinary field.
This document discusses techniques for ordering variables and values in constraint satisfaction problems to improve search efficiency. It describes heuristics like first-fail and most-constraints for variable ordering that aim to tackle harder constraints earlier. For value ordering, it recommends heuristics like most-supporting-values and lowest-cost that aim to succeed earlier by choosing values more likely to extend the partial solution. The document also introduces local search algorithms like hill-climbing and min-conflicts that incrementally modify assignments to reach solutions.
The document discusses local search algorithms like hill-climbing for solving optimization problems. It explains that hill-climbing iteratively moves to successor states with improved evaluations until a local optimum is reached. However, hill-climbing often gets stuck in local optima and fails to find global optima. The document proposes methods like allowing sideways moves, random restarts, and stochastic selection to help hill-climbing escape local optima and improve performance.
This document discusses local search algorithms. It begins by introducing the topic of local search algorithms and some examples of problems they can be applied to, such as the n-queens problem. It then describes several specific local search algorithms in more detail, including hill-climbing search, gradient descent, simulated annealing search, local beam search, and genetic algorithms. It also discusses techniques like random restart wrappers and tabu search wrappers that can help local search algorithms avoid getting stuck in local optima.
1) The document discusses the Longest Increasing Subsequence (LIS) problem to find the longest subsequence of a given sequence where elements are in increasing order. It provides an example LIS of length 6 for a sample input array. A dynamic programming table is used to store the LIS value for each array element.
2) The problem of counting the number of ways to make change for an amount N using coins of values in S is discussed. A 2D dynamic programming table is used where one dimension tracks coins and the other tracks the change value.
3) The 0-1 Knapsack problem is described, to find the maximum value subset of items fitting in a knapsack of capacity
The document discusses dynamic programming and provides examples of problems that can be solved using dynamic programming techniques. It describes characteristics of dynamic programming problems such as overlapping subproblems and optimal substructure properties. It also describes two common approaches to dynamic programming - top-down with memorization and bottom-up with tabulation. Finally, it lists 12 practice problems related to topics like staircase problem, tiling problem, friends pairing problem, house thief problem, minimum jumps problem, Catalan numbers, binomial coefficients, permutation coefficients, subset sum problem, 0/1 knapsack problem, longest common subsequence and edit distance that can be solved using dynamic programming.
The document discusses key graph concepts like connected graphs, connected components, strongly connected graphs, and strongly connected components. It also covers disjoint set data structures, including the operations of make-set, union, and find-set. It describes how linked lists and disjoint set forests can be used to represent disjoint sets and discusses techniques like union by rank and path compression that allow disjoint set operations to run in nearly linear time. Finally, it defines minimum spanning trees and covers Kruskal's and Prim's algorithms for finding minimum spanning trees in graphs.
The document discusses graphs and their representations using adjacency matrices and lists. It also describes different algorithms for solving the single-source shortest path problem on graphs, including breadth-first search (BFS), Dijkstra's algorithm, and Bellman-Ford algorithm. BFS runs in O(V+E) time and works when edge weights are equal. Dijkstra's algorithm uses a min-priority queue and runs in O(ElogV) time when implemented with a Fibonacci heap, handling graphs with positive edge weights. Bellman-Ford works for graphs with positive or negative edge weights, running in O(VE) time.
The document discusses three algorithms:
1) Quicksort partitions an array around a pivot element, recursively sorting the left and right subarrays.
2) Finding the maximum subarray sum divides an array in half at each step, calculating sums within and across the halves.
3) Finding the rotation count of a rotated sorted array returns the index of the minimum element, representing the number of rotations.
The document discusses the divide and conquer algorithm design paradigm and some examples of its applications. It describes divide and conquer as having three key steps: 1) divide the problem into subproblems, 2) solve the subproblems recursively, and 3) combine the solutions. It then lists several problems and their divide and conquer solutions, including merge sort, quick sort, calculating powers, randomized binary search, fast multiplication, finding maximum subarray sums, counting array inversions, and finding peak elements in an array.
The document provides examples of recursive code to solve various problems and asks the reader to write similar recursive code to solve related problems. It includes code to find the maximum value in an array, check if a string is a palindrome, calculate the greatest common divisor (GCD) of integers, solve the Tower of Hanoi problem, and calculate the number of ways to cover a distance moving units of 1, 2, or 3 each move. The reader is asked to write code for related problems such as finding the maximum digit in an integer, checking if an integer array is a palindrome, calculating the GCD of an integer array, solving Tower of Hanoi with two intermediate poles, and calculating ways to cover a distance moving specified
The document discusses recursion, defining it as a problem-solving technique where problems are solved by reducing them to smaller problems of the same type. It provides examples of different types of recursion like direct, indirect, tail, and tree recursion. It also lists 15 recursive problems and their solutions including finding maximum/minimum in an array, calculating factorials, Fibonacci numbers, and solving subset sum and coin change problems.
1. The document describes 3 programming tasks:
- Read characters from an adjacency list file and print them
- Build an adjacency list graph data structure from the file
- Implement a queue using two stacks
2. It provides code templates and instructions for building a queue that implements enqueue by pushing to one stack and dequeue by popping between two stacks.
3. The tasks are to read characters from a file into a graph, build the graph data structure, and implement a queue with two stacks that demonstrates pushing and popping values.
Mohammad Imam Hossain is a lecturer in the Department of Computer Science and Engineering at UIU. His email is provided. The document discusses C++ including its history and structure, input/output, file I/O, STL containers like vector, stack, queue and map, and strings. Key STL concepts covered are containers, iterators, insertion, deletion, searching and accessing elements.
The document discusses transactions and transaction management in database systems. It defines transactions as logical units of work that must follow the ACID properties of atomicity, consistency, isolation, and durability. Transactions access and update data using operations like read and write. The transaction model ensures concurrent transactions execute reliably by enforcing serializability through techniques like conflict analysis and precedence graphs. Maintaining serializability guarantees the isolation property and prevents anomalous behavior from transaction interleaving.
This slide explains the conversion procedure from ER Diagram to Relational Schema.
1. Entity set to Relation
2. Relationship set to Relation
3. Attributes to Columns, Primary key, Foreign Keys
1. What is Entity Relationship Model
2. Entity and Entity Set
3. Relationship and Relationship Set
4. Attributes and it's kinds
5. Participation Constraints and Mapping Cardinality
6. Aggregation, Specialization, and Generalization
7. Some Sample ERD models
This note includes the followings:
- Database Create, Drop Operations
- Database Table Create, Drop Operations
- Database Table Alter Operation
- Data insertion
- Data deletion
- Existing data update
- Searching data from data table (showing all record, specific columns, specific rows, column aliasing, sorting data, limiting data, distinct data)
- Aggregate functions
- Group by clause
- Having clause
- Types of table joins
- Table aliasing, Inner Join, Left/Right Join, Self Join
- Subquery operation (scalar subquery, column subquery, row subquery, correlated subquery, derived table)
This note contains some sample MySQL query practices based on the HR Schema database. The practice sections are from the following categories:
- DDL statements
- Basic Select statements
- Aggregate operations
- Join operations
This lecture slide contains:
- Difference between FA, PDA and TM
- Formal definition of TM
- TM transition function and configuration
- Designing TM for different languages
- Simulating TM for different strings
This slide contains,
1) Some terminologies like yields, derives, word, derivation
2) Leftmost and Rightmost derivation
3) Ambiguity checking
4) Parse tree generation and ambiguity checking
🔥🔥🔥🔥🔥🔥🔥🔥🔥
إضغ بين إيديكم من أقوى الملازم التي صممتها
ملزمة تشريح الجهاز الهيكلي (نظري 3)
💀💀💀💀💀💀💀💀💀💀
تتميز هذهِ الملزمة بعِدة مُميزات :
1- مُترجمة ترجمة تُناسب جميع المستويات
2- تحتوي على 78 رسم توضيحي لكل كلمة موجودة بالملزمة (لكل كلمة !!!!)
#فهم_ماكو_درخ
3- دقة الكتابة والصور عالية جداً جداً جداً
4- هُنالك بعض المعلومات تم توضيحها بشكل تفصيلي جداً (تُعتبر لدى الطالب أو الطالبة بإنها معلومات مُبهمة ومع ذلك تم توضيح هذهِ المعلومات المُبهمة بشكل تفصيلي جداً
5- الملزمة تشرح نفسها ب نفسها بس تكلك تعال اقراني
6- تحتوي الملزمة في اول سلايد على خارطة تتضمن جميع تفرُعات معلومات الجهاز الهيكلي المذكورة في هذهِ الملزمة
واخيراً هذهِ الملزمة حلالٌ عليكم وإتمنى منكم إن تدعولي بالخير والصحة والعافية فقط
كل التوفيق زملائي وزميلاتي ، زميلكم محمد الذهبي 💊💊
🔥🔥🔥🔥🔥🔥🔥🔥🔥
THE SACRIFICE HOW PRO-PALESTINE PROTESTS STUDENTS ARE SACRIFICING TO CHANGE T...indexPub
The recent surge in pro-Palestine student activism has prompted significant responses from universities, ranging from negotiations and divestment commitments to increased transparency about investments in companies supporting the war on Gaza. This activism has led to the cessation of student encampments but also highlighted the substantial sacrifices made by students, including academic disruptions and personal risks. The primary drivers of these protests are poor university administration, lack of transparency, and inadequate communication between officials and students. This study examines the profound emotional, psychological, and professional impacts on students engaged in pro-Palestine protests, focusing on Generation Z's (Gen-Z) activism dynamics. This paper explores the significant sacrifices made by these students and even the professors supporting the pro-Palestine movement, with a focus on recent global movements. Through an in-depth analysis of printed and electronic media, the study examines the impacts of these sacrifices on the academic and personal lives of those involved. The paper highlights examples from various universities, demonstrating student activism's long-term and short-term effects, including disciplinary actions, social backlash, and career implications. The researchers also explore the broader implications of student sacrifices. The findings reveal that these sacrifices are driven by a profound commitment to justice and human rights, and are influenced by the increasing availability of information, peer interactions, and personal convictions. The study also discusses the broader implications of this activism, comparing it to historical precedents and assessing its potential to influence policy and public opinion. The emotional and psychological toll on student activists is significant, but their sense of purpose and community support mitigates some of these challenges. 
However, the researchers call for acknowledging the broader Impact of these sacrifices on the future global movement of FreePalestine.
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
CapTechTalks Webinar Slides June 2024 Donovan Wright.pptxCapitolTechU
Slides from a Capitol Technology University webinar held June 20, 2024. The webinar featured Dr. Donovan Wright, presenting on the Department of Defense Digital Transformation.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
2. Classical Search
The search problems we considered under uninformed/informed search have the following properties:
▸Observable – if an agent’s sensors give it access to the complete state of the environment at each point in time, we say that the task environment is fully observable.
▸Deterministic – if the next state of the environment is completely determined by the current state and the action executed by the agent, we say the environment is deterministic.
▸Known – in a known environment, the outcomes of all actions are given, and the agent does not have to learn how the environment works in order to make good decisions.
Classical search algorithms explore the search space systematically, and the solution to a search problem is a sequence of actions representing a path from an initial state to the goal state.
Mohammad Imam Hossain | Lecturer, Dept. of CSE | UIU
3. Local Search
▸There exist some problems where the path to the goal is irrelevant.
▸For example, in the 8-queens problem what matters is the final configuration of queens, not the order in which they are added.
▸If the path to the goal does not matter, we might consider a different class of algorithms (local search), ones that do not worry about paths at all.
▸Local search algorithms evaluate and modify one or more current states rather than systematically exploring paths from an initial state.
▸Local search:
- Keeps track of a single current state
- Moves only to neighboring states
- Does not retain the paths followed by the search
▸Advantages:
- Uses very little memory
- Can often find reasonable solutions in large or infinite state spaces for which systematic algorithms are unsuitable
- Useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function
4. State-space Landscape
Components >>
▸Location – state
▸Elevation – heuristic cost function or objective function
An optimal algorithm always finds a global maximum (for an objective function) or a global minimum (for a heuristic cost function).
▸Local maximum – a peak that is higher than each of its neighboring states but lower than the global maximum
▸Plateau – a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible.
5. Hill-climbing Search >> Steepest-ascent Version
▸The hill-climbing search algorithm is simply a loop that continually moves in the direction of increasing value, that is, uphill.
▸It terminates when it reaches a peak where no neighbor has a higher value.
▸It does not maintain a search tree, so the data structure for the current node need only record the state and the value of the objective function/heuristic cost function.
▸If multiple successors share the best value, it can choose among them at random.
▸It does not look ahead beyond the immediate neighbors of the current state.
▸Also known as greedy local search.
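The loop described above can be sketched in Python. This is a minimal illustration, not the slide's exact pseudocode; the function names (`hill_climbing`, `neighbors`, `value`) are our own.

```python
import random

def hill_climbing(initial, neighbors, value):
    """Steepest-ascent hill climbing: move to the best neighbor until
    no neighbor has a higher value than the current state."""
    current = initial
    while True:
        candidates = neighbors(current)
        best_value = max(value(n) for n in candidates)
        if best_value <= value(current):
            return current              # peak reached: no neighbor is higher
        # break ties randomly among equally good best successors
        best = [n for n in candidates if value(n) == best_value]
        current = random.choice(best)

# Toy objective: climb to the peak of f(x) = -(x - 3)^2 on the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(0, step, f))  # climbs 0 -> 1 -> 2 -> 3 and stops
```

Note that the loop only ever inspects the immediate neighbors, which is exactly why it can stop at a local maximum.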
6. Hill-climbing Search >> Example
▸8-queens problem:
- Local search algorithms use a complete-state formulation, where each state has 8 queens on the board, one per column.
- The successors of a state are all possible states generated by moving a single queen to another square in the same column (8 × 7 = 56 successors).
- The heuristic cost function h is the number of pairs of queens that are attacking each other, either directly or indirectly.
- The global minimum of this heuristic function, zero, occurs only at a perfect solution.
[Figures: an initial state with h = 17, and a local minimum with h = 1]
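The heuristic h above is easy to compute directly. A small Python sketch (the function name is illustrative; the second tuple is one known 8-queens solution):

```python
from itertools import combinations

def attacking_pairs(state):
    """Heuristic h for the complete-state 8-queens formulation:
    state[c] is the row of the queen in column c; h counts the pairs
    of queens attacking each other (same row or same diagonal)."""
    return sum(1 for (c1, r1), (c2, r2) in combinations(enumerate(state), 2)
               if r1 == r2 or abs(r1 - r2) == abs(c1 - c2))

print(attacking_pairs((0, 1, 2, 3, 4, 5, 6, 7)))  # all on one diagonal: 28
print(attacking_pairs((2, 4, 1, 7, 0, 6, 3, 5)))  # a solution: 0
```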
9. Hill-climbing Search >> Drawbacks
Hill-climbing search often gets stuck for the following reasons:
▸Local maxima >>
▹A peak that is higher than each of its neighboring states but lower than the global maximum.
▹In the 8-queens problem at a local minimum, every move of a single queen makes the situation worse.
▸Ridges >>
▹A sequence of local maxima that is difficult for greedy algorithms to navigate.
▸Plateaux >>
▹A plateau is a flat area of the state-space landscape. It can be a flat local maximum or a shoulder.
10. Hill-climbing Search >> Local Maxima
11. Hill-climbing Search >> Performance
▸From a randomly generated 8-queens starting state…
▸14% of the time it solves the problem
▸86% of the time it gets stuck at a local minimum
▸However…
- It takes only 4 steps on average when it succeeds
- And 3 on average when it gets stuck
- Not bad for a state space with 8^8 ≈ 17 million states
12. Hill-climbing Search >> Sideways Move Version
▸If no uphill (or, for a cost function, downhill) moves are available, allow sideways moves in the hope that the algorithm can escape a shoulder.
- A limit must be placed on the number of consecutive sideways moves to avoid infinite loops.
▸For 8-queens:
- Allowing sideways moves with a limit of 100 raises the percentage of problem instances solved from 14% to 94%.
- However…
- It takes roughly 21 steps for each successful instance
- And 64 for each failure
13. Hill-climbing Search >> Variations
▸Stochastic hill-climbing
- Chooses at random among the uphill (or downhill, for a cost function) moves.
- The selection probability can vary with the steepness of the move.
Sample algorithm (uphill version) >>
[Algorithm figure example: eval(vc) = 107, T = const = 10]
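A steepness-dependent selection rule can be sketched as follows. The logistic acceptance probability and the temperature-like parameter T are one common formulation, assumed here; they are not the only possible choice.

```python
import math
import random

def stochastic_hill_climb(current, neighbors, value, T=10.0, steps=1000):
    """Stochastic hill climbing: pick a random neighbor and accept it
    with probability 1 / (1 + exp(-(value(neighbor) - value(current)) / T)),
    so steeper uphill moves are accepted more often."""
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        delta = value(candidate) - value(current)
        if random.random() < 1.0 / (1.0 + math.exp(-delta / T)):
            current = candidate
    return current

# With T = 10, a move that improves the evaluation by 20 is accepted
# with probability 1 / (1 + exp(-2)), about 0.88.
```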
14. Hill-climbing Search >> Variations
▸First-choice hill-climbing
- A form of stochastic hill climbing that generates successors randomly until one better than the current state is found.
- Useful when a state has a very large number of successors.
▸Random-restart hill-climbing
- "If at first you don't succeed, try, try again": conducts a series of hill-climbing searches from randomly generated initial states.
- Avoids getting stuck at local maxima.
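A random-restart wrapper is easy to sketch. The two-peak objective below is our own toy example: plain hill climbing started near x = 0 stops at the local peak, while restarting makes the global peak very likely to be found.

```python
import random

def hill_climb(x, f):
    """Greedy hill climbing on the integers; neighbors are x - 1 and x + 1."""
    while True:
        best = max((x - 1, x + 1), key=f)
        if f(best) <= f(x):
            return x                    # local maximum reached
        x = best

def random_restart(f, lo, hi, restarts=50):
    """Random-restart wrapper: run hill climbing from fresh random
    starting points and keep the best local optimum found."""
    best = hill_climb(random.randint(lo, hi), f)
    for _ in range(restarts - 1):
        candidate = hill_climb(random.randint(lo, hi), f)
        if f(candidate) > f(best):
            best = candidate
    return best

# Two-peak objective: a local peak at x = 0 (value 5) and the global
# peak at x = 10 (value 9).
f = lambda x: 5 - abs(x) if x < 5 else 9 - abs(x - 10)
print(random_restart(f, -20, 20))
```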
15. Simulated Annealing
▸A hill-climbing algorithm that never makes downhill moves (for a maximization problem) toward states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum.
▸In contrast, a purely random walk, that is, moving to a successor chosen uniformly at random from the set of successors, is complete but extremely inefficient.
▸Simulated annealing combines hill climbing with a random walk in a way that yields both efficiency and completeness.
16. Simulated Annealing
▸In metallurgy, annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, allowing the material to reach a low-energy crystalline state.
▸The simulated-annealing solution is to start by shaking hard (i.e., at a high temperature) and then gradually reduce the intensity of the shaking (i.e., lower the temperature).
▸Instead of picking the best move, the algorithm picks a random move.
- If the move improves the situation, it is always accepted.
- Otherwise, the algorithm accepts the move with some probability less than 1.
▸The probability decreases exponentially with the badness of the move, i.e., the amount ∆E by which the evaluation is worsened.
▸The probability also decreases as the temperature T goes down: bad moves are more likely to be allowed at the start, when T is high, and become more unlikely as T decreases.
▸If the schedule lowers T slowly enough, the algorithm will find a global optimum with probability approaching 1.
17. Simulated Annealing >> Algorithm
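The acceptance rule described above can be sketched in a few lines. This is a minimal version for a maximization problem; the geometric cooling schedule is an assumed choice, since the slides only require that T decrease slowly toward zero.

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule):
    """Simulated annealing: pick a random successor; an improving move
    is always accepted, and a worsening move is accepted with
    probability exp(delta_E / T), where delta_E < 0."""
    current = initial
    t = 0
    while True:
        T = schedule(t)
        if T <= 0:                      # schedule exhausted: freeze
            return current
        candidate = random.choice(neighbors(current))
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
        t += 1

# Geometric cooling schedule (an illustrative choice).
schedule = lambda t: 10.0 * 0.99 ** t if t < 1500 else 0
```

For intuition: with T = 1, a move that worsens the evaluation by 2 is accepted with probability e^-2 ≈ 0.14, matching the exponential decay described above.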
19. Local Beam Search
▸Because of memory limitations, we are just maintaining one node in memory in our previous local search algorithms.
▸The local beam search keeps track of k states rather than just one.
▹It begins with k randomly generated states.
▹At each step, all the successors of all k states are generated. If any one is a goal, the algorithm halts. Otherwise it
selects the k best successors from the complete list and repeats.
▸In a random restart search each search process runs independently of the others; while in local beam search useful
information is passed among the parallel search threads.
▸The algorithm quickly abandons unfruitful searches and moves its resources to where the most progress is being made.
▸Problem:
- It can suffer from a lack of diversity among the k states - they can quickly become concentrated in a small region
of the state space, making the search little more than an expensive version of hill-climbing.
▸Solution:
- Stochastic Beam Search : Instead of choosing the best k from the pool of candidate successors, stochastic beam
search chooses k successors at random, with the probability of choosing a given successor being an increasing
function of its value.
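The k-state bookkeeping described above can be sketched as follows; this is a toy maximization example, and the function and parameter names (`neighbors`, `value`, etc.) are illustrative, not from the slides.

```python
def local_beam_search(k, initial_states, neighbors, value, iterations=100):
    """Minimal local beam search sketch.

    neighbors(s): successors of state s; value(s): score to maximize.
    Each step pools the successors of all k states and keeps the k best.
    """
    states = list(initial_states)
    for _ in range(iterations):
        pool = {s for state in states for s in neighbors(state)}
        best = sorted(pool, key=value, reverse=True)[:k]
        if value(best[0]) <= value(max(states, key=value)):
            break  # no successor improves on the current best: stop
        states = best
    return max(states, key=value)

# toy example: maximize -(x - 7)^2 over the integers, neighbors are x +/- 1
score = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
best = local_beam_search(k=3, initial_states=[0, 20, -5],
                         neighbors=step, value=score)  # converges to 7
```

Note how the beam pools successors from all current states before selecting, which is what lets progress in one thread redirect the others.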
20. Genetic Algorithm (GA)
▸A Genetic Algorithm is a variant of stochastic beam search in which successor states are generated by combining two
parents rather than by modifying a single state.
▸Population >> GAs begin with a set of k randomly generated states, called population.
▸Individual >> Each state, or individual, is represented as a string over a finite alphabet.
For example, an 8-queens state must specify the positions of 8 queens, each in a column of 8 squares.
- way 1: a bit string of 8 x log2(8) = 8 x 3 = 24 bits. (problem: a crossover point can cut in the middle of a digit's encoding)
- way 2: a string representing a sequence of 8 digits.
▸Genetic algorithms combine an uphill tendency with random exploration and exchange of
information among parallel search threads.
Example individual: 1 6 2 5 7 4 8 3
21. Genetic Algorithm (GA) >> Fitness Function
▸Each state/individual is rated by the objective/fitness function. A fitness function should return higher values for better
states.
For 8-queens, the fitness function = number of non-attacking pairs of queens, which has a value of 28 (all C(8,2) = 28 pairs) for a solution.
▸In this variant, we assume that the probability of being chosen for reproducing is directly proportional to the fitness score.
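The 8-queens fitness function above can be sketched directly; here `state[i]` is assumed to hold the row (1-8) of the queen in column i, matching the digit-string representation.

```python
from itertools import combinations

def fitness(state):
    """Number of non-attacking pairs of queens; 28 = C(8,2) for a solution.

    state[i] is the row (1-8) of the queen in column i, so two queens can
    only attack along a shared row or a shared diagonal.
    """
    non_attacking = 0
    for i, j in combinations(range(len(state)), 2):
        same_row = state[i] == state[j]
        same_diagonal = abs(state[i] - state[j]) == j - i
        if not (same_row or same_diagonal):
            non_attacking += 1
    return non_attacking

fitness((2, 4, 6, 8, 3, 1, 7, 5))  # a known 8-queens solution -> 28
```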
22. Genetic Algorithm (GA) >> Reproduction & Crossover
▸Pairs are selected at random for reproduction, in accordance with the calculated probabilities of the previous step.
▸For each pair to be mated, a crossover point is chosen randomly from the positions in the string.
- In our case, the crossover points are after the third digit in the first pair and after the fifth digit in the second pair.
- In our variant, each mating of two parents produces two offspring.
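Single-point crossover as described above can be sketched as follows (the function name is my own); the usage line reuses two of the Max-One strings that appear later in the deck, where a cut after the third digit reproduces the slides' result.

```python
import random

def crossover(parent1, parent2, point=None):
    """Single-point crossover: swap the tails of two equal-length strings,
    producing two offspring."""
    if point is None:
        point = random.randint(1, len(parent1) - 1)  # cut strictly inside
    return (parent1[:point] + parent2[point:],
            parent2[:point] + parent1[point:])

# cut after the third digit, as in the first pair of the examples
child1, child2 = crossover("1111010101", "1110110101", point=3)
# child1 == "1110110101", child2 == "1111010101"
```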
23. Genetic Algorithm (GA) >> Reproduction & Crossover
▸The population is quite diverse early on in the process, so crossover frequently takes large steps in the state space early in
the search process and smaller steps later on when most individuals are quite similar.
▸The following figure illustrates that when two parent states are quite different, the crossover operation can produce a state
that is a long way from either parent state.
24. Genetic Algorithm (GA) >> Mutation
▸Each location is subject to random mutation with a small independent probability.
▸For the 8-queens problem, mutation chooses a queen at random and moves it to a random square in its column.
▸In our example, one digit was mutated in the first, third and fourth offspring.
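The 8-queens mutation just described can be sketched as follows (a minimal illustration, with my own function name); the usage line reuses the example individual from the representation slide.

```python
import random

def mutate(state):
    """8-queens mutation: pick a random column and move its queen to a
    random row (1-8) in that column."""
    column = random.randrange(len(state))
    mutated = list(state)
    mutated[column] = random.randint(1, len(state))
    return tuple(mutated)

mutate((1, 6, 2, 5, 7, 4, 8, 3))  # differs from the input in at most one column
```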
25. Genetic Algorithm (GA) >> Mutation
26. Genetic Algorithm (GA) >> Comments
▸Positive points
- Random exploration can find solutions that local search can’t
- Appealing connection to human evolution
▸Negative points
- Large number of tunable parameters
- Lack of good empirical studies comparing to simpler methods
- Useful on some sets of problems, but there is no convincing evidence that GAs are better than hill-climbing with
random restarts in general.
27. Genetic Algorithm (GA) >> Max One Problem
▸The Max-One problem is a very simple problem where evolution is used to find a specific target gene.
▸A gene is essentially a binary string: a piece of text filled with random binary values.
Start Gene: 1010001010
Target Gene: 1111111111
▸The fitness f of a candidate solution to the Max-One problem is the number of ones in its genetic code.
28. Genetic Algorithm (GA) >> Max One Problem
▸We start with a population of n random strings.
▸Suppose that l=10 and n=6
s1= 1111010101 f (s1) = 7
s2= 0111000101 f (s2) = 5
s3= 1110110101 f (s3) = 7
s4= 0100010011 f (s4) = 4
s5= 1110111101 f (s5) = 8
s6= 0100110000 f (s6) = 3
29. Genetic Algorithm (GA) >> Selection
▸We randomly select a subset of the individuals based on their fitness:
Individual i is chosen with probability f(i) / Σj f(j).
s1’= 1111010101(s1)
s2’ = 1110110101(s3)
s3’ = 1110111101(s5)
s4’ = 0111000101 (s2)
s5’ = 0100010011 (s4)
s6’ = 1110111101 (s5)
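This fitness-proportionate ("roulette wheel") selection can be sketched as follows, assuming selection with replacement as in the example above (the function name is my own):

```python
import random

def roulette_select(population, fitness, k):
    """Draw k individuals with replacement; individual i is chosen with
    probability f(i) / sum_j f(j)."""
    weights = [fitness(ind) for ind in population]
    return random.choices(population, weights=weights, k=k)

# the six Max-One strings from the previous slide
pop = ['1111010101', '0111000101', '1110110101',
       '0100010011', '1110111101', '0100110000']
new_pop = roulette_select(pop, lambda s: s.count('1'), k=6)
```

`random.choices` handles the weighted draw directly; fitter strings such as s5 (f = 8) are simply more likely to appear multiple times in `new_pop`.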
30. Genetic Algorithm (GA) >> Crossover
▸Next we mate strings for crossover. For each couple we first decide whether to actually perform the crossover or not.
▸If we decide to actually perform crossover, we randomly extract the crossover points.
s1’= 1111010101(s1) s1’’ = 1110110101
s2’ = 1110110101(s3) s2’’ = 1111010101
s3’ = 1110111101(s5) s3’’ = 1110111101
s4’ = 0111000101 (s2) s4’’ = 0111000101
s5’ = 0100010011 (s4) s5’’ = 0100011101
s6’ = 1110111101 (s5) s6’’ = 1110110011
31. Genetic Algorithm (GA) >> Mutation
▸For each bit that we are to copy to the new population, we allow a small probability of error (for example, 0.1).
s1’’ = 1110110101 s1’’’ = 1110100101
s2’’ = 1111010101 s2’’’ = 1111110100
s3’’ = 1110111101 s3’’’ = 1110101111
s4’’ = 0111000101 s4’’’ = 0111000101
s5’’ = 0100011101 s5’’’ = 0100011101
s6’’ = 1110110011 s6’’’ = 1110110001
Go through the same process all over again, until a stopping criterion is met.
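The whole Max-One loop (selection, crossover, mutation, repeat until a stopping criterion) can be sketched as follows. The values l = 10, n = 6 and the 0.1 mutation rate mirror the walkthrough; the crossover rate, generation cap, seed and all names are my own illustrative assumptions.

```python
import random

def max_one_ga(length=10, pop_size=6, crossover_rate=0.7,
               mutation_rate=0.1, generations=200, seed=0):
    """Toy genetic algorithm for the Max-One problem."""
    rng = random.Random(seed)
    fitness = lambda s: s.count('1')
    pop = [''.join(rng.choice('01') for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        if any(fitness(s) == length for s in pop):
            break  # stopping criterion: target gene 111...1 reached
        # selection: fitness-proportionate, with replacement
        weights = [fitness(s) for s in pop]
        if sum(weights) == 0:
            weights = [1] * pop_size  # degenerate case: select uniformly
        pop = rng.choices(pop, weights=weights, k=pop_size)
        # crossover: single point, applied to consecutive pairs
        mated = []
        for a, b in zip(pop[0::2], pop[1::2]):
            if rng.random() < crossover_rate:
                p = rng.randint(1, length - 1)
                a, b = a[:p] + b[p:], b[:p] + a[p:]
            mated += [a, b]
        # mutation: flip each copied bit with a small probability
        pop = [''.join('10'[int(c)] if rng.random() < mutation_rate else c
                       for c in s) for s in mated]
    return max(pop, key=fitness)

best = max_one_ga()
```

With a fixed seed the run is reproducible; in practice the stopping criterion might instead be a generation limit or fitness stagnation.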
32. Genetic Algorithm (GA) >> Previous Question