This presentation discusses the following topics:
What is A-Star (A*) Algorithm in Artificial Intelligence?
A* Algorithm Steps
Why is A* Search Algorithm Preferred?
A* and Its Basic Concepts
What is a Heuristic Function?
Admissibility of the Heuristic Function
Consistency of the Heuristic Function
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists
FellowBuddy.com is an innovative platform that brings students together to share notes, exam papers, study guides, project reports and presentations for upcoming exams.
We connect students who have an understanding of course material with students who need help.
Benefits:
# Students can catch up on notes they missed because of an absence.
# Underachievers can find peer-developed notes that break down lecture and study material in a way that they can understand.
# Students can earn better grades, save time and study effectively.
Our Vision & Mission – Simplifying Students' Lives
Our Belief – “The great breakthrough in your life comes when you realize it, that you can learn anything you need to learn; to accomplish any goal that you have set for yourself. This means there are no limits on what you can be, have or do.”
Like Us - https://www.facebook.com/FellowBuddycom
The Foundations of Artificial Intelligence, The History of
Artificial Intelligence, and the State of the Art. Intelligent Agents: Introduction, How Agents
should Act, Structure of Intelligent Agents, Environments. Solving Problems by Searching:
problem-solving Agents, Formulating problems, Example problems, and searching for Solutions,
Search Strategies, Avoiding Repeated States, and Constraint Satisfaction Search. Informed
Search Methods: Best-First Search, Heuristic Functions, Memory Bounded Search, and Iterative
Improvement Algorithms.
A Star Algorithm | A* Algorithm in Artificial Intelligence | Edureka
YouTube Link: https://youtu.be/amlkE0g-YFU
** Artificial Intelligence and Deep Learning: https://www.edureka.co/ai-deep-learni... **
This Edureka PPT on 'A Star Algorithm' teaches you all about the A* algorithm: its uses, advantages and disadvantages, and much more. It also shows how the algorithm can be implemented in practice, and includes a comparison between Dijkstra's algorithm and A*.
Check out our playlist for more videos: http://bit.ly/2taym8X
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
Uncertain Knowledge and Reasoning in Artificial Intelligence (Experfy)
Learn how to make informed decisions based on probabilities and expert knowledge.
Understand and explore one of the most exciting advances in AI of recent decades.
Many hands-on examples, including Python code.
Check it out: https://www.experfy.com/training/courses/uncertain-knowledge-and-reasoning-in-artificial-intelligence
What is A* Search? What is Heuristic Search? What is Tree Search Algorithm? (Santosh Pandeya)
Moving from one place to another is a task that we humans do almost every day. We try to find the shortest path that enables us to reach our destination faster and makes the whole process of traveling as efficient as possible. In the old days, we would use trial and error with the available paths and had to guess which path was shorter or longer.
What is a Search Algorithm?
Tree search Algorithm
Breadth-First Search
Depth-First Search
Bidirectional Search
Uniform Cost Search
Iterative Deepening Depth-First Search
Heuristic Search
Manhattan distance
Pure Heuristic Search
A* Search
Formula
A* Search Explanation
What is artificial intelligence, Hill Climbing Procedure, State Space Representation and Search, classifying problems in AI, AO* algorithm
Search techniques in AI. Uninformed: Breadth First Search and Depth First Search. Informed search strategies: A*, Best First Search, and Constraint Satisfaction Problems: cryptarithmetic
Unit – II
Uninformed and Informed Search Algorithms: Random search, Search with closed and
open list, Depth first and Breadth first search, Heuristic search: Generate & Test, Hill
Climbing, Best first search, A* algorithm, Game Search, Alpha-Beta Pruning Genetic
Algorithm.
TYPES OF SEARCH ALGORITHMS IN ARTIFICIAL INTELLIGENCE:
1. Uninformed Search Algorithms.
2. Informed Search Algorithms.
Uninformed Search
• Uninformed Search, which is also known as Blind Search or Brute-Force Search, does
not have any additional information or domain knowledge that can help it reach the
goal, other than the one provided in the problem definition.
This type of search algorithm focuses only on reaching the goal; because it uses no
domain-specific knowledge, the same strategies can be applied to a wide variety of search
problems.
Types of Uninformed Search Algorithms
• Breadth-First Search
• Uniform Cost Search
• Depth First Search
• Iterative Deepening Search
• Bidirectional Search
Informed Search
Also known as Heuristic Search, this type of search algorithm uses the domain knowledge and
follows a heuristic function that estimates the cost of the optimal path between two states as
well as how close a state is to the goal.
Types of Informed Search Algorithms
• Greedy Search
• A* Tree Search
• A* Graph Search
Uninformed Search Methods:
Breadth-First Search:
Consider the state space of a problem that takes the form of a tree. Now, if we search the goal
along each breadth of the tree, starting from the root and continuing up to the largest depth, we
call it breadth first search.
• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
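The NODE-LIST procedure above can be sketched in Python; the example graph mirrors the illustration that follows (its exact edges are an assumption, since the original figure is not reproduced here):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: treat NODE-LIST as a FIFO queue of paths."""
    node_list = deque([[start]])              # step 1: NODE-LIST holds the initial state
    while node_list:                          # step 2: until NODE-LIST is empty
        path = node_list.popleft()            # 2a: remove the first element
        state = path[-1]
        if state == goal:                     # 2b-ii: goal test
            return path
        for successor in graph.get(state, []):        # 2b-i: generate new states
            node_list.append(path + [successor])      # 2b-iii: add to the END (FIFO)
    return None

# Edges assumed from the illustration; BFS finds the shortest path A-C-G.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'], 'D': ['C', 'F']}
print(bfs(graph, 'A', 'G'))  # ['A', 'C', 'G']
```

Because new states go to the back of the queue, all nodes at depth d are expanded before any node at depth d+1.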
BFS illustrated:
Step 1: Initially fringe contains only one node corresponding to the source state A.
FRINGE: A
Step 2: A is removed from fringe. The node is expanded, and its children B and C are
generated. They are placed at the back of fringe.
FRINGE: B C
Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and
put at the back of fringe.
FRINGE: C D E
Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.
FRINGE: D E D G
Step 5: Node D is removed from fringe. Its children C and F are generated and added to the
back of fringe.
FRINGE: E D G C F
Step 6: Node E is removed from fringe. It has no children.
FRINGE: D G C F
Step 7: D is expanded; B and F are put in OPEN.
FRINGE: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the
path A C G by following the parent pointers of the node corresponding to G. The algorithm
terminates.
Breadth first search is:
One of the simplest search strategies
Complete. If there is a solution, BFS is guaranteed to find it.
If there are multiple solutions, then a minimal solution will be found
The algorithm is optimal (i.e., admissible) if all operators have the same cost.
Otherwise, breadth first search finds a solution with the fewest steps, which is not necessarily the cheapest path.
Time complexity: O(b^d)
Space complexity: O(b^d)
Optimality: Yes
b – branching factor (maximum number of successors of any node)
d – depth of the shallowest goal node
m – maximum length of any path in the search space
Advantages: Finds the path of minimal length to the goal.
Disadvantages:
Requires the generation and storage of a tree whose size is exponential in the depth of the
shallowest goal node.
The breadth first search algorithm cannot be used effectively unless the search space is
quite small.
Depth-First Search
We may sometimes search the goal along the largest depth of the tree, and move up only when
further traversal along the depth is not possible. We then attempt to find alternative offspring
of the parent of the node (state) last visited. If we visit the nodes of a tree using the above
principles to search the goal, the traversal made is called depth first traversal and consequently
the search strategy is called depth first search.
• Algorithm:
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit.
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state in front of NODE-LIST
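The same skeleton with NODE-LIST used as a stack (new states added in front) gives depth-first search. The graph again mirrors the illustration that follows (edges are an assumption), and a depth limit guards against unbounded paths:

```python
def dfs(graph, start, goal, limit=20):
    """Depth-first search: treat NODE-LIST as a LIFO stack, with a depth limit."""
    node_list = [[start]]                     # step 1: NODE-LIST holds the initial state
    while node_list:                          # step 2: until NODE-LIST is empty
        path = node_list.pop()                # 2a: remove the most recently added path
        state = path[-1]
        if state == goal:                     # 2b-ii: goal test
            return path
        if len(path) < limit:                 # guard against infinite depth
            for successor in reversed(graph.get(state, [])):
                node_list.append(path + [successor])  # 2b-iii: add in FRONT (LIFO)
    return None

# Edges assumed from the illustration; DFS returns the path A-B-D-C-G.
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'D': ['C', 'F'], 'C': ['G']}
print(dfs(graph, 'A', 'G'))  # ['A', 'B', 'D', 'C', 'G']
```

The `reversed` call makes successors pop in their listed order, matching the hand trace.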
DFS illustrated:
Step 1: Initially fringe contains only the node for A.
FRINGE: A
Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of fringe.
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.
FRINGE: D E C
Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.
FRINGE: C F E C
Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.
FRINGE: G F E C
The solution path A-B-D-C-G is returned and the algorithm terminates.
Properties of depth first search:
1. The algorithm takes exponential time.
2. If N is the maximum depth of a node in the search space, in the worst case the algorithm takes
time O(b^N).
3. The space taken is linear in the depth of the search tree: O(bN).
Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the
search tree has infinite depth, the algorithm may not terminate. This can happen if the search space is
infinite. It can also happen if the search space contains cycles. The latter case can be handled by
checking for cycles in the algorithm. Thus Depth First Search is not complete.
Informed Search Algorithms
So far we have talked about uninformed search algorithms, which look through the search
space for all possible solutions of the problem without any additional knowledge about the
search space. An informed search algorithm, by contrast, uses additional knowledge such as how far
we are from the goal, the path cost, and how to reach the goal node. This knowledge helps agents
explore less of the search space and find the goal node more efficiently.
Informed search algorithms are more useful for large search spaces. Because an informed search
algorithm uses the idea of a heuristic, it is also called heuristic search.
Heuristic function: A heuristic is a function used in informed search to find the
most promising path. It takes the current state of the agent as its input and produces an
estimate of how close the agent is to the goal. The heuristic method might not
always give the best solution, but it is guaranteed to find a good solution in reasonable time.
A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it
estimates the cost of an optimal path from the given state to the goal. The value of the heuristic
function is always non-negative.
Admissibility of the heuristic function is given as:
1. h(n) <= h*(n)
Here h(n) is the heuristic (estimated) cost and h*(n) is the true cost of an optimal path from n to the goal. Hence the heuristic cost should be less than or equal to the true cost, i.e. the heuristic never overestimates. For example, straight-line distance is an admissible heuristic for route finding, since no road between two points can be shorter than the straight line.
Pure Heuristic Search:
Pure heuristic search is the simplest form of heuristic search algorithm. It expands nodes based on their
heuristic value h(n). It maintains two lists, OPEN and CLOSED: the CLOSED list holds nodes that have
already been expanded, and the OPEN list holds nodes that have not yet been expanded.
On each iteration, the node n with the lowest heuristic value is expanded, all its successors are generated,
and n is placed on the CLOSED list. The algorithm continues until a goal state is found.
In the informed search we will discuss two main algorithms which are given below:
o Best First Search Algorithm (Greedy Search)
o A* Search Algorithm
Heuristic Search methods Generate and Test Algorithm
1. generate a possible solution which can either be a point in the problem space
or a path from the initial state.
2. test to see if this possible solution is a real solution by comparing the state
reached with the set of goal states.
3. if it is a real solution, return. Otherwise repeat from 1.
This method is basically a depth first search as complete solutions must be created
before testing. It is often called the British Museum method as it is like looking for
an exhibit at random. A heuristic is needed to sharpen up the search. Consider the
problem of four 6-sided cubes, and each side of the cube is painted in one of four
colours. The four cubes are placed next to one another and the problem lies in
arranging them so that the four available colours are displayed whichever way the 4
cubes are viewed. The problem can only be solved if there are at least four sides
coloured in each colour and the number of options tested can be reduced using
heuristics if the most popular colour is hidden by the adjacent cube.
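As a minimal sketch (the candidate stream and goal test here are invented for illustration, not the cube puzzle itself), generate-and-test is simply a loop that pairs a generator with a tester:

```python
import itertools

def generate_and_test(candidates, is_goal):
    """Generate possible solutions one at a time and test each against the goal."""
    for candidate in candidates:       # 1. generate a possible solution
        if is_goal(candidate):         # 2. test whether it is a real solution
            return candidate           # 3. if it is, return it
    return None                        # no candidate passed the test

# Toy problem: find the permutation of (3, 1, 2) that is in ascending order.
result = generate_and_test(itertools.permutations((3, 1, 2)),
                           lambda c: list(c) == sorted(c))
print(result)  # (1, 2, 3)
```

A smarter generator (one that prunes candidates using a heuristic, as in the cube example) plugs into the same loop unchanged.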
Best-first Search Algorithm (Greedy Search):
Greedy best-first search always selects the path which appears best at that moment. It is a
combination of depth-first search and breadth-first search, and it uses a heuristic function to
guide the search. Best-first search allows us to take the advantages of both algorithms: with its
help, at each step, we can choose the most promising node. In the best first search algorithm, we expand
the node which is closest to the goal node, where closeness is estimated by the heuristic function, i.e.
1. f(n) = h(n)
where h(n) = estimated cost from node n to the goal.
Greedy best first search is implemented with a priority queue.
Best first search algorithm:
o Step 1: Place the starting node into the OPEN list.
o Step 2: If the OPEN list is empty, Stop and return failure.
o Step 3: Remove from the OPEN list the node n which has the lowest value of h(n), and
place it in the CLOSED list.
o Step 4: Expand the node n, and generate the successors of node n.
o Step 5: Check each successor of node n, and find whether any node is a goal node or
not. If any successor node is goal node, then return success and terminate the search,
else proceed to Step 6.
o Step 6: For each successor node, the algorithm computes the evaluation function f(n)
and then checks whether the node is already in the OPEN or CLOSED list. If the node is
in neither list, then add it to the OPEN list.
o Step 7: Return to Step 2.
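The steps above can be sketched in Python using `heapq` as the priority-queue OPEN list. The graph edges and h values are assumptions, chosen so the run reproduces an S → B → F → G traversal:

```python
import heapq

def greedy_best_first(graph, h, start, goal):
    """Expand the OPEN node with the lowest heuristic value h(n)."""
    open_list = [(h[start], start, [start])]      # Step 1: start node on OPEN
    closed = set()
    while open_list:                              # Step 2: fail when OPEN is empty
        _, node, path = heapq.heappop(open_list)  # Step 3: lowest h(n) first
        if node == goal:                          # Step 5: goal test
            return path
        closed.add(node)                          # move n to CLOSED
        for succ in graph.get(node, []):          # Step 4: generate successors
            if succ not in closed:                # Step 6: skip expanded nodes
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
    return None

# Graph edges and heuristic values are assumptions for illustration:
graph = {'S': ['A', 'B'], 'B': ['E', 'F'], 'F': ['I', 'G']}
h = {'S': 13, 'A': 12, 'B': 4, 'E': 8, 'F': 2, 'I': 9, 'G': 0}
print(greedy_best_first(graph, h, 'S', 'G'))  # ['S', 'B', 'F', 'G']
```

Note that the queue is ordered by h(n) alone; g(n) plays no role, which is why greedy search is neither complete in general nor optimal.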
Advantages:
o Best first search can switch between BFS and DFS by gaining the advantages of both the algorithms.
o This algorithm is more efficient than BFS and DFS algorithms.
Disadvantages:
o It can behave as an unguided depth-first search in the worst case scenario.
o It can get stuck in a loop as DFS.
o This algorithm is not optimal.
Example:
Consider the below search problem, and we will traverse it using greedy best-first search. At each iteration,
each node is expanded using evaluation function f(n)=h(n) , which is given in the below table.
In this search example, we are using two lists, OPEN and CLOSED. Following are the
iterations for traversing the above example.
Expand the nodes of S and put in the CLOSED list
Initialization: Open [A, B], Closed [S]
Iteration 1: Open [A], Closed [S, B]
Iteration 2: Open [E, F, A], Closed [S, B]
: Open [E, A], Closed [S, B, F]
Iteration 3: Open [I, G, E, A], Closed [S, B, F]
: Open [I, E, A], Closed [S, B, F, G]
Hence the final solution path will be: S----> B----->F----> G
Time Complexity: The worst case time complexity of greedy best first search is O(b^m).
Space Complexity: The worst case space complexity of greedy best first search is O(b^m), where m is the
maximum depth of the search space.
Complete: Greedy best-first search is incomplete, even if the given state space is finite.
Optimal: Greedy best first search is not optimal.
A* Search Algorithm:
A* search is the most commonly known form of best-first search. It uses the heuristic function
h(n) together with the cost to reach node n from the start state, g(n). It combines features of UCS
and greedy best-first search, which lets it solve problems efficiently. A* finds
the shortest path through the search space using the heuristic function; it expands a smaller
search tree and provides an optimal result faster. A* is similar to UCS except that it orders
nodes by g(n) + h(n) instead of g(n).
In the A* search algorithm, we use the search heuristic as well as the cost to reach the node. Hence we
can combine both costs as follows, and this sum is called the fitness number:
f(n) = g(n) + h(n)
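A minimal A* sketch in Python, with a small weighted graph and an admissible heuristic invented for illustration (not from the original slides):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: order the OPEN list by the fitness number f(n) = g(n) + h(n)."""
    open_list = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}                            # cheapest known g(n) per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return g, path                         # optimal cost and path
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float('inf')):   # found a cheaper route to succ
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h[succ], g2, succ, path + [succ]))
    return None

# Weighted edges and heuristic values are assumptions for illustration:
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 12)], 'B': [('G', 5)]}
h = {'S': 7, 'A': 6, 'B': 2, 'G': 0}
print(a_star(graph, h, 'S', 'G'))  # (8, ['S', 'A', 'B', 'G'])
```

With h set to zero everywhere this reduces to uniform cost search, which is exactly the "UCS plus heuristic" relationship described above.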
o Hill climbing algorithm is a local search algorithm which continuously moves in the direction of
increasing elevation/value to find the peak of the mountain or best solution to the problem. It
terminates when it reaches a peak value where no neighbor has a higher value.
o Hill climbing algorithm is a technique which is used for optimizing the mathematical problems. One
of the widely discussed examples of Hill climbing algorithm is Traveling-salesman Problem in
which we need to minimize the distance traveled by the salesman.
o It is also called greedy local search as it only looks to its good immediate neighbor state and not
beyond that.
o A node of hill climbing algorithm has two components which are state and value.
o Hill Climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle the search tree or graph as it only keeps a
single current state.
Features of Hill Climbing:
Following are some main features of Hill Climbing Algorithm:
o Generate and Test variant: Hill Climbing is a variant of the Generate and Test method. The
Generate and Test method produces feedback which helps to decide which direction to move in the
search space.
o Greedy approach: Hill-climbing algorithm search moves in the direction which optimizes the cost.
o No backtracking: It does not backtrack the search space, as it does not remember the previous
states.
Different regions in the state space landscape:
Local Maximum: Local maximum is a state which is better than its neighbor states, but there is also another
state which is higher than it.
Global Maximum: Global maximum is the best possible state of state space landscape. It has the highest
value of objective function.
Current state: It is a state in a landscape diagram where an agent is currently present.
Flat local maximum: It is a flat space in the landscape where all the neighbor states of current states have
the same value.
Shoulder: It is a plateau region which has an uphill edge.
Types of Hill Climbing Algorithm:
1. Simple hill Climbing:
2. Steepest-Ascent hill-climbing:
3. Stochastic hill Climbing:
1. Simple Hill Climbing:
Simple hill climbing is the simplest way to implement a hill climbing algorithm. It
evaluates one neighbour node state at a time and selects the first one which improves on the
current cost, setting it as the current state. It checks only one successor state at a time; if that
successor is better than the current state, it moves there, otherwise it stays in the same state.
This algorithm has the following features:
o Less time consuming
o Less optimal solution and the solution is not guaranteed
Algorithm for Simple Hill Climbing:
o Step 1: Evaluate the initial state; if it is the goal state, return success and stop.
o Step 2: Loop until a solution is found or there is no new operator left to apply.
o Step 3: Select and apply an operator to the current state.
o Step 4: Check the new state:
1. If it is the goal state, then return success and quit.
2. Else, if it is better than the current state, then make it the current state.
3. Else, return to Step 2.
o Step 5: Exit.
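The steps above can be sketched in Python; the 1-D objective function, step size, and iteration cap are assumptions for this toy example:

```python
def simple_hill_climbing(f, start, step=1, max_iters=1000):
    """Move to the first neighbour that improves f; stop when none does."""
    current = start
    for _ in range(max_iters):
        moved = False
        # Examine one successor at a time (Steps 3-4): accept the FIRST
        # neighbour that is better than the current state.
        for neighbor in (current - step, current + step):
            if f(neighbor) > f(current):
                current = neighbor
                moved = True
                break
        if not moved:          # no better neighbour: a (local) maximum
            return current
    return current

# Example objective with a single peak at x = 5.
peak = lambda x: -(x - 5) ** 2
print(simple_hill_climbing(peak, start=0))   # climbs from 0 up to 5
```

Because it accepts the first improvement it finds, a run is fast, but on a multi-peaked objective it can stop at a local maximum rather than the global one.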
2. Steepest-Ascent hill climbing:
The Steepest-Ascent algorithm is a variation of the simple hill climbing algorithm. It
examines all the neighbouring nodes of the current state and selects the neighbour node
closest to the goal state. This algorithm consumes more time because it evaluates multiple
neighbours.
Algorithm for Steepest-Ascent hill climbing:
o Step 1: Evaluate the initial state. If it is the goal state, return success and stop; otherwise
make the initial state the current state.
o Step 2: Loop until a solution is found or the current state does not change.
1. Let SUCC be a state such that any successor of the current state will be better
than it (i.e. initialize SUCC to a worst-case value).
2. For each operator that applies to the current state:
I. Apply the operator and generate a new state.
II. Evaluate the new state.
III. If it is the goal state, return it and quit; otherwise compare it to SUCC.
IV. If it is better than SUCC, then set SUCC to the new state.
V. After all operators are tried, if SUCC is better than the current state,
then set the current state to SUCC.
o Step 3: Exit.
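In the same spirit, a minimal Python sketch of steepest ascent (the same assumed 1-D setting as before; SUCC holds the best neighbour found in each pass):

```python
def steepest_ascent(f, start, step=1, max_iters=1000):
    """Examine ALL neighbours each pass; move only to the best one."""
    current = start
    for _ in range(max_iters):
        # SUCC is the best of all neighbours (Step 2 in the steps above).
        succ = max((current - step, current + step), key=f)
        if f(succ) <= f(current):   # no neighbour beats the current state
            return current
        current = succ
    return current

peak = lambda x: -(x - 7) ** 2
print(steepest_ascent(peak, start=0))   # reaches the peak at 7
```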
3. Stochastic hill climbing:
Stochastic hill climbing does not examine all its neighbours before moving. Instead, this
search algorithm selects one neighbour node at random and decides whether to move to it
or examine another state.
Problems in Hill Climbing Algorithm:
1. Local Maximum: A local maximum is a peak state in the landscape which is better than
each of its neighbouring states, but another state also exists which is higher than the
local maximum.
Solution: The backtracking technique can be a solution to the local maximum problem.
Maintain a list of promising paths so that the algorithm can backtrack through the search
space and explore other paths as well.
2. Plateau: A plateau is a flat area of the search space in which all the neighbour states of
the current state have the same value, so the algorithm cannot find a best direction to
move. A hill-climbing search may get lost in the plateau area.
Solution: Take big steps (or very small steps) while searching. Alternatively, randomly
select a state far away from the current state, so that the algorithm may land in a
non-plateau region.
3. Ridges: A ridge is a special form of local maximum. It is an area higher than its
surrounding areas which itself has a slope, and it cannot be climbed in a single move.
Solution: Using bidirectional search, or moving in several directions at once, can
mitigate this problem.
A* Algorithm
A* Algorithm is one of the best-known and most popular techniques used for path finding
and graph traversal.
Many games and web-based maps use this algorithm to find the shortest path
efficiently.
It is essentially a best-first search algorithm.
A* Algorithm works as-
It maintains a tree of paths originating at the start node.
It extends those paths one edge at a time.
It continues until its termination criterion is satisfied.
A* Algorithm extends the path that minimizes the following function-
f(n) = g(n) + h(n)
Here,
‘n’ is the last node on the path
g(n) is the cost of the path from start node to node ‘n’
h(n) is a heuristic function that estimates cost of the cheapest path from node ‘n’ to the
goal node
A* Algorithm
The implementation of A* Algorithm involves maintaining two lists- OPEN and
CLOSED.
OPEN contains those nodes that have been evaluated by the heuristic function but have
not been expanded into successors yet.
CLOSED contains those nodes that have already been visited.
The algorithm is as follows:
Step-1:
Define a list OPEN.
Initially, OPEN consists solely of a single node, the start node S.
Step-2:
If the list is empty, return failure and exit.
Step-3:
Remove the node n with the smallest value of f(n) from OPEN and move it to the list CLOSED.
If node n is a goal state, return success and exit.
Step-4:
Expand node n.
Step-5:
If any successor of n is the goal node, return success and the solution by tracing the
path from the goal node back to S.
Otherwise, go to Step-6.
Step-6: For each successor node,
Apply the evaluation function f to the node.
If the node is not in either list, add it to OPEN.
Step-7:
Go back to Step-2.
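A compact Python sketch of these steps; the small graph, edge costs, and heuristic table are assumptions for illustration (they are not the example graph that follows):

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: node -> [(successor, edge_cost)]; h: heuristic estimates."""
    # OPEN is a priority queue ordered by f(n) = g(n) + h(n).
    open_list = [(h[start], 0, start, [start])]
    closed = set()                     # CLOSED: nodes already expanded
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:               # goal reached: return path and cost
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in closed:     # skip nodes already in CLOSED
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float('inf')          # OPEN empty: failure (Step-2)

# Tiny assumed example: costs on edges, admissible heuristic values.
graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 3)]}
h = {'A': 4, 'B': 3, 'C': 2, 'D': 0}
print(a_star(graph, h, 'A', 'D'))   # (['A', 'B', 'C', 'D'], 5)
```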
Example Problem:
Consider the following graph-
The numbers written on edges represent the distance between the nodes.
The numbers written on nodes represent the heuristic value.
Find the most cost-effective path to reach from start state A to final state J using A* Algorithm.
Solution-
Step-1:
We start with node A.
Node B and Node F can be reached from node A.
A* Algorithm calculates f(B) and f(F).
f(B) = 6 + 8 = 14
f(F) = 3 + 6 = 9
Since f(F) < f(B), it decides to go to node F.
Path- A → F
Step-2:
Node G and Node H can be reached from node F.
A* Algorithm calculates f(G) and f(H).
f(G) = (3+1) + 5 = 9
f(H) = (3+7) + 3 = 13
Since f(G) < f(H), it decides to go to node G.
Path- A → F → G
Step-3:
Node I can be reached from node G.
A* Algorithm calculates f(I).
f(I) = (3+1+3) + 1 = 8
It decides to go to node I.
Path- A → F → G → I
Step-4:
Node E, Node H and Node J can be reached from node I.
A* Algorithm calculates f(E), f(H) and f(J).
f(E) = (3+1+3+5) + 3 = 15
f(H) = (3+1+3+2) + 3 = 12
f(J) = (3+1+3+3) + 0 = 10
Since f(J) is the least, it decides to go to node J.
Path- A → F → G → I → J
This is the required shortest path from node A to node J.
It is important to note that
A* Algorithm is one of the best path-finding algorithms.
But it does not always produce the shortest path.
This is because it depends heavily on the heuristic: the shortest path is guaranteed only
when the heuristic is admissible, i.e. it never overestimates the true cost.
Adversarial Search(Game Search)
Adversarial search examines the problems that arise when we try to plan ahead in a world where
other agents are planning against us.
o In previous topics, we studied search strategies associated with only a single agent that aims
to find a solution, often expressed as a sequence of actions.
o But there may be situations where more than one agent is searching for a solution in the same
search space; this situation usually occurs in game playing.
o An environment with more than one agent is termed a multi-agent environment, in which each
agent is an opponent of the others and plays against them. Each agent needs to consider the
actions of the other agents and the effect of those actions on its own performance.
o So, searches in which two or more players with conflicting goals try to explore the same
search space for a solution are called adversarial searches, often known as games.
o Games are modeled as a search problem with a heuristic evaluation function; these are the two
main factors which help to model and solve games in AI.
Types of Games in AI:

                        Deterministic                    Chance moves
Perfect information     Chess, Checkers, Go, Othello     Backgammon, Monopoly
Imperfect information   Battleship, blind tic-tac-toe    Bridge, Poker, Scrabble, Nuclear war
o Perfect information: A game with perfect information is one in which agents can see the
complete board. Agents have all the information about the game, and they can see each other's
moves. Examples: Chess, Checkers, Go, etc.
o Imperfect information: If agents do not have all the information about the game and are not
aware of what is going on, such games are called games with imperfect information,
such as blind tic-tac-toe, Battleship, Bridge, etc.
o Deterministic games: Deterministic games are those which follow a strict pattern and set of
rules, with no randomness associated with them. Examples: Chess, Checkers, Go,
tic-tac-toe, etc.
o Non-deterministic games: Non-deterministic games are those which have various unpredictable
events and a factor of chance or luck, introduced by either dice or cards. These are random,
and the response to each action is not fixed. Such games are also called stochastic games.
Example: Backgammon, Monopoly, Poker, etc.
Game tree:
A game tree is a tree where nodes of the tree are the game states and Edges of the tree are the moves by
players. Game tree involves initial state, actions function, and result Function.
Example: Tic-Tac-Toe game tree:
The following figure is showing part of the game-tree for tic-tac-toe game. Following are some key points
of the game:
o There are two players MAX and MIN.
o Players have an alternate turn and start with MAX.
o MAX maximizes the result of the game tree
o MIN minimizes the result.
Mini-Max Algorithm in Artificial Intelligence
o The Mini-max algorithm is a recursive, backtracking algorithm used in decision-making and
game theory. It provides an optimal move for the player, assuming that the opponent also plays
optimally.
o The Mini-Max algorithm uses recursion to search through the game tree.
o The Min-Max algorithm is mostly used for game playing in AI, such as Chess, Checkers,
tic-tac-toe, Go, and various other two-player games. The algorithm computes the minimax
decision for the current state.
o In this algorithm, two players play the game; one is called MAX and the other is called MIN.
o Both players fight so that the opponent gets the minimum benefit while they themselves get the
maximum benefit.
o Both players are opponents of each other: MAX selects the maximized value
and MIN selects the minimized value.
o The minimax algorithm performs a depth-first search to explore the complete
game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then
backtracks up the tree as the recursion unwinds.
Initial call:
Minimax(node, 3, true)
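A minimal Python sketch of the recursion (the depth parameter from the call above is omitted for brevity; the game tree is represented as nested lists whose leaves are terminal utility values, chosen here to match the worked example that follows):

```python
def minimax(node, maximizing):
    """Return the minimax value of a nested-list game tree."""
    if not isinstance(node, list):   # a leaf: its terminal utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    # MAX picks the largest child value, MIN the smallest.
    return max(values) if maximizing else min(values)

# Three-ply tree with the terminal values used in the example below.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))   # 4
```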
Working of Min-Max Algorithm:
o The working of the minimax algorithm can be easily described using an example. Below we have
taken an example of game-tree which is representing the two-player game.
o In this example, there are two players one is called Maximizer and other is called Minimizer.
o Maximizer will try to get the Maximum possible score, and Minimizer will try to get the minimum
possible score.
o This algorithm applies DFS, so in this game-tree, we have to go all the way through the leaves to
reach the terminal nodes.
o At the terminal node, the terminal values are given so we will compare those value and backtrack
the tree until the initial state occurs. Following are the main steps involved in solving the two-player
game tree:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility
function to obtain the utility values for the terminal states. In the tree diagram below, let A be
the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial
value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.
Step 2: Now we first find the utility values for the Maximizer. Its initial value is -∞, so each
terminal value is compared with the Maximizer's initial value to determine the higher node
values; the maximum among them is taken.
o For node D: max(-1, 4) = 4
o For node E: max(2, 6) = 6
o For node F: max(-3, -5) = -3
o For node G: max(0, 7) = 7
Step 3: In the next step it is the Minimizer's turn, so it compares all node values with +∞ and
finds the third-layer node values.
o For node B: min(4, 6) = 4
o For node C: min(-3, 7) = -3
Step 4: Now it is the Maximizer's turn again; it chooses the maximum of all node values and
finds the value of the root node. In this game tree there are only four layers, so we reach the
root node immediately, but in real games there will be far more layers.
o For node A: max(4, -3) = 4
That was the complete workflow of the minimax two-player game.
Properties of Mini-Max algorithm:
o Complete- The Min-Max algorithm is complete. It will definitely find a solution (if one
exists) in a finite search tree.
o Optimal- The Min-Max algorithm is optimal if both opponents play optimally.
o Time complexity- As it performs DFS on the game tree, the time complexity of the Min-Max
algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum
depth of the tree.
o Space Complexity- The space complexity of the Mini-max algorithm is, as for DFS, O(bm).
Limitation of the minimax Algorithm:
The main drawback of the minimax algorithm is that it gets really slow for complex games such as
Chess, Go, etc. Such games have a huge branching factor, and the player has many choices to
decide between. This limitation of the minimax algorithm can be addressed by alpha-beta pruning,
which we discuss in the next topic.
Alpha-beta pruning
Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization
technique for the minimax algorithm.
As we saw with the minimax search algorithm, the number of game states it has to
examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we
can cut it in half. There is a technique by which we can compute the correct minimax
decision without checking every node of the game tree; this technique is called pruning. It
involves two threshold parameters, Alpha and Beta, for future expansion, so it is called
alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only
the tree's leaves but also entire sub-trees.
The two parameters can be defined as:
Alpha: The best (highest-value) choice we have found so far at any point along the path of
the Maximizer. The initial value of alpha is -∞.
Beta: The best (lowest-value) choice we have found so far at any point along the path of
the Minimizer. The initial value of beta is +∞.
Alpha-beta pruning returns the same move as the standard minimax algorithm, but it
removes all the nodes which do not really affect the final decision but make the algorithm
slow. By pruning these nodes, it makes the algorithm fast.
The main condition required for alpha-beta pruning is: α >= β
Key points about alpha-beta pruning:
The Max player will only update the value of alpha.
The Min player will only update the value of beta.
While backtracking the tree, the node values are passed to upper nodes, not the values of
alpha and beta.
Only the alpha and beta values are passed down to child nodes.
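These key points can be sketched in Python (same nested-list tree representation as for minimax; the sample tree is an assumption chosen to mirror the worked example that follows):

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax value with alpha-beta pruning on a nested-list tree."""
    if not isinstance(node, list):        # a leaf: terminal utility
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)     # Max only updates alpha
            if alpha >= beta:             # pruning condition
                break                     # skip remaining children
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)           # Min only updates beta
        if alpha >= beta:
            break
    return value

tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree, float('-inf'), float('inf'), True))   # 3
```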
Working of Alpha-Beta Pruning:
Let's take an example of two-player search tree to understand the working of Alpha-beta
pruning
Step 1: The Max player starts with the first move from node A, where α = -∞ and β = +∞.
These values of alpha and beta are passed down to node B, where again α = -∞ and β = +∞,
and node B passes the same values to its child D.
Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is
compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and
the node value is also 3.
Step 3: The algorithm now backtracks to node B, where the value of β changes, as it is Min's
turn. β = +∞ is compared with the available successor node value, i.e. min(+∞, 3) = 3, so at
node B now α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and
the values α = -∞ and β = 3 are passed down.
Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of
alpha is compared with 5, so max(-∞, 5) = 5; hence at node E, α = 5 and β = 3. Since α >= β,
the right successor of E is pruned, the algorithm does not traverse it, and the value at node E
is 5.
Step 5: The algorithm again backtracks the tree, from node B to node A. At node A, the
value of alpha is changed to the maximum available value, 3, as max(-∞, 3) = 3, with β =
+∞. These two values are now passed to the right successor of A, which is node C.
At node C, α = 3 and β = +∞, and the same values are passed on to node F.
Step 6: At node F, the value of α is again compared, first with the left child, 0 (max(3, 0) = 3),
and then with the right child, 1 (max(3, 1) = 3). α remains 3, but the node value of F
becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of
beta is changed, compared with 1, so min(+∞, 1) = 1. Now at C, α = 3 and β = 1, which
again satisfies the condition α >= β, so the next child of C, which is G, is pruned, and the
algorithm does not compute the entire sub-tree of G.
Step 8: C now returns the value 1 to A, where the best value for A is max(3, 1) = 3. The final
game tree shows which nodes were computed and which were never computed. The optimal
value for the maximizer is 3 in this example.
Move Ordering in Alpha-Beta pruning:
The effectiveness of alpha-beta pruning is highly dependent on the order in which each node
is examined. Move order is an important aspect of alpha-beta pruning.
It can be of two types:
o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of
the leaves of the tree and works exactly like the minimax algorithm. It then also
consumes more time because of the alpha-beta bookkeeping; such an ordering is called
worst ordering. In this case, the best move occurs on the right side of the tree. The time
complexity for such an order is O(b^m).
o Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning
happens in the tree and the best moves occur on the left side of the tree. We apply DFS,
so it searches the left of the tree first and goes twice as deep as the minimax algorithm
in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).
Rules to find good ordering in alpha-beta pruning
Try the best move found at the shallowest node first.
Order the nodes in the tree such that the best nodes are checked first.
Use domain knowledge when finding the best move. E.g. for Chess, try this order: captures
first, then threats, then forward moves, then backward moves.
We can bookkeep the states, as states may repeat.
Genetic Algorithm
In Artificial Intelligence,
Genetic Algorithm is one of the heuristic algorithms.
They are used to solve optimization problems.
They are inspired by Darwin’s Theory of Evolution.
They are an intelligent exploitation of a random search.
Although randomized, Genetic Algorithms are by no means random.
Genetic Algorithm works in the following steps-
Step-1:
Randomly generate a set of possible solutions to a problem.
Represent each solution as a fixed length character string.
Step-2:
Using a fitness function, test each possible solution against the problem to evaluate them.
Step-3:
Keep the best solutions.
Use best solutions to generate new possible solutions.
Step-4:
Repeat the previous two steps until-
Either an acceptable solution is found
Or until the algorithm has completed its iterations through a given number of cycles /
generations.
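Steps 1-4 can be sketched on a toy problem; everything here (the OneMax objective of maximizing 1-bits, tournament selection, the population size, and the rates) is an illustrative assumption:

```python
import random

random.seed(0)                        # reproducible toy run
LENGTH, POP_SIZE, GENERATIONS = 12, 20, 40

def fitness(s):                       # Step-2: score a solution
    return sum(s)                     # OneMax: count of 1-bits

def select(pop):                      # tournament selection of size 3
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):                  # single-point crossover
    point = random.randint(1, LENGTH - 1)
    return a[:point] + b[point:]

def mutate(s, rate=0.05):             # flip each bit with small probability
    return [1 - bit if random.random() < rate else bit for bit in s]

# Step-1: random fixed-length bit strings as the initial population.
pop = [[random.randint(0, 1) for _ in range(LENGTH)]
       for _ in range(POP_SIZE)]
# Steps 3-4: breed a new generation repeatedly for a fixed number of cycles.
for _ in range(GENERATIONS):
    pop = [mutate(crossover(select(pop), select(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))                  # typically at or near LENGTH (all ones)
```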
The basic operators of Genetic Algorithm are-
1. Selection (Reproduction)
It is the first operator applied to the population.
It selects chromosomes from the parent population to cross over and produce
offspring.
It is based on Darwin's evolutionary theory of "survival of the fittest".
There are many techniques for reproduction or selection operator such as-
Tournament selection
Ranked position selection
Steady state selection etc.
2. Cross Over
The population gets enriched with better individuals after the reproduction phase.
The crossover operator is then applied to the mating pool with the hope of creating better strings.
(Reproduction only makes clones of good strings; it does not create new ones.)
By recombining good individuals, crossover is likely to create even better individuals.
3. Mutation
Mutation is a background operator.
Mutation of a bit means flipping it: changing 0 to 1 or vice versa.
After crossover, the mutation operator subjects the strings to mutation.
It facilitates a sudden change in a gene within a chromosome.
Thus, it allows the algorithm to look for solutions far away from the current ones.
It helps keep the search algorithm from being trapped at a local optimum.
Its purpose is to prevent premature convergence and maintain diversity within the
population.
The following flowchart represents how a genetic algorithm works-
Genetic Algorithm advantages
Genetic Algorithms are more robust than conventional AI techniques.
They do not break easily, unlike older AI systems.
They do not break even in the presence of reasonable noise or when the inputs
change slightly.
When searching a multimodal or large state space, Genetic Algorithms have
significant benefits over other typical search optimization techniques.