2. Contents
• Concept of Heuristics
• Key Takeaways in Heuristics
• Uses of Heuristics
• Demerits of using Heuristics Approach
• Hill Climbing Algorithm
• A* Algorithm
• Best First Search Algorithm
• AO* Algorithm
• Mini-Max Game Playing Algorithm
• Alpha-Beta Cutoff
3. Concept of Heuristics
• Heuristics are a problem-solving method that
uses shortcuts to produce good-enough solutions
given a limited time frame or deadline.
• Heuristics are a flexible technique for making quick
decisions, particularly when working with
complex data.
• Decisions made using a heuristic approach are
not necessarily optimal. “Heuristic” is derived
from a Greek word meaning “to discover”.
4. Key Takeaways
• Heuristics are methods for solving problems
quickly that deliver a result good enough to be
useful given time constraints.
• Investors and financial professionals use a
heuristic approach to speed up analysis and
investment decisions.
• Heuristics can lead to poor decision making based
on a limited data set, but the speed of decisions
can sometimes make up for the disadvantages.
• In AI, heuristics help find a solution within a
limited time frame, though it may not be the best solution.
5. Uses of Heuristics
• Heuristics facilitate timely decisions.
• Analysts in every industry use rules of thumb
such as intelligent guesswork, trial and error,
process of elimination, past formulas and the
analysis of historical data to solve a problem.
• Heuristic methods make decision making
simpler and faster through shortcuts and
good-enough calculations.
6. Demerits of using Heuristic Approach
• There are trade-offs with the use of heuristics
that render the approach prone to bias and
errors in judgment.
• The user’s final decision may not be the
optimal or best solution; the decision made
may be inaccurate, and the data
selected might be insufficient, leading to an
imprecise solution to the problem.
7. Hill Climbing Algorithm
• Hill Climbing is a heuristic search used for
mathematical optimization problems in the
field of Artificial Intelligence.
• Types of Hill Climbing
– Simple Hill Climbing Algorithm
– Steepest Ascent Hill Climbing Algorithm
– Stochastic Hill Climbing
9. Simple Hill Climbing Algorithm
• Step 1 : Evaluate the initial state. If it is a goal state then stop and
return success. Otherwise, make initial state as current state.
• Step 2 : Loop until the solution state is found or there are no new
operators present which can be applied to the current state.
• a) Select an operator that has not yet been applied to the
current state and apply it to produce a new state.
• b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current
state and proceed further.
iii. If it is not better than the current state, then continue in the
loop until a solution is found.
• Step 3 : Exit.
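The steps above can be sketched in a few lines of Python. The integer state space, the `neighbors` function, and the score f(x) = -(x - 3)^2 below are illustrative assumptions, not part of the slides:

```python
def simple_hill_climbing(initial, neighbors, score):
    """Simple hill climbing: move to the FIRST neighbour that improves on
    the current state (Step 2b-ii); stop when no move improves (Step 2b-iii)."""
    current = initial
    while True:
        improved = False
        for candidate in neighbors(current):       # Step 2a: apply an untried operator
            if score(candidate) > score(current):  # better than the current state
                current = candidate
                improved = True
                break
        if not improved:                           # no operator improves: Step 3, exit
            return current

# Assumed toy problem: maximise f(x) = -(x - 3)^2 by stepping x by +/- 1.
f = lambda x: -(x - 3) ** 2
peak = simple_hill_climbing(0, lambda x: [x - 1, x + 1], f)
print(peak)  # 3
```

Because the search stops as soon as no neighbour improves, it returns a local maximum, which for this toy score is also the global one.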
11. A * Algorithm
• Informed Search Technique.
• It is an admissible algorithm: as long as the
heuristic never overestimates the true cost, it is
guaranteed to find an optimal solution.
• A* (pronounced "A star") is a
computer algorithm that is widely used in
path finding and tree and graph traversal. The
algorithm efficiently plots a walkable path
between multiple nodes, or points, on the
graph.
12. A * Algorithm Function
• A* algorithm expands paths that are already less
expensive by using this function:
f(n)=g(n)+h(n),
where
• f(n) = total estimated cost of path through node n
• g(n) = actual cost incurred so far to reach node n
from the starting node
• h(n) = estimated cost (Heuristics) from node n to
goal node. This is the heuristic part of the cost
function, so it is like a calculated guess.
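As a quick numeric check of the function, using values from the worked example that follows (cost g = 4 to reach node c, estimate h(c) = 18):

```python
# f(n) = g(n) + h(n): actual cost so far plus the heuristic estimate to the goal
g = 4    # actual cost incurred from the start node a to node c
h = 18   # estimated (heuristic) cost from node c to the goal
f = g + h
print(f)  # 22
```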
14. Solution
• f(a) = 0+21 = 21
• f(ab) = 9+14 = 23
• f(ac) = 4+18 = 22
• f(ad) = 7+18 = 25
• Now, of the three paths, “ac” has the least
cost, so we will explore this path first.
• f(ace) = 4+17+5 = 26
• f(acf) = 4+12+8 = 24
• Now f(ab) has less cost than f(acf), so we need
to explore f(ab) next.
15. Solution
• f(abe) = 9+11+5 = 25
• Now the cost of f(acf) is less than that of f(abe), so we will
continue to explore f(acf).
• f(acfz) = 4+12+9+0 = 25
• Now f(acfz) has the same cost as f(abe), so we will
explore f(abe).
• f(abez) = 9+11+5+0 = 25
• So either of the two routes can be chosen as the
shortest route from a to z.
• Path: a > c > f > z or a > b > e > z
16. A * Algorithm
• The implementation of the A* Algorithm involves
maintaining two lists: OPEN and CLOSED.
• OPEN contains those nodes that have been
evaluated by the heuristic function but have
not been expanded into successors yet.
• CLOSED contains those nodes that have
already been visited.
17. A * Algorithm
• Step1: Define a list OPEN. Initially, OPEN consists solely
of a single node, the start node S.
• Step 2: If the list is empty, return failure and exit.
• Step 3: Remove node n with the smallest value of f(n)
from OPEN and move it to list CLOSED. If node n is a
goal state, return success and exit.
• Step 4: Expand node n.
• Step 5: If any successor to n is the goal node, return
success and the solution by tracing the path from goal
node to S. Otherwise, go to Step 6.
• Step 6: For each successor node, apply the evaluation
function f(n). If the node is not already in
either list, add it to OPEN.
• Step 7: Go back to Step 2.
18. Applications of A * Algorithm
• The A* Algorithm is one of the best-known and
most popular techniques for path finding and
graph traversal.
• A lot of games and web-based maps use this
algorithm for finding the shortest cost path
efficiently.
• It is essentially a best first search algorithm.
22. Solution
• f(s) = 17
• f(sa) = 6+10 = 16
• f(sb) = 5+13 = 18
• f(sc) = 10+4 = 14
• f(scd) = 10+6+2 = 18
• f(sae) = 6+6+4 = 16
• f(saef) = 6+6+4+1 = 17
• f(saefg) = 6+6+4+3 = 19
• f(sbe) = 5+6+4 = 15
• f(sbd) = 5+7+2 = 14
• f(sbef) = 5+6+4+1 = 16
• f(sbdf) = 5+7+6+1 = 19
• f(scdf) = 10+6+6+1 = 23
• f(sbefg) = 5+6+4+3 = 18
So the path with the least cost is
S > B > E > F > G, with a cost of 18.
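This solution can be reproduced with a short Python A* implementation. The directed graph and h-values below are reconstructed from the costs in the expansions above (an assumption: only edges that appear in the worked solution are included). A best-cost map stands in for a plain CLOSED list, so a cheaper path to an already-expanded node is not discarded:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the OPEN entry with the smallest f = g + h."""
    open_list = [(h[start], 0, start, [start])]    # entries: (f, g, node, path)
    best_g = {start: 0}                            # cheapest known g per node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if g > best_g.get(node, float("inf")):     # stale entry, a cheaper path exists
            continue
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(open_list, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Graph and heuristic values reconstructed from the worked example above
graph = {"S": [("A", 6), ("B", 5), ("C", 10)],
         "A": [("E", 6)],
         "B": [("E", 6), ("D", 7)],
         "C": [("D", 6)],
         "D": [("F", 6)],
         "E": [("F", 4)],
         "F": [("G", 3)]}
h = {"S": 17, "A": 10, "B": 13, "C": 4, "D": 2, "E": 4, "F": 1, "G": 0}

path, cost = a_star(graph, h, "S", "G")
print(path, cost)  # ['S', 'B', 'E', 'F', 'G'] 18
```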
24. Best First Search Algorithm
• Best first search uses the concept of a
“Priority Queue” and heuristic search. It is a
search algorithm that works on a specific rule.
The aim is to reach the goal from the initial
state via the least cost path.
• To search the graph space, the Best First
Search method uses two lists for tracking the
traversal. An ‘OPEN’ list which keeps track of
the current ‘immediate’ nodes available for
traversal and ‘CLOSED’ list that keeps track of
the nodes already traversed.
25. Priority Queue
• A priority queue is a special type of queue in
which each element is associated with a
priority and is served according to its priority.
• If elements with the same priority occur, they
are served according to their order in the
queue.
• In the case of Best First Search (BFS), the priority is
the heuristic value h(n) of each node: the
lower the heuristic value, the higher the priority
of that node.
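In Python, the standard-library `heapq` module gives exactly this behaviour when nodes are pushed as (h(n), node) pairs; the node names and h-values here are a small assumed example:

```python
import heapq

# Min-heap as a priority queue: a lower h(n) means a higher priority,
# so the node with the smallest heuristic value is served first.
open_list = []
heapq.heappush(open_list, (7, "A"))  # entries are (h(n), node)
heapq.heappush(open_list, (3, "B"))
heapq.heappush(open_list, (5, "C"))

print(heapq.heappop(open_list))  # (3, 'B') -- smallest h(n) comes out first
```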
27. BFS
• Best First Search (BFS) is an instance of graph search
algorithm in which a node is selected for expansion
based on evaluation function f(n).
• Traditionally, the node with the lowest evaluation is
selected for expansion, because the evaluation
measures distance/cost to the goal.
• Best first search can be implemented within a general
search framework via a priority queue, a data
structure that maintains the fringe in ascending
order of f(n) values.
• This search algorithm combines depth
first search and breadth first search. Best
first search is often referred to as a greedy
algorithm because it pursues the most
desirable path as soon as its heuristic weight becomes
the most desirable.
28. Best First Search Algorithm
1. Create 2 empty lists: OPEN and CLOSED
2. Start from the initial node (say N) and put it in the
‘ordered’ OPEN list
3. Repeat the next steps until GOAL node is reached
i. If OPEN list is empty, then EXIT the loop returning ‘False’
ii. Select the first/top node (say N) in the OPEN list and move it
to the CLOSED list. Also capture the information of the parent
node
iii. If N is a GOAL node, then move the node to the Closed list
and exit the loop returning ‘True’. The solution can be found
by backtracking the path
iv. If N is not the GOAL node, expand node N to generate the
‘immediate’ next nodes linked to node N and add all those to
the OPEN list
v. Reorder the nodes in the OPEN list in ascending order
according to an evaluation function f(n). Here f(n) = h(n)
which is the heuristic value of the nodes.
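The steps above can be sketched as a runnable Python function, using a heap to keep OPEN ordered by f(n) = h(n) and a parent map for backtracking. The example graph and h-values are assumptions chosen to mirror the S-to-G walkthrough on slide 32:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: OPEN is a priority queue ordered by h(n),
    CLOSED records already-expanded nodes, parents enable backtracking."""
    open_list = [(h[start], start)]
    closed, parent = set(), {start: None}
    while open_list:                         # step 3i: empty OPEN means failure
        _, node = heapq.heappop(open_list)   # step 3ii: node with the lowest h(n)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                     # step 3iii: backtrack via parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in graph.get(node, []):      # step 3iv: expand immediate successors
            if nbr not in closed and nbr not in parent:
                parent[nbr] = node
                heapq.heappush(open_list, (h[nbr], nbr))
    return None

# Assumed graph and heuristic values, mirroring the slide-32 walkthrough
graph = {"S": ["A", "B"], "B": ["E", "F"], "F": ["G", "I"]}
h = {"S": 10, "A": 8, "B": 4, "E": 6, "F": 3, "I": 5, "G": 0}

print(best_first_search(graph, h, "S", "G"))  # ['S', 'B', 'F', 'G']
```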
29. Time Complexity
• This algorithm traverses the most promising path
in the queue first. The time complexity of the
algorithm, dominated by the priority-queue
operations, is O(n log n).
30. Variants of BFS
• The two variants of Best First Search are the Greedy
Search Algorithm and the A* Algorithm. The Greedy
BFS algorithm selects the path which appears to
be the best; it can be seen as a combination
of depth-first search and breadth-first search.
Greedy BFS makes use of a heuristic function
and allows us to take advantage of both
algorithms.
• The only difference between Greedy BFS and A*
BFS is in the evaluation function. For Greedy BFS
the evaluation function is f(n) = h(n) while for A*
the evaluation function is f(n) = g(n) + h(n).
32. Solution
• Initialization: Closed List [], Open List []
• Step 1:
• Open List [S]
• Closed List []
• Step 2:
• Open List [B, A]
• Closed List [S]
• Step 3:
• Open List [F, E, A]
• Closed List [S, B]
• Step 4:
• Open List [G, E, I, A]
• Closed List [S, B, F]
• Step 5:
• Open List [E, I, A]
• Closed List [S, B, F, G] So backtrack to the Source Node.
Start Node: S
Goal Node: G
Solution Path: S>B>F>G
33. Advantages
• It is more efficient than BFS and DFS.
• The time complexity of Best first search is much less
than that of Breadth first search and Depth first search.
• Best first search allows us to switch between
paths, gaining the benefits of both breadth first
and depth first search: depth first search is
good because a solution can be found without
computing all nodes, and breadth first search is
good because it does not get trapped in dead
ends.
39. Games
• A game consists of a set of two or more
players, a set of moves for the players, and a
specification of payoffs (outcomes) for each
combination of strategies.
• Different types of games are:
– TWO-PERSON ZERO-SUM GAMES (Chess, Tic-Tac-Toe)
– GAMES OF CHANCE (Poker, Bridge, etc.)
– MULTI-PLAYER GAMES (all modern games)
40. Game Playing in AI
• Game Playing is an important domain of artificial
intelligence. Games don’t require much
knowledge; the only knowledge we need to
provide is the rules, legal moves and the
conditions of winning or losing the game.
• Both players try to win the game. So, both of
them try to make the best move possible at each
turn. Searching techniques like BFS (Breadth First
Search) are not suitable for this, as the branching
factor is very high, so searching would take a lot of
time. We therefore need a search procedure that
improves the search, and DFS is used for
searching the game tree.
44. Two Algorithms in Game Playing
• Mini-max Algorithm
• Alpha-Beta Pruning
45. Mini-max Algorithm in AI
• This algorithm is for two-player games, so we call
the first player PLAYER1 (Max) and the second
player PLAYER2 (Min).
• The value of each node is backed-up from its
children. For PLAYER1 (Max) the backed-up value
is the maximum value of its children and for
PLAYER2 (Min) the backed-up value is the
minimum value of its children.
• It provides the most promising move to PLAYER1
(Max), assuming that PLAYER2 (Min)
makes the best move. It is a recursive algorithm,
as the same procedure occurs at each level.
46. Mini-max Algorithm
• Mini-max algorithm is a recursive or backtracking algorithm which is
used in decision-making and game theory. It provides an optimal move
for the player assuming that opponent is also playing optimally.
• The Min-Max algorithm is mostly used for game playing in AI. Some
examples are Chess, Checkers, Tic-tac-toe, and various other two-player
games. This algorithm computes the Mini-max decision for the current
state.
• In this algorithm two players play the game, one is called MAX and
other is called MIN.
• Each player fights so that the opponent gets the minimum
benefit while they themselves get the maximum benefit.
• Both Players of the game are opponent of each other, where MAX will
select the maximized value and MIN will select the minimized value.
• The Mini-max algorithm performs a depth-first search (DFS) algorithm
for the exploration of the complete game tree.
• The Mini-max algorithm proceeds all the way down to the terminal
nodes of the tree, then backtracks up the tree as the recursion unwinds.
47. Important points for Mini-max
Algorithm
• Backtracking Algorithm
• Best Move Strategy for both players
• Max player will try to maximize the utility
(payoff)
• Min player will try to minimize the utility of the
opponent.
48. Algorithm / Pseudo-code
function minimax(node, depth, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                 // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, false)
            maxEva = max(maxEva, eva)        // keep the maximum of the values
        return maxEva
    else                                     // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, true)
            minEva = min(minEva, eva)        // keep the minimum of the values
        return minEva
50. Properties of Minimax Algorithm
• Complete- Min-Max algorithm is Complete. It will
definitely find a solution (if exist), in the finite
search tree.
• Optimal- Min-Max algorithm is optimal if both
opponents are playing optimally.
• Time complexity- As it performs DFS for the
game-tree, so the time complexity of Min-Max
algorithm is O(b^d), where b is branching factor
of the game-tree, and d is the maximum depth of
the tree.
• Space Complexity- Like DFS, the Mini-max algorithm
only needs to store the current path and the siblings
of the nodes on it, so its space complexity is O(b×d).
51. Limitation
• The main drawback of the Mini-max algorithm is
that it gets really slow for complex games such as
Chess, Checkers, etc. These games have a
huge branching factor (on average 35 choices
for chess), so the player has lots of choices to
decide between. The total number of positions searched
for both players in a game of chess is about 35^100, as there are
almost 100 moves made between the two players.
• This algorithm explores all the nodes before
making its move, and hence it is very inefficient.
• This limitation of the Mini-max algorithm can be
mitigated by alpha-beta pruning, which is
discussed next.
55. Alpha-Beta Pruning
• Alpha-Beta pruning is not actually a new
algorithm, rather an optimization technique for
Mini-max algorithm.
• It reduces the computation time by a huge factor.
This allows us to search much faster and even go
into deeper levels in the game tree.
• It cuts off branches in the game tree which need
not be searched because a better move is
already available. It is called Alpha-Beta
pruning because it passes two extra parameters
to the Mini-max function, namely alpha (α) and
beta (β).
56. Cont…..
• As we saw with the Mini-max search
algorithm, the number of game states it has
to examine is exponential in the depth of the tree.
• Hence there is a technique by which we can
compute the correct Mini-max decision without
checking every node of the game tree, and this
technique is called pruning.
• It involves two threshold parameters, alpha and
beta, for future expansion, so it is called alpha-
beta pruning. It is also called the Alpha-Beta Cut-
off Algorithm.
57. Condition for Alpha-beta pruning:
• The main condition required for alpha-
beta pruning is:
α >= β
• When this condition is satisfied at a node, the
rest of that node's branch is pruned.
58. Key Points
• The Max player will only update the value of
alpha.
• The Min player will only update the value of
beta.
• While backtracking the tree, the backed-up node
values are passed to upper nodes, not the
values of alpha and beta.
• Alpha and beta values are only passed down to
the child nodes.
59. Pseudo-Code
function minimax(node, depth, alpha, beta, maximizingPlayer) is
    if depth == 0 or node is a terminal node then
        return static evaluation of node
    if maximizingPlayer then                 // for Maximizer player
        maxEva = -infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, false)
            maxEva = max(maxEva, eva)
            alpha = max(alpha, maxEva)
            if beta <= alpha then
                break                        // beta cut-off
        return maxEva
    else                                     // for Minimizer player
        minEva = +infinity
        for each child of node do
            eva = minimax(child, depth-1, alpha, beta, true)
            minEva = min(minEva, eva)
            beta = min(beta, eva)
            if beta <= alpha then
                break                        // alpha cut-off
        return minEva
61. Solution using Mini-max Method
• Let us consider the root node (Max) as A.
• And then name the rest of the nodes
accordingly. So the nodes will be named up to
O.
• Solving the above problem using the Mini-max
algorithm results in the following
solution:
62. Time Complexity using Mini-max
Algorithm
• b = 2
• d = 4
• Time Complexity is O(b^d)
O(2^4) = 16
64. Time Complexity using Alpha-Beta
Pruning
• Compared to the Mini-max algorithm, the time
complexity reduces significantly: in the best case
the exponent is roughly halved.
• Time complexity is O(b^(d/2)) in the best-case
scenario.
• In the worst-case scenario it remains O(b^d).