Vellore Institute of Technology, Bhopal
Fundamentals in AI & ML
(Department of Computer Science & Engineering)
Ankit Shrivastava
Vellore Institute of Technology (VIT), Bhopal, India
Module-2
Problem Solving Methods
Search Strategies:
 A search strategy is an organised structure of key terms used to search a database. The search strategy combines the key concepts of your search question in order to retrieve accurate results.
 Search techniques are universal problem-solving methods.
 A search strategy is also called a “search algorithm”, which solves a search problem. Search algorithms work to retrieve information stored within some data structure, or computed in the search space of a problem domain, with either discrete or continuous values.
 Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use atomic representations.
 The appropriate search algorithm often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes.
Types of Search Algorithms:
Search algorithms are broadly divided into two types: uninformed (blind) search and informed (heuristic) search.
Uninformed Search/ Blind Search:
 Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way. Uninformed search algorithms have no information about the state or search space beyond how to traverse the tree, which is why they are also called blind search.
 Uninformed search uses no domain knowledge, such as the closeness or location of the goal.
 It operates in a brute-force way, since it only includes information about how to traverse the tree and how to identify leaf and goal nodes.
 Because the search tree is explored without any information about the search space, such as the initial state or a goal test, it simply examines each node until it reaches the goal.
UNINFORMED SEARCH Algorithms:
Depth First Search (DFS):
 DFS is an uninformed search technique.
 It works on present knowledge only; no heuristic guidance is used.
 Depth First Search (DFS) starts with the initial node of the graph, then goes deeper and deeper until it finds a goal node or a node having no children.
 From a dead end, DFS backtracks to the most recent node that has not yet been completely explored.
 A stack data structure is used in DFS.
 DFS works in LIFO (Last In, First Out) manner.
 It works in a brute-force way (blind search).
 It is non-optimal: the first solution found need not be the best one.
 It always expands the deepest node first.
Algorithm:
1. Push the root node onto the stack.
2. While the stack is not empty:
   (a) Pop a node.
       i. If the node is the goal node, stop.
       ii. Otherwise, push all children of the node onto the stack.
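The following is a minimal Python sketch of the stack-based DFS described above; the adjacency-list format, the node names, and the dfs function itself are illustrative assumptions, not part of the slides.

def dfs(graph, start, goal):
    # graph: dict mapping node -> list of child nodes
    stack = [start]              # LIFO frontier
    visited = set()
    while stack:
        node = stack.pop()       # take the most recently added node
        if node == goal:
            return True          # goal found
        if node in visited:
            continue
        visited.add(node)
        stack.extend(graph.get(node, []))   # children go on top of the stack
    return False                 # goal not reachable

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G']}
print(dfs(graph, 'A', 'G'))      # True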
Time Complexity:
 Time complexity in Data Structures = O(V + E), where V = no. of vertices and E = no. of edges.
 Time complexity in Artificial Intelligence = O(b^d), where b = branching factor and d = depth of the search tree.
Advantages:
1. It requires less memory.
2. It requires less time to reach the goal node if the traversal takes the right path. For example, if the goal node (G) lies along the first branch explored from the starting node (A), DFS reaches it quickly; but if the goal node (H) lies on a later branch, it takes much more time.
Disadvantages:
1. There is no guarantee of finding a solution.
2. It can get stuck in an infinite loop (e.g., on cyclic or infinitely deep search spaces).
H.W.: DFS Example
Breadth First Search (BFS):
 The Breadth First Search (BFS) algorithm traverses a graph in a breadth-ward motion and uses a queue to remember the next vertex to visit when a dead end occurs in any iteration.
 It explores all the nodes at a given depth before proceeding to the next level.
 It is implemented with a Queue data structure (FIFO manner).
 It gives an optimal solution when all step costs are equal, since it finds the shallowest goal.
 BFS comes under the uninformed search technique, also called blind search.
 Uninformed means no domain-specific knowledge is used.
Algorithm:
1. Enter the starting node on the queue.
2. If the queue is empty, then return fail and stop.
3. If the first element on the queue is the goal node, then return success and stop.
4. Else, remove and expand the first element from the queue and place its children at the end of the queue.
5. Go to step 2.
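A minimal Python sketch of the queue-based BFS above, under the same illustrative graph representation as the DFS sketch:

from collections import deque

def bfs(graph, start, goal):
    queue = deque([start])       # FIFO frontier
    visited = {start}
    while queue:
        node = queue.popleft()   # take the oldest node: shallowest first
        if node == goal:
            return True
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                queue.append(child)   # children join the back of the queue
    return False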
Time Complexity:
 Time complexity in Data Structures = O(V + E), where V = no. of vertices and E = no. of edges.
 Time complexity in Artificial Intelligence = O(b^d), where b = branching factor and d = depth of the shallowest goal.
Advantages:
1. It finds a solution if one exists (BFS is complete).
2. It will find the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
1. It requires lots of memory, since every node at the current level must be stored to generate the next level.
2. It needs lots of time if the solution is far from the root node.
H.W.: BFS Example
Uniform Cost Search Algorithm:
 It is used for traversing weighted trees/graphs.
 The goal is to find a path to the goal node with the lowest total cost.
 Node expansion is based on path cost.
 It also uses backtracking.
 A priority queue (ordered by path cost) is used for implementation.
Advantage:
1. It gives an optimal solution, because at every step the state/path with the least cost is chosen.
Disadvantage:
1. It does not care about the number of steps involved in searching; it is only concerned with path cost. Because of this, the algorithm may get stuck in an infinite loop (e.g., along zero-cost edges).
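A minimal Python sketch of uniform cost search using a priority queue; the weighted adjacency-list format (node -> list of (neighbour, cost) pairs) is an illustrative assumption:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start)]          # priority queue ordered by path cost
    best_cost = {start: 0}
    while frontier:
        cost, node = heapq.heappop(frontier)   # cheapest path so far
        if node == goal:
            return cost
        for neighbour, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(neighbour, float('inf')):
                best_cost[neighbour] = new_cost
                heapq.heappush(frontier, (new_cost, neighbour))
    return None                      # goal not reachable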
INFORMED SEARCH Algorithms:
Greedy Search:
 A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage.
 A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It doesn't worry about whether the current best result will bring the overall optimal result.
 In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.
 It gives a feasible solution.
 A problem that requires either a minimum or a maximum result is known as an optimization problem. The greedy method is one of the strategies used for solving optimization problems.
 It follows the locally optimal choice at each stage with the intent of finding the global optimum. Let's understand this through an example.
Characteristics of Greedy Method:
• To construct the solution in an optimal way, this algorithm creates two sets: one set contains all the chosen items, and the other set contains the rejected items.
• A greedy algorithm makes good local choices in the hope that the resulting solution will be either feasible or optimal.
Applications of Greedy Algorithm:
 It is used in finding the shortest path.
 It is used to find the minimum spanning tree, using Prim's algorithm or Kruskal's algorithm.
 It is used in job sequencing with a deadline.
 This algorithm is also used to solve the fractional knapsack problem.
Pseudocode of Greedy Algorithm:
Algorithm Greedy(a, n)
{
    solution := 0;
    for i = 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
Best First Search:
 Best first search uses the concepts of a priority queue and heuristic search. It is a search algorithm that works on a specific rule. The aim is to reach the goal from the initial state via the shortest path.
 The Best First Search algorithm in artificial intelligence is used for finding a short path from a given starting node to a goal node in a graph. The algorithm works by expanding the nodes of the graph in order of increasing heuristic value (estimated distance to the goal) until the goal node is reached.
Algorithm:
Let ‘OPEN’ be a priority queue containing the initial state.
Loop:
    If OPEN is empty, return failure.
    Node <- Remove-First(OPEN)
    If Node is a goal,
        then return the path from the initial state to Node;
    else generate all successors of Node and put the newly
        generated nodes into OPEN according to their f values.
End Loop
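A minimal Python sketch of greedy best-first search; the heuristic table h (node -> estimated distance to the goal) and the graph format are illustrative assumptions:

import heapq

def best_first_search(graph, start, goal, h):
    open_list = [(h[start], start)]   # priority queue ordered by h(n)
    visited = set()
    while open_list:
        _, node = heapq.heappop(open_list)   # most promising node first
        if node == goal:
            return True
        if node in visited:
            continue
        visited.add(node)
        for child in graph.get(node, []):
            if child not in visited:
                heapq.heappush(open_list, (h[child], child))
    return False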
Knapsack Problem:
 The fractional knapsack problem is defined as: “Given a set of items, each having some weight and value/profit associated with it, find the set of items (or fractions of items) such that the total weight is less than or equal to a given limit (the size of the knapsack) and the total value/profit earned is as large as possible.”
 This problem can be solved with the help of two techniques:
• Brute-force approach: The brute-force approach tries all the possible solutions with all the different fractions, but it is a time-consuming approach.
• Greedy approach: In the greedy approach, we calculate the ratio of profit/weight and select items accordingly. The item with the highest ratio is selected first.
Knapsack Algorithm:
{
    for i = 1 to n do
        compute p[i]/w[i];
    sort objects in non-increasing order of p/w;
    for i = 1 to n (from the sorted list) do
        if (m > 0 && w[i] <= m)
        {
            m = m - w[i];
            p = p + p[i];
        }
        else break;
    if (m > 0)
        p = p + p[i] * (m / w[i]);
}
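A minimal runnable Python version of this greedy fractional-knapsack method; the sample weights, profits, and capacity are illustrative:

def fractional_knapsack(weights, profits, capacity):
    # sort items by profit/weight ratio, highest first
    items = sorted(zip(weights, profits),
                   key=lambda wp: wp[1] / wp[0], reverse=True)
    total = 0.0
    for w, p in items:
        if capacity >= w:        # the whole item fits
            capacity -= w
            total += p
        else:                    # take only the fraction that fits
            total += p * (capacity / w)
            break
    return total

print(fractional_knapsack([10, 20, 30], [60, 100, 120], 50))   # 240.0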
Job Scheduling Problem:
 Job scheduling is the problem of scheduling jobs from a set of N jobs on a single processor so as to maximize profit. Consider N jobs, each taking unit time to execute. Each job has some profit and a deadline associated with it.
 The sequencing of jobs on a single processor with deadline constraints is called Job Sequencing with Deadlines.
The greedy algorithm described below always gives
an optimal solution to the job sequencing problem-
Step-01:
• Sort all the given jobs in decreasing order of their
profit.
Step-02:
• Check the value of maximum deadline.
• Draw a Gantt chart where maximum time on
Gantt chart is the value of maximum deadline.
Step-03:
• Pick up the jobs one by one.
• Put the job on Gantt chart as far as possible from
0 ensuring that the job gets completed before its
deadline.
Q. Given the jobs, their deadlines, and associated profits as shown:

Jobs       J1    J2    J3    J4    J5    J6
Deadlines   5     3     3     2     4     2
Profits   200   180   190   300   120   100

Answer the following questions:
1. Write the optimal schedule that gives the maximum profit.
2. Are all the jobs completed in the optimal schedule?
3. What is the maximum earned profit?
Soln.-
Step-1: Sort all the given jobs in decreasing order of their profit:

Jobs       J4    J1    J3    J2    J5    J6
Deadlines   2     5     3     3     4     2
Profits   300   200   190   180   120   100

Step-2: Gantt chart (unit time slots up to the maximum deadline, 5):

Slot:  1    2    3    4    5
Job:   J2   J4   J3   J5   J1
1. The optimal schedule is-
J2, J4, J3, J5, J1
This is the required order in which the jobs must be completed in order to obtain the maximum profit.
2. Not all the jobs are completed in the optimal schedule, because job J6 could not be completed within its deadline.
3. Maximum earned profit = 180 + 300 + 190 + 120 + 200 = 990 units.
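A minimal Python sketch of the greedy procedure above, run on the slide's data; the function name and tuple format are illustrative:

def job_sequencing(jobs):
    # sort jobs by profit, highest first
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)
    max_deadline = max(deadline for _, deadline, _ in jobs)
    slots = [None] * (max_deadline + 1)   # slot t covers time (t-1, t]
    profit = 0
    for name, deadline, p in jobs:
        # place the job as late as possible before its deadline
        for t in range(deadline, 0, -1):
            if slots[t] is None:
                slots[t] = name
                profit += p
                break
    return slots[1:], profit

jobs = [('J1', 5, 200), ('J2', 3, 180), ('J3', 3, 190),
        ('J4', 2, 300), ('J5', 4, 120), ('J6', 2, 100)]
print(job_sequencing(jobs))   # (['J2', 'J4', 'J3', 'J5', 'J1'], 990)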
Prim’s Algorithm:
 Prim's algorithm is a greedy algorithm that is used to find the minimum spanning tree of a graph. Prim's algorithm finds the subset of edges that includes every vertex of the graph such that the sum of the weights of the edges is minimized.
 Prim's algorithm starts from a single node and, at every step, explores all the adjacent nodes along with their connecting edges.
Working of Prim’s Algorithm:
Prim's algorithm is a greedy algorithm that starts from one vertex and continues to add the edges with the smallest weight until the goal is reached. The steps to implement Prim's algorithm are as follows:
• First, initialize the MST with a randomly chosen vertex.
• Now, find all the edges that connect the tree built so far with the new vertices. From the edges found, select the minimum-weight edge and add it to the tree.
• Repeat step 2 until the minimum spanning tree is formed. A sketch follows below.
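A minimal Python sketch of Prim's algorithm over an undirected weighted graph stored as a dict of node -> list of (neighbour, weight) pairs; this representation is an illustrative assumption:

import heapq

def prim_mst(graph, start):
    visited = {start}
    edges = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(edges)                  # candidate edges leaving the tree
    mst, total = [], 0
    while edges and len(visited) < len(graph):
        w, u, v = heapq.heappop(edges)    # cheapest edge crossing the cut
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for nxt, nw in graph[v]:
            if nxt not in visited:
                heapq.heappush(edges, (nw, v, nxt))
    return mst, total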
Spanning Tree:
A spanning tree is a subgraph of an undirected connected graph that is itself a tree and includes all of the graph's vertices.
Minimum Spanning Tree:
A minimum spanning tree is the spanning tree in which the sum of the weights of the edges is minimum. The weight of a spanning tree is the sum of the weights given to its edges.
Applications of Prim’s Algorithm:
• Prim's algorithm can be used in network design.
• It can be used to connect networks at minimum total cost, avoiding redundant cycles.
• It can also be used to lay down electrical wiring cables.
Difference between Prim’s algorithm and Kruskal’s algorithm:

Prim’s Algorithm:
• The tree that we are making or growing always remains connected.
• Prim’s algorithm grows a solution from a random vertex by adding the next cheapest vertex to the existing tree.
• Prim’s algorithm is faster for dense graphs.

Kruskal’s Algorithm:
• The structure that we are making or growing usually remains disconnected (a forest) until the end.
• Kruskal’s algorithm grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.
• Kruskal’s algorithm is faster for sparse graphs.
Kruskal’s Algorithm:
 Kruskal's algorithm finds the minimum-cost spanning tree using the greedy approach. This algorithm treats the graph as a forest and every node in it as an individual tree. A tree connects to another if and only if the connecting edge has the least cost among all available options and does not violate the MST properties.
 Kruskal's algorithm is used to find the minimum spanning tree of a connected, undirected, weighted graph. The main target of the algorithm is to find the subset of edges by which we can reach every vertex of the graph.
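A minimal Python sketch of Kruskal's algorithm with a simple union-find structure; the edge-list format (weight, u, v) is an illustrative assumption:

def kruskal_mst(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):                           # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):          # consider edges cheapest first
        ru, rv = find(u), find(v)
        if ru != rv:                       # different trees: no cycle is formed
            parent[ru] = rv                # union the two trees
            mst.append((u, v, w))
            total += w
    return mst, total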
A* Search Algorithm:
 A* (pronounced "A-star") is a graph traversal and path
search algorithm, which is often used in many fields of
computer science due to its completeness, optimality,
and optimal efficiency.
 A* is an informed search algorithm, or a best-first
search, meaning that it is formulated in terms
of weighted graphs: starting from a specific
starting node of a graph, it aims to find a path to the
given goal node having the smallest cost (least distance
travelled, shortest time, etc.). It does this by
maintaining a tree of paths originating at the start node
and extending those paths one edge at a time until its
termination criterion is satisfied.
 At each iteration of its main loop, A* needs to
determine which of its paths to extend. It does so based
on the cost of the path and an estimate of the cost
required to extend the path all the way to the goal.
Specifically, A* selects the path that minimizes
f(n) = g(n) + h(n)
where n is the next node on the path, g(n) is the cost of
the path from the start node to n, and h(n) is
a heuristic function that estimates the cost of the
cheapest path from n to the goal.
Algorithm:
1. Enter the starting node in the OPEN list.
2. If the OPEN list is empty, return FAIL.
3. Select the node from the OPEN list which has the smallest value of (g + h);
   if node = Goal, return success.
4. Expand node ‘n’: generate all its successors and compute (g + h) for each successor node.
5. If a successor is already in OPEN/CLOSED, attach it to the back pointer (keeping the cheaper path).
6. Go to step 3.
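A minimal Python sketch of A* that minimizes f(n) = g(n) + h(n); the heuristic table and weighted-graph format follow the earlier sketches and are illustrative assumptions:

import heapq

def a_star(graph, start, goal, h):
    open_list = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)   # smallest f = g + h
        if node == goal:
            return path, g
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(open_list,
                               (new_g + h[neighbour], new_g,
                                neighbour, path + [neighbour]))
    return None, float('inf')   # goal not reachable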
Advantages:
 It is one of the best searching algorithms.
 It is optimal and complete (with an admissible heuristic).
 It can solve complex problems.
Disadvantages:
 It doesn’t always produce the shortest path; this depends on the quality of the heuristic.
 It has some complexity issues.
 It requires a lot of memory, since all generated nodes are kept.
How to make A* Admissible:
There are two kinds of heuristic function:
1. Admissible: the heuristic never overestimates the cost of reaching the goal:
   H(n) <= H*(n), where H*(n) is the true cost from n to the goal.
2. Non-Admissible: the heuristic may overestimate the cost of reaching the goal:
   H(n) > H*(n) for some n.
Local Search Algorithms:
 Local search algorithms operate using a single current node and generally move only to neighbours of that node.
 Local search methods keep only a small number of nodes in memory. They are suitable for problems where the solution is the goal state itself, not the path to it.
 In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.
 Hill climbing and simulated annealing are examples of local search algorithms.
Hill-Climbing Algorithm:
 The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it reaches a peak where no neighbour has a higher value.

(Figure: a one-dimensional state-space landscape in which elevation corresponds to the objective function.)
 Hill climbing is sometimes called greedy local search because it grabs a good neighbour state without thinking ahead about where to go next.
Limitations:
Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of the following regions:
Local Maxima –
A local maximum is a peak that is higher than each of its neighbouring states but lower than the global maximum.
Plateaus –
A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible.
Ridge –
A ridge is an area which is higher than the surrounding states, but which cannot be climbed in a single move.
As shown in the figure, ridges result in a sequence of local maxima that is very difficult for a greedy algorithm to navigate. A sketch of the basic algorithm follows below.
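A minimal Python sketch of steepest-ascent hill climbing over a generic objective; the neighbors and value callables are problem-specific placeholders, not part of the slides:

def hill_climb(start, neighbors, value):
    current = start
    while True:
        candidates = list(neighbors(current))
        if not candidates:
            return current                 # nowhere to move
        best = max(candidates, key=value)  # steepest ascent: best neighbour
        if value(best) <= value(current):
            return current                 # peak reached: no higher neighbour
        current = best                     # move uphill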
Variations of Hill Climbing –
 In steepest-ascent hill climbing, all successors are compared and the one closest to the solution is chosen. Steepest-ascent hill climbing is like best-first search in that it tries all possible extensions of the current path instead of only one.
 It tends to give a better (locally optimal) solution but is time-consuming.
 It is also known as gradient search.
Simulated Annealing:
 Annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state.
 The simulated annealing algorithm is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted; otherwise the algorithm accepts the move with some probability less than 1.
 It does not need to check all the neighbours at each step.
 Moves to worse states may be accepted (with a probability that decreases over time). A sketch follows below.
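A minimal Python sketch of simulated annealing; neighbor, value, the starting temperature, and the cooling rate are illustrative placeholders:

import math
import random

def simulated_annealing(start, neighbor, value,
                        t0=1.0, cooling=0.995, steps=10000):
    current, t = start, t0
    for _ in range(steps):
        nxt = neighbor(current)            # pick a random move
        delta = value(nxt) - value(current)
        # always accept improvements; accept worse moves with
        # probability exp(delta / t), which shrinks as t cools
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling
    return current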
Constraint Satisfaction Problem:
 In artificial intelligence and operations
research, constraint satisfaction is the process of
finding a solution to a set of constraints that impose
conditions that the variables must satisfy.
 Constraint propagation methods are also used in
conjunction with search to make a given problem
simpler to solve.
 Examples of problems that can be modeled as a
constraint satisfaction problem include:
 Map Colouring Problem
 Crosswords, Sudoku and many logic puzzles
 Constraint satisfaction depends on three components,
namely:
• X: It is a set of variables.
• D: It is a set of domains where the variables reside.
There is a specific domain for each variable.
• C: It is a set of constraints which are followed by the
set of variables.
CSP Problems:
Constraint satisfaction covers problems which contain some constraints that must hold while solving the problem. CSPs include the following problems:
• Graph Colouring: The problem where the constraint is that no adjacent vertices (regions) can have the same colour.
 Sudoku Playing: The puzzle where the constraint is that no number from 1-9 can be repeated in the same row, column, or 3×3 box.
• n-Queens problem: In the n-queens problem, the constraint is that no two queens may attack each other, i.e., no two queens share the same row, column, or diagonal.
• Crossword: In the crossword problem, the constraint is that the words must be formed correctly and be meaningful.
 Latin Square Problem: In this game, the task is to search for the pattern which occurs several times in the grid; the rows may be shuffled but will contain the same symbols.
Latin Square Problem:
A Latin square is a square array of objects (letters A, B, C, …) such that each object appears once and only once in each row and each column.
Example- Suppose we choose the following Latin square (shown in the figure).
 It is not a Latin square design/problem. Why?
Representation of LSD:

Drivers \ Cars   1   2   3   4
a                A   B   C   D
b                B   C   D   A
c                C   D   A   B
d                D   A   B   C
The 4 brands of petrol are indicated as A, B, C, D.
In an LSD (Latin Square Design), you have 3 factors:
Rows
Columns
Treatments (letters A, B, C, …)
• The row-column treatments are represented by cells in an n × n array.
• The treatments are assigned to row-column combinations using a Latin-square arrangement.
• The number of treatments = no. of rows = no. of columns = n.
Map-Coloring Problem:
Problem: We are given the task of coloring each region either red, green, or blue in such a way that no neighboring regions have the same color.
Solution:
 To formulate this as a CSP, we define the variables as
(WA, NT, Q, NSW, V, SA and T)
 The domain of each variable is the set {red, green,
blue}
 The constraints require neighboring regions to have
distinct colors; for example, the allowable
combinations for WA and NT are the pairs {(red,
green), (red, blue), (green, red), (green, blue), (blue,
red), (blue, green)}
 The constraint can also be represented more succinctly as the inequality WA != NT, provided the constraint satisfaction algorithm has some way to evaluate such expressions.
 There are many possible solutions. One possible
solution is shown below
{WA = red, NT = green, Q = red, NSW = green, V
= red, SA = Blue, T = red/green/blue}
Backtracking for the map-coloring problem:
Solution with Constraint Satisfaction Problem:
Backtracking Search:
 Backtracking search: a depth-first search that chooses a value for one variable at a time and backtracks when a variable has no legal values left to assign. The backtracking algorithm repeatedly chooses an unassigned variable and then tries all values in the domain of that variable in turn, trying to find a solution.
 Examples where backtracking can be used to solve puzzles or problems include: the eight queens puzzle, crosswords, verbal arithmetic, Sudoku, and Peg Solitaire.
 When do we use backtracking?
 How do we use backtracking? A sketch for the map-colouring CSP follows below.
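A minimal Python sketch of backtracking search applied to the Australia map-colouring CSP from the previous slides; the variable ordering and data layout are illustrative choices:

neighbours = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
colours = ['red', 'green', 'blue']

def backtrack(assignment):
    if len(assignment) == len(neighbours):
        return assignment                     # every variable assigned
    var = next(v for v in neighbours if v not in assignment)
    for colour in colours:
        # constraint check: no neighbour may already have this colour
        if all(assignment.get(n) != colour for n in neighbours[var]):
            assignment[var] = colour
            result = backtrack(assignment)
            if result:
                return result
            del assignment[var]               # dead end: undo and backtrack
    return None

print(backtrack({}))   # e.g. {'WA': 'red', 'NT': 'green', 'SA': 'blue', ...}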
Types of Constraints in Backtracking:
1. Implicit Constraint
2. Explicit Constraint
 In the backtracking technique, we backtrack to the last valid path as soon as we hit a dead end.
 Backtracking reduces the search space, since we no longer have to follow any path we know to be invalid.
 Backtracking works in a DFS manner with some bounding function. In this method the desired solution is expressed as an n-tuple (x1, x2, …, xn), where each xi is chosen from a finite set Si.
Constraint Propagation: Inference
 A method of inference that assigns values to the variables characterizing a problem in such a way that some conditions (called constraints) are satisfied.
 It is the process of using the constraints to reduce the number of legal values for a variable, which in turn can reduce the legal values for another variable, and so on.
Example-
Game Playing:
 General game playing (GGP) is the design of artificial
intelligence programs to be able to play more than one
game successfully. For instance, a chess-playing computer
program cannot play checkers. Examples include Watson,
a Jeopardy! -playing computer; and the RoboCup
tournament, where robots are trained to compete in soccer
and many more.
 Game Playing is a search problem defined by-
a) Initial State
b) Successor function
c) Goal test
d) Path cost/ Utility/Pay off function
 AI has continued to improve, with the aim that a player should be unable to tell the difference between a computer and a human player.
 A game must ‘feel’ natural:
a) It obeys the laws of the game
b) The character is aware of the environment
c) Path finding (A* algorithm)
d) Decision making
e) Planning
• Game AI is about the illusion of human behaviour:
i. Smart to a certain extent
ii. Non-repeating behaviour
iii. Emotional influences (irrationality, personality)
 Game AI needs various computer science
disciplines:
a) Knowledge based systems
b) Machine Learning
c) Multi-agent systems
d) Computer graphics & animation
e) Data Structures
Optimal Decisions in Game:
 Optimal Solution: In adversarial search, the optimal
solution is a contingent strategy, which specifies
MAX(the player on our side)’s move in the initial
state, then MAX’s move in the states resulting from
every possible response by MIN(the opponent), then
MAX’s moves in the states resulting from every
possible response by MIN to those moves, and so on.
 One move deep: If a particular game ends after one
move each by MAX and MIN, we say that this tree is
one move deep, consisting of two half-moves, each of
which is called a ply.
Min-Max Theorem / Algorithm:
• It is a specialized search algorithm that returns the optimal sequence of moves for a player in a zero-sum game.
• It is a recursive/backtracking algorithm used in decision making and game theory for two-player games.
• It uses recursion to search through the game tree.
• The algorithm computes the minimax decision for the current state.
• There are two players: MAX (selects the maximum value) and MIN (selects the minimum value).
• A depth-first search is used to traverse the complete game tree.
 Minimax value: The minimax value of a node is the
utility (for MAX) of being in the corresponding state,
assuming that both players play optimally from there
to the end of the game. The minimax value of a
terminal state is just its utility.
Given a game tree, the optimal strategy can be determined from the
minimax value of each node, i.e. MINIMAX(n).
MAX prefers to move to a state of maximum value, whereas MIN prefers
a state of minimum value.
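A minimal Python sketch of minimax over a game tree encoded as nested lists whose leaves are utility values; this toy representation is an illustrative assumption:

def minimax(node, is_max):
    if not isinstance(node, list):    # leaf: terminal utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# a one-move-deep tree: MAX chooses a branch, then MIN replies
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree, True))   # 3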
Example of Min-Max Algorithm:
Alpha-Beta Pruning:
 Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm commonly used for machine play of two-player games.
 Alpha-beta pruning is a modified version of the minimax algorithm: an optimization technique for it.
 It involves two threshold parameters, alpha and beta, for future expansion, hence the name alpha-beta pruning. It is also called the Alpha-Beta Algorithm.
 Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only the tree's leaves but entire sub-trees.
 The two parameters can be defined as:
• Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
• Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
The main condition required for alpha-beta pruning is:
α >= β
Key Points about Alpha-Beta Pruning:
 The Max player will only update the value of alpha.
 The Min player will only update the value of beta.
 While backtracking up the tree, the node values (not the alpha/beta values) are passed to the upper nodes.
 The alpha and beta values are passed only down to the child nodes.
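Before walking through the example, here is a minimal Python sketch of minimax with alpha-beta pruning, reusing the nested-list game-tree encoding of the earlier minimax sketch (an illustrative assumption):

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):      # leaf: terminal utility
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)   # only MAX updates alpha
            if alpha >= beta:
                break                   # prune the remaining children
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)         # only MIN updates beta
        if alpha >= beta:
            break                       # prune the remaining children
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))   # 3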
Working of Alpha-Beta Pruning:
Step 1: The Max player begins by moving from node A, where α = -∞ and β = +∞, and passes these values of alpha and beta to node B, where again α = -∞ and β = +∞; node B passes the same values on to its child D.
Step 2: At node D it is Max's turn, so the value of α is determined. α is compared with 2, then 3, and the value of α at node D becomes max(2, 3) = 3; the node value is also 3.
Step 3: The algorithm now returns to node B, where the value of β changes, as this is Min's turn. Now β = +∞ is compared with the values of the available successor nodes, i.e. min(+∞, 3) = 3, so at node B we now have α = -∞ and β = 3.
In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down.
Step 4: Max takes its turn at node E, changing the value of alpha. The current value of alpha is compared with 5, giving max(-∞, 5) = 5, so at node E, α = 5 and β = 3. Since α >= β, the right successor of E is pruned and the algorithm does not traverse it; the value at node E is 5.
Step 5: The method now goes backwards up the tree, from node B to node A. At node A the value of alpha is modified: the highest available value is 3, as max(-∞, 3) = 3, with β = +∞. These two values are now passed to A's right successor, node C.
At node C, α = 3 and β = +∞ are passed on to node F.
Step 6: At node F, the value of α is compared with the left child, which is 0: max(3, 0) = 3; and then with the right child, which is 1: max(3, 1) = 3. α remains unchanged, but the node value of F becomes 1.
Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; the value of beta is modified and compared with 1, giving min(+∞, 1) = 1. Now, at C, α = 3 and β = 1, and again the condition α >= β is met, so the algorithm prunes the next child of C, which is G, and does not compute the entire sub-tree under G.
Step 8: C now returns 1 to A, and max(3, 1) = 3 is the best result for A. The completed game tree, showing computed and pruned nodes, is shown in the figure. As a result, the best value for the maximizer in this example is 3.
Time Complexity:
• Worst ordering: In some instances, the alpha-beta pruning technique does not prune any of the tree's leaves and behaves exactly like the minimax algorithm. Because of the overhead of maintaining the alpha-beta parameters, it even takes more time in this scenario; this is known as worst ordering. It occurs when the optimal move lies on the right side of the tree. For such an ordering, the time complexity is O(b^m).
• Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a large part of the tree is pruned and the best moves lie on the left side of the tree. Because we use DFS, the search goes left-first and deep, and in the same amount of time it can search twice as deep as plain minimax. The complexity in ideal ordering is O(b^(m/2)).
Stochastic Games:
 A stochastic game was introduced by Lloyd Shapley in
the early 1950s. It is a dynamic game with
probabilistic transitions played by one or more players.
The game is played in a sequence of stages. At the
beginning of each stage, the game is in a certain state.
 Applications- Stochastic games have applications
in economics, evolutionary biology and computer
networks. They are generalizations of repeated
games which correspond to the special case where
there is only one state.
 Many games are unpredictable in nature, such as those involving a dice throw. These are called stochastic games. The outcome of the game depends on skill as well as luck.
 In stochastic games, the winner is decided not only by skill but also by luck.
 Examples:
 Gambling games
 Golf ball game
 Backgammon, etc.
Stochastic Search Algorithms:
 Stochastic search algorithms are designed for problems
with inherent random noise or deterministic problems
solved by injected randomness.
 Desired properties of search methods are
 High probability of finding near-optimal solutions
(effectiveness)
 Short processing time (Efficiency)
• They are usually conflicting; a compromise is offered
by stochastic techniques where certain steps are based
on random choice.
Why Stochastic Search?
 Stochastic search is the method of choice for solving many hard combinatorial problems.
 Its ability to solve hard combinatorial problems has increased significantly, e.g.:
 Solution of large propositional satisfiability problems.
 Solution of large travelling salesman problems.
 Good results in new application areas.
Thank you

More Related Content

Similar to CSA 2001 (Module-2).pptx

Lecture 07 search techniques
Lecture 07 search techniquesLecture 07 search techniques
Lecture 07 search techniquesHema Kashyap
 
Algorithms Design Patterns
Algorithms Design PatternsAlgorithms Design Patterns
Algorithms Design PatternsAshwin Shiv
 
AI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdf
AI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdfAI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdf
AI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdfAsst.prof M.Gokilavani
 
Best First Search.pptx
Best First Search.pptxBest First Search.pptx
Best First Search.pptxMuktarulHoque1
 
ADSA orientation.pptx
ADSA orientation.pptxADSA orientation.pptx
ADSA orientation.pptxKiran Babar
 
Unit-III-AI Search Techniques and solution's
Unit-III-AI Search Techniques and solution'sUnit-III-AI Search Techniques and solution's
Unit-III-AI Search Techniques and solution'sHarsha Patel
 
Epsrcws08 campbell kbm_01
Epsrcws08 campbell kbm_01Epsrcws08 campbell kbm_01
Epsrcws08 campbell kbm_01Cheng Feng
 
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdf
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdfADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdf
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdfRGPV De Bunkers
 
Heuristic Searching Algorithms Artificial Intelligence.pptx
Heuristic Searching Algorithms Artificial Intelligence.pptxHeuristic Searching Algorithms Artificial Intelligence.pptx
Heuristic Searching Algorithms Artificial Intelligence.pptxSwagat Praharaj
 
2-Algorithms and Complexit data structurey.pdf
2-Algorithms and Complexit data structurey.pdf2-Algorithms and Complexit data structurey.pdf
2-Algorithms and Complexit data structurey.pdfishan743441
 
Types of Algorithms.ppt
Types of Algorithms.pptTypes of Algorithms.ppt
Types of Algorithms.pptALIZAIB KHAN
 
problem solve and resolving in ai domain , probloms
problem solve and resolving in ai domain , problomsproblem solve and resolving in ai domain , probloms
problem solve and resolving in ai domain , problomsSlimAmiri
 
UNIT-1-PPTS-DAA.ppt
UNIT-1-PPTS-DAA.pptUNIT-1-PPTS-DAA.ppt
UNIT-1-PPTS-DAA.pptracha49
 

Similar to CSA 2001 (Module-2).pptx (20)

Lecture 07 search techniques
Lecture 07 search techniquesLecture 07 search techniques
Lecture 07 search techniques
 
search strategies in artificial intelligence
search strategies in artificial intelligencesearch strategies in artificial intelligence
search strategies in artificial intelligence
 
Algorithms Design Patterns
Algorithms Design PatternsAlgorithms Design Patterns
Algorithms Design Patterns
 
AI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdf
AI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdfAI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdf
AI3391 ARTIFICIAL INTELLIGENCE UNIT II notes.pdf
 
Best First Search.pptx
Best First Search.pptxBest First Search.pptx
Best First Search.pptx
 
ADSA orientation.pptx
ADSA orientation.pptxADSA orientation.pptx
ADSA orientation.pptx
 
Unit-III-AI Search Techniques and solution's
Unit-III-AI Search Techniques and solution'sUnit-III-AI Search Techniques and solution's
Unit-III-AI Search Techniques and solution's
 
Epsrcws08 campbell kbm_01
Epsrcws08 campbell kbm_01Epsrcws08 campbell kbm_01
Epsrcws08 campbell kbm_01
 
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdf
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdfADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdf
ADA Unit-1 Algorithmic Foundations Analysis, Design, and Efficiency.pdf
 
Sudoku solver
Sudoku solverSudoku solver
Sudoku solver
 
Heuristic Searching Algorithms Artificial Intelligence.pptx
Heuristic Searching Algorithms Artificial Intelligence.pptxHeuristic Searching Algorithms Artificial Intelligence.pptx
Heuristic Searching Algorithms Artificial Intelligence.pptx
 
Binary search2
Binary search2Binary search2
Binary search2
 
COMPUTER LABORATORY-4 LAB MANUAL BE COMPUTER ENGINEERING
COMPUTER LABORATORY-4 LAB MANUAL BE COMPUTER ENGINEERINGCOMPUTER LABORATORY-4 LAB MANUAL BE COMPUTER ENGINEERING
COMPUTER LABORATORY-4 LAB MANUAL BE COMPUTER ENGINEERING
 
AI Lesson 04
AI Lesson 04AI Lesson 04
AI Lesson 04
 
DAA UNIT 3
DAA UNIT 3DAA UNIT 3
DAA UNIT 3
 
2-Algorithms and Complexit data structurey.pdf
2-Algorithms and Complexit data structurey.pdf2-Algorithms and Complexit data structurey.pdf
2-Algorithms and Complexit data structurey.pdf
 
Types of Algorithms.ppt
Types of Algorithms.pptTypes of Algorithms.ppt
Types of Algorithms.ppt
 
problem solve and resolving in ai domain , probloms
problem solve and resolving in ai domain , problomsproblem solve and resolving in ai domain , probloms
problem solve and resolving in ai domain , probloms
 
Clustering.pptx
Clustering.pptxClustering.pptx
Clustering.pptx
 
UNIT-1-PPTS-DAA.ppt
UNIT-1-PPTS-DAA.pptUNIT-1-PPTS-DAA.ppt
UNIT-1-PPTS-DAA.ppt
 

Recently uploaded

Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxRoyAbrique
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersChitralekhaTherkar
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfUmakantAnnand
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 

Recently uploaded (20)

Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of Powders
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 

CSA 2001 (Module-2).pptx

  • 1. Vellore Institute OF Technology, BHOPAL Fundamentals In AI & ML (Department of Computer Science & Engineering) Ankit Shrivastava Vellore Institute Of Technology (VIT), Bhopal, India
  • 3. Search Strategies:  A search strategy is an organised structure of key terms used to search a database. The search strategy combines the key concepts of your search question in order to retrieve accurate results.  Search techniques are universal problem-solving methods.  A search Strategy is also called “Search Algorithm” which solves a search problem. Search algorithms work to retrieve information stored within some data structure, or calculated in the search space of a problem domain, with either discrete or continuous values. 3
  • 4.  Rational agents or Problem-solving agents in AI mostly used these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are the goal-based agents and use atomic representation.  The appropriate search algorithm often depends on the data structure being searched, and may also include prior knowledge about the data. Search algorithms can be made faster or more efficient by specially constructed database structures, such as search trees, hash maps, and database indexes. 4
  • 5. Types of Search Algorithms: 5
  • 6. Uninformed Search/ Blind Search:  Uninformed search is a class of general-purpose search algorithms which operates in brute force-way. Uninformed search algorithms do not have additional information about state or search space other than how to traverse the tree, so it is also called blind search.  The uninformed search does not contain any domain knowledge such as closeness, the location of the goal.  It operates in a brute force way, as it only includes information about how to traverse the tree & how to identify the leaf as well as goal nodes.  Uninformed search applies a way in which search tree is searched without any information about the search space like initial state operates & test for the goal, so it is called Blind search. Example- It examines each node until it achieves the goal. 6
  • 8. Depth First Search (DFS):  DFS is called Uninformed Search Technique.  It works on Present knowledge.  Depth First Search or DFS starts with the initial node of the graph, then goes deeper until it finds the goal nodes or nodes having no children.  In DFS, then backtracks from the dead end towards the most recent node that is yet to be completely unexplored.  Stack data structure is used in DFS.  DFS works in LIFO (Last in First Out) manner.  It works on Brute Force way or Blind Search.  It is Non-optimal solution.  It goes on Deepest node. 8
  • 9. Algorithm: 1. Enter root node on stack 2. Do until stack is not empty (a.) Remove node i. if node= Goal node then stop ii. Push all children of node in stack. 9
  • 10. Time Complexity:  Time complexity in Data Structure = O(V+E) where, V = no. of vertices E = no. of edges  Time complexity in Artificial Intelligence = O(bd) where, b = branching factor d = depth 10
  • 11. Advantages: 1. It requires less memory. 2. It requires less time to reach goal node if traversal in right path. ex.- If we have to reach goal node (G) from starting node (A) then it takes less time but if we have to reach goal node (H) from starting node (A) then it takes more time. Disadvantages: 1. No guarantee of finding a solution. 2. It can go in infinite loop. 11
  • 12. 12
  • 13. 13
  • 15. Breadth First Search (BFS):  Breadth First Search (BFS) algorithm traverses a graph in a breadth ward motion and uses a queue to remember to get the next vertex to start a search, when a dead end occurs in any iteration.  It explores all the nodes at given depth before proceeding to the next level.  It uses Queue data structure to implement (FIFO) manner.  It gives optimal solution.  BFS comes under Uninformed Search technique or we can say Blind Search.  Uninformed means no domains have specific knowledge. 15
  • 16. Algorithm: 1. Enter starting nodes on Queue. 2. If Queue is empty then return fail and stop. 3. If first element on queue is goal node, then return success and stop. 4. ELSE 5. 4. Remove and expand first element from queue and place children at the end of queue. 6. 5. Go to step 2. 16
  • 17. Time Complexity:  Time complexity in Data Structure = O(V+E) where, V = no. of vertices E = no. of edges  Time complexity in Artificial Intelligence = O(bd) where, b = branching factor d = depth 17
  • 18. Advantages: 1. Find a solution if it exists. 2. It will try to find the minimal solution in least no. of steps. Disadvantages: 1. It requires less memory. 2. It needs lots of time if solution is far from root node. 18
  • 19. 19
  • 21. 21
  • 22. 22
  • 23. Uniform Cost Search Algorithm :  It is used for weighted Tree/ Graph Traversal.  Goal is to path finding to goal-node with lowest cost.  Node Expansion is based on path costs.  It uses Backtracking also.  Priority Queue is used for Implementation. 23
  • 24. Advantage: 1. It gives optimal solution because at every state/ path with the least cost is chosen. Disadvantage: 1. It does not care about the no. of steps involve in searching and only concerned about the cost path. Due to which this algo. may be stuck in an infinite loop. 24
  • 25. 25
  • 26. 26
  • 27. 27
  • 29. Greedy Search:  A greedy algorithm is any algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage.  A greedy algorithm is an approach for solving a problem by selecting the best option available at the moment. It doesn't worry whether the current best result will bring the overall optimal result.  In many problems, a greedy strategy does not produce an optimal solution, but a greedy heuristic can yield locally optimal solutions that approximate a globally optimal solution in a reasonable amount of time.  It gives feasible solution. 29
  • 30.  The problem that requires either minimum or maximum result then that problem is known as an optimization problem. Greedy method is one of the strategies used for solving the optimization problems.  It follows the local optimum choice at each stage with a intend of finding the global optimum. Let's understand through an example. 30
  • 31. Characteristics of Greedy Method: • To construct the solution in an optimal way, this algorithm creates two sets where one set contains all the chosen items, and another set contains the rejected items. • A Greedy algorithm makes good local choices in the hope that the solution should be either feasible or optimal. 31
  • 32. Applications of Greedy Algorithm:  It is used in finding the shortest path.  It is used to find the minimum spanning tree using the prim's algorithm or the Kruskal's algorithm.  It is used in a job sequencing with a deadline.  This algorithm is also used to solve the fractional knapsack problem. 32
  • 33. Pseudocode of Greedy Algorithm: 1.Algorithm Greedy (a, n) 2.{ 3. Solution : = 0; 4. for i = 0 to n do 5. { 6. x: = select(a); 7. if feasible(solution, x) 8. { 9. Solution: = union(solution , x) 10. } 11. return solution; 12. } } 33
  • 34. Best First Search:  The best first search uses the concept of a priority queue and heuristic search. It is a search algorithm that works on a specific rule. The aim is to reach the goal from the initial state via the shortest path.  The best First Search algorithm in artificial intelligence is used for finding the shortest path from a given starting node to a goal node in a graph. The algorithm works by expanding the nodes of the graph in order of increasing the distance from the starting node until the goal node is reached. 34
  • 35. Algorithm: Let ‘OPEN’ be a priority queue containing initial state. Loop If OPEN is empty return failure Node <- Remove - First (OPEN) If Node is a goal then return the path from initial to Node else generate all successors of node and put the newly generated Node into OPEN according to their F values END LOOP 35
  • 36. 36
  • 37. 37
  • 38. Knapsack Problem:  Fractional Knapsack problem is defined as, “Given a set of items having some weight and value/profit associated with it. The knapsack problem is to find the set of items such that the total weight is less than or equal to a given limit (size of knapsack) and the total value/profit earned is as large as possible.”  This problem can be solved with the help of using two techniques: • Brute-force approach: The brute-force approach tries all the possible solutions with all the different fractions but it is a time-consuming approach. • Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and accordingly, we will select the item. The item with the highest ratio would be selected first. 38
  • 39. { For I =1 to n; compute pi/wi; Sort objects in non increasing order of P/W for i= 1 to n from sorted list if (m>0 && wi<=m) m= m-wi; p= p+pi; else break; if (m>0) p = p+pi (m/wi); } 39 Knapsack Algorithm:
  • 40. Job Scheduling Problem:  Job scheduling is the problem of scheduling jobs out of a set of N jobs on a single processor which maximizes profit as much as possible. Consider N jobs, each taking unit time for execution. Each job is having some profit and deadline associated with it.  The sequencing of jobs on a single processor with deadline constraints is called as Job Sequencing with Deadlines. 40
  • 41. The greedy algorithm described below always gives an optimal solution to the job sequencing problem- Step-01: • Sort all the given jobs in decreasing order of their profit. Step-02: • Check the value of maximum deadline. • Draw a Gantt chart where maximum time on Gantt chart is the value of maximum deadline. Step-03: • Pick up the jobs one by one. • Put the job on Gantt chart as far as possible from 0 ensuring that the job gets completed before its deadline. 41
  • 42. Q. Given the jobs, this deadlines and associated profits as shown- Answer the following questions- 1. Write the optimal schedule that gives max. profit 2. Are the jobs completed in optimal schedule? 3. What is the max. earned profit? 42 Jobs J1 J2 J3 J4 J5 J6 Deadline s 5 3 3 2 4 2 Profits 200 180 190 300 120 100
  • 43. Soln.- Step-1: Sort all the given jobs in decreasing order of their profit- Step-2: Gantt Chart 1 2 3 4 5 43 Jobs J4 J1 J3 J2 J5 J6 Deadlines 2 5 3 3 4 2 Profits 300 200 190 180 120 100 J2 J4 J3 J5 J1
  • 44. 1. The optimal schedule is- J2, J4, J3, J5, J1 This is required order in which the jobs must be completed in order to obtain the maximum profit. 2. All the jobs are not completed in optimal schedule. This is because job J6 could not be completed with its deadline. 3. Maximum earned profit = 180 + 300 + 190 + 120 + 200 = 990 units. 44
  • 45. Prim’s Algorithm:  Prim's Algorithm is a greedy algorithm that is used to find the minimum spanning tree from a graph. Prim's algorithm finds the subset of edges that includes every vertex of the graph such that the sum of the weights of the edges can be minimized.  Prim's algorithm starts with the single node and explores all the adjacent nodes with all the connecting edges at every step. 45
  • 46. Working of Prim’s Algorithm: Prim's algorithm is a greedy algorithm that starts from one vertex and continue to add the edges with the smallest weight until the goal is reached. The steps to implement the prim's algorithm are given as follows - • First, we have to initialize an MST with the randomly chosen vertex. • Now, we have to find all the edges that connect the tree in the above step with the new vertices. From the edges found, select the minimum edge and add it to the tree. • Repeat step 2 until the minimum spanning tree is formed. 46
  • 47. Spanning Tree: A spanning tree is the subgraph of an undirected connected graph. Minimum Spanning Tree: Minimum spanning tree can be defined as the spanning tree in which the sum of the weights of the edge is minimum. The weight of the spanning tree is the sum of the weights given to the edges of the spanning tree. 47
  • 48. Applications of Prim’s Algorithm: • Prim's algorithm can be used in network designing. • It can be used to make network cycles. • It can also be used to lay down electrical wiring cables. 48
  • 49. Difference between Prim’s algorithm and Kruskal’s algorithm: 49
Prim’s Algorithm | Kruskal’s Algorithm
The tree that we are growing always remains connected. | The forest that we are growing usually remains disconnected until the end.
Grows a solution from a random vertex by adding the next cheapest vertex to the existing tree. | Grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.
Faster for dense graphs. | Faster for sparse graphs.
  • 50. 50
  • 51. 51
  • 52. 52
  • 53. Kruskal’s Algorithm:  Kruskal's algorithm finds the minimum cost spanning tree using the greedy approach. This algorithm treats the graph as a forest and every node in it as an individual tree. A tree connects to another if and only if it has the least cost among all the available options and does not violate the MST properties.  Kruskal's Algorithm is used to find the minimum spanning tree for a connected, undirected, weighted graph. The main target of the algorithm is to find the subset of edges with which we can reach every vertex of the graph. 53
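For comparison with the Prim's sketch above, here is a hedged Python sketch of Kruskal's algorithm with a simple union-find; the edge-list format and sample data are illustrative assumptions.

def kruskal_mst(vertices, edges):
    parent = {v: v for v in vertices}       # each node starts as its own tree

    def find(v):                            # root of v's tree in the forest
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):           # edges in non-decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # accepting creates no cycle
            parent[ru] = rv                 # union the two trees
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, "a", "b"), (3, "a", "c"), (1, "b", "c"),
         (4, "b", "d"), (2, "c", "d")]
print(kruskal_mst("abcd", edges))   # same MST weight (4) as Prim's above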
  • 54. A* Search Algorithm:  A* (pronounced "A-star") is a graph traversal and path search algorithm, which is often used in many fields of computer science due to its completeness, optimality, and optimal efficiency.  A* is an informed search algorithm, or a best-first search, meaning that it is formulated in terms of weighted graphs: starting from a specific starting node of a graph, it aims to find a path to the given goal node having the smallest cost (least distance travelled, shortest time, etc.). It does this by maintaining a tree of paths originating at the start node and extending those paths one edge at a time until its termination criterion is satisfied. 54
  • 55.  At each iteration of its main loop, A* needs to determine which of its paths to extend. It does so based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal. Specifically, A* selects the path that minimizes f(n) = g(n) + h(n) where n is the next node on the path, g(n) is the cost of the path from the start node to n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the goal. 55
  • 56. Algorithm: 1. Put the starting node on the OPEN list. 2. If the OPEN list is empty, return FAIL. 3. Select the node n from the OPEN list with the smallest value of f(n) = g(n) + h(n); if n = Goal, return success. 4. Expand node n: generate all its successors and compute f for each successor. 5. If a successor is already on the OPEN/CLOSED list, redirect its back pointer along the path that gives the lower g value. 6. Go to step 3. 56
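A hedged Python sketch of the loop above, with the OPEN list kept as a priority queue ordered by f(n) = g(n) + h(n); the sample graph, heuristic table, and node names are illustrative assumptions, not from the slides.

import heapq

def a_star(graph, h, start, goal):
    # OPEN list as a heap of (f, g, node, path), with f = g + h(node).
    open_list = [(h[start], 0, start, [start])]
    closed = {}                       # best g found so far per expanded node
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g            # success: cheapest path to the goal
        if node in closed and closed[node] <= g:
            continue                  # already expanded via a cheaper path
        closed[node] = g
        for nxt, cost in graph[node]:
            heapq.heappush(open_list,
                           (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None                       # OPEN list empty: FAIL

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 12)],
         "B": [("G", 5)], "G": []}
h = {"S": 7, "A": 6, "B": 2, "G": 0}  # admissible: never overestimates
print(a_star(graph, h, "S", "G"))     # (['S', 'A', 'B', 'G'], 8)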
  • 57. Advantages:  Best searching algorithm of its kind  Optimal and complete (with an admissible heuristic)  Can solve complex problems Disadvantages:  Does not always produce the shortest path if the heuristic is not admissible  Some complexity issues  It requires a lot of memory, since all generated nodes are kept 57
  • 58. How to make A* admissible: There are two cases for the heuristic function: 1. Admissible: the heuristic never overestimates the cost of reaching the goal, i.e. h(n) <= h*(n), where h*(n) is the true cost from n to the goal. A* is optimal when its heuristic is admissible. 2. Non-admissible: the heuristic may overestimate the cost of reaching the goal, i.e. h(n) > h*(n) for some n. 58
  • 59. 59
  • 60. 60
  • 61. 61
  • 62. 62
  • 63. 63
  • 64. Local Search Algorithms:  Local search algorithms operate using a single current node and generally move only to neighbours of that node.  Local search methods keep only a small number of nodes in memory. They are suitable for problems where the solution is the goal state itself and not the path.  In addition to finding goals, local search algorithms are useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.  Hill Climbing and Simulated Annealing are examples of local search algorithms. 64
  • 65. Hill-Climbing Algorithm:  The hill climbing algorithm is a local search algorithm which continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e. the best solution to the problem. It terminates when it reaches a peak, where no neighbour has a higher value. 65 A one-dimensional state-space landscape in which elevation corresponds to the objective function.
  • 66.  Hill climbing is sometimes called greedy local search because it grabs a good neighbour state without thinking ahead about where to go next. Limitations: Hill climbing cannot reach the optimal/best state (global maximum) if it enters any of the following regions: Local Maxima – A local maximum is a peak that is higher than each of its neighbouring states but lower than the global maximum. 66
  • 67. Plateaus – A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible. Ridge – A ridge is an area which is higher than the surrounding states, but which cannot be reached in a single move. 67
  • 68. A ridge, as shown in the figure, results in a sequence of local maxima that is very difficult for a greedy algorithm to navigate. 68
  • 69. Variations of Hill Climbing-  In Steepest Ascent hill climbing, all successors are compared and the one closest to the solution is chosen. Steepest Ascent hill climbing is like best-first search in that it tries all possible extensions of the current path instead of only one.  It tends to give better solutions than simple hill climbing but is more time-consuming.  It is also known as Gradient search. 69
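A minimal steepest-ascent hill-climbing sketch in Python; the toy one-dimensional objective and neighbour generator are illustrative assumptions.

def hill_climb(state, objective, neighbours):
    while True:
        # Steepest ascent: compare all successors, keep the best one.
        best = max(neighbours(state), key=objective, default=state)
        if objective(best) <= objective(state):
            return state              # peak reached: no neighbour is higher
        state = best                  # move uphill to the best neighbour

# Toy 1-D landscape: f(x) = -(x - 3)^2, global maximum at x = 3.
objective = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, objective, neighbours))   # 3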
  • 70. Simulated Annealing:  Annealing is the process used to temper or harden metals and glass by heating them to a high temperature and then gradually cooling them, thus allowing the material to reach a low-energy crystalline state.  The simulated annealing algorithm is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move. If the move improves the situation, it is always accepted; otherwise, the algorithm accepts the move with some probability less than 1.  It examines a random neighbour rather than all the neighbours.  Moves to worse states may be accepted. 70
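A minimal simulated-annealing sketch in Python with a geometric cooling schedule; the schedule parameters and the reuse of the toy landscape above are illustrative assumptions.

import math
import random

def simulated_annealing(state, objective, neighbour,
                        t=100.0, cooling=0.95, t_min=1e-3):
    while t > t_min:
        nxt = neighbour(state)                # pick a random move
        delta = objective(nxt) - objective(state)
        # Always accept improvements; accept worse moves with
        # probability e^(delta / t), which shrinks as t cools.
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t *= cooling                          # gradually cool
    return state

objective = lambda x: -(x - 3) ** 2           # same toy landscape as above
neighbour = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, objective, neighbour))   # usually close to 3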
  • 71. Constraint Satisfaction Problem:  In artificial intelligence and operations research, constraint satisfaction is the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy.  Constraint propagation methods are also used in conjunction with search to make a given problem simpler to solve.  Examples of problems that can be modeled as a constraint satisfaction problem include:  Map Colouring Problem  Crosswords, Sudoku and many logic puzzles 71
  • 72.  Constraint satisfaction depends on three components, namely: • X: It is a set of variables. • D: It is a set of domains where the variables reside. There is a specific domain for each variable. • C: It is a set of constraints which are followed by the set of variables. 72
  • 73. 73
  • 74. CSP Problems: Constraint satisfaction includes those problems which contain some constraints that must hold while solving the problem. CSP includes the following problems: • Graph Colouring: the problem where the constraint is that no two adjacent vertices (regions) can have the same colour. 74
  • 75.  Sudoku Playing: the puzzle where the constraint is that no number from 1-9 can be repeated in the same row, column, or 3×3 sub-grid. 75
  • 76. • n-queen problem: in the n-queen problem, the constraint is that no two queens may share the same row, column, or diagonal. • Crossword: in the crossword problem, the constraint is that the words must be formed correctly and must be meaningful. 76
  • 77.  Latin Square Problem: in this puzzle, the task is to fill a square grid with symbols so that each symbol occurs exactly once in every row and every column; the rows may look shuffled, but each contains the same set of digits. 77
  • 78. Latin Square Problem: A Latin square is a square array of objects (letters A,B,C,…) such that each object appears once and only once in each row and each column. Example- Suppose we choose the following Latin Square: 78
  • 79.  It is not the Latin square design/ problem. Why? 79
  • 80. Representation of LSD: 80
(4 brands of petrol are indicated as A, B, C, D)
Drivers/Cars   1   2   3   4
     a         A   B   C   D
     b         B   C   D   A
     c         C   D   A   B
     d         D   A   B   C
  • 81. 81
  • 82. In LSD, you have 3 factors: Rows, Columns, and Treatments (letters A, B, C, …). • The row-column treatments are represented by cells in an n x n array. • The treatments are assigned to row-column combinations using a Latin-square arrangement. 82 The number of treatments = number of rows = number of columns = n
  • 83. Map- Coloring Problem: Problem: We are given the task of coloring each region either red, green or blue in such a way that no neighboring regions have the same color. 83
  • 84. Solution:  To formulate this as a CSP, we define the variables as (WA, NT, Q, NSW, V, SA and T)  The domain of each variable is the set {red, green, blue}  The constraints require neighbouring regions to have distinct colours; for example, the allowable combinations for WA and NT are the pairs {(red, green), (red, blue), (green, red), (green, blue), (blue, red), (blue, green)}  The constraint can also be represented more succinctly as the inequality WA != NT, provided the constraint satisfaction algorithm has some way to evaluate such expressions. 84
  • 85.  There are many possible solutions. One possible solution is shown below: {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red/green/blue} 85
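A hedged backtracking sketch in Python for this map-colouring CSP; the function names are illustrative, but the variables X, domains D, and constraints C follow the formulation above. It reproduces the solution shown on the previous slide.

neighbours = {"WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
              "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
              "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": []}
colours = ["red", "green", "blue"]            # domain D of every variable

def consistent(region, colour, assignment):
    # Constraint C: no neighbouring regions share a colour.
    return all(assignment.get(n) != colour for n in neighbours[region])

def backtrack(assignment, regions):
    if not regions:                           # every variable assigned
        return assignment
    region = regions[0]                       # pick an unassigned variable
    for colour in colours:                    # try each value in its domain
        if consistent(region, colour, assignment):
            assignment[region] = colour
            result = backtrack(assignment, regions[1:])
            if result:
                return result
            del assignment[region]            # dead end: backtrack
    return None

print(backtrack({}, list(neighbours)))
# {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red',
#  'NSW': 'green', 'V': 'red', 'T': 'red'}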
  • 86. Backtracking for the map-coloring problem: 86
  • 87. Solution with Constraint Satisfaction Problem: 87
  • 88. Backtracking Search:  Backtracking search: a depth-first search that chooses values for one variable at a time and backtracks when a variable has no legal values left to assign. The backtracking algorithm repeatedly chooses an unassigned variable, and then tries all values in the domain of that variable in turn, trying to find a solution.  Examples where backtracking can be used to solve puzzles or problems include: the eight queens puzzle, crosswords, verbal arithmetic, Sudoku, and Peg Solitaire.  When do we use backtracking?  How do we use backtracking? 88
  • 89. 89
  • 90. Types of Constraints in Backtracking: 1. Implicit Constraint 2. Explicit Constraint 90
  • 91. 91
  • 92.  In the backtracking technique, we backtrack to the last valid path as soon as we hit a dead end.  Backtracking reduces the search space, since we no longer have to follow down any path we know is invalid.  Backtracking works in a DFS manner with some bounding function. In this method the desired solution is expressed as an n-tuple (x1, x2, …, xn), where each xi is chosen from a finite set Si. 92
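A minimal backtracking sketch in Python for the n-queens problem mentioned earlier; it expresses the solution as the tuple (x1, …, xn) described above, where xi is the column of the queen in row i, with the no-attack check acting as the bounding function.

def solve_n_queens(n, placed=()):
    row = len(placed)
    if row == n:
        return placed                       # all queens placed: a solution
    for col in range(n):
        # Bounding function: prune any column that shares a column
        # or diagonal with an already-placed queen.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = solve_n_queens(n, placed + (col,))
            if result:
                return result
    return None                             # dead end: backtrack

print(solve_n_queens(4))   # (1, 3, 0, 2) — one valid placement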
  • 93. Constraint Propagation: Inference  A method of inference that assigns values to variables characterizing a problem in such a way that some conditions (called constraints) are satisfied.  It is the process of using the constraints to reduce the number of legal values for a variable, which in turn can reduce the legal values for another variable, and so on. 93
  • 95. 95
  • 96. Game Playing:  General game playing (GGP) is the design of artificial intelligence programs that are able to play more than one game successfully. For instance, a chess-playing computer program cannot play checkers. Examples include Watson, a Jeopardy!-playing computer, and the RoboCup tournament, where robots are trained to compete in soccer, and many more.  Game Playing is a search problem defined by- a) Initial State b) Successor function c) Goal test d) Path cost/ Utility/ Pay-off function 96
  • 97.  AI has continued to improve, with the aim that a player should be unable to tell the difference between a computer and a human player.  A game must ‘feel’ natural: a) Obeys the laws of the game b) Characters are aware of the environment c) Path finding (A* algorithm) d) Decision making e) Planning • Game AI is about the illusion of human behaviour: i. Smart to a certain extent ii. Non-repeating behaviour iii. Emotional influences (irrationality, personality) 97
  • 98.  Game AI needs various computer science disciplines: a) Knowledge based systems b) Machine Learning c) Multi-agent systems d) Computer graphics & animation e) Data Structures 98
  • 99. Optimal Decisions in Game:  Optimal Solution: In adversarial search, the optimal solution is a contingent strategy, which specifies MAX(the player on our side)’s move in the initial state, then MAX’s move in the states resulting from every possible response by MIN(the opponent), then MAX’s moves in the states resulting from every possible response by MIN to those moves, and so on.  One move deep: If a particular game ends after one move each by MAX and MIN, we say that this tree is one move deep, consisting of two half-moves, each of which is called a ply. 99
  • 100. Explain the Min-Max Theorem / Algorithm: • It is a specialized search algorithm that returns the optimal sequence of moves for a player in a zero-sum game. • It is a recursive/backtracking algorithm used in decision making and game theory for two-player games. • It uses recursion to search through the game tree. • The algorithm computes the minimax decision for the current state. • There are two players: MAX (selects the maximum value) and MIN (selects the minimum value). • Depth-first search is used to traverse the complete game tree. 100
  • 101.  Minimax value: The minimax value of a node is the utility (for MAX) of being in the corresponding state, assuming that both players play optimally from there to the end of the game. The minimax value of a terminal state is just its utility. 101
  • 102. Given a game tree, the optimal strategy can be determined from the minimax value of each node, i.e. MINIMAX(n). MAX prefers to move to a state of maximum value, whereas MIN prefers a state of minimum value. 102
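A minimal recursive minimax sketch in Python over an explicit game tree; the nested-list tree format and the sample utilities are illustrative assumptions.

def minimax(node, is_max):
    if isinstance(node, int):          # terminal state: return its utility
        return node
    # Recurse on every child, alternating between MAX and MIN levels.
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Depth-2 tree: MAX moves first, MIN replies; leaves are utilities for MAX.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))   # MIN yields 3, 2, 0 per branch; MAX picks 3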
  • 103. Example of Min-Max Algorithm: 103
  • 104. Alpha-Beta Pruning:  Alpha-beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player games.  Alpha-beta pruning is a modified version of the minimax algorithm: it is an optimization technique for the minimax algorithm.  It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also called the Alpha-Beta Algorithm. 104
  • 105.  Alpha-beta pruning can be applied at any depth of a tree, and sometimes it prunes not only leaves but entire sub-trees.  The two parameters can be defined as: • Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximiser. The initial value of alpha is -∞. • Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞. The main condition required for alpha-beta pruning is: α >= β. 105
  • 106. Key Points about Alpha- Beta Pruning:  The Max player will only update the value of alpha.  The Min player will only update the value of beta.  While backtracking the tree, the node values will be passed to upper nodes instead of values of alpha and beta.  We will only pass the alpha, beta values to the child nodes. 106
  • 107. Working of Alpha-Beta Pruning: Step 1: The Max player begins at node A, where α = -∞ and β = +∞, and passes these values of alpha and beta down to node B, where again α = -∞ and β = +∞; node B passes the same values on to its child D. 107
  • 108. Step 2: At node D it is Max's turn, so the value of α is computed: α is compared with 2, then with 3, so α at node D becomes max(2, 3) = 3, and the node value is also 3. Step 3: The algorithm now returns to node B, where the value of β will change, as this is Min's turn: β = +∞ is compared with the value of the available successor node, i.e. β = min(+∞, 3) = 3, so at node B, α = -∞ and β = 3. 108
  • 109. In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = -∞ and β = 3 are passed down to it. Step 4: Max takes its turn at node E, changing the value of alpha: the current alpha is compared with 5, giving α = max(-∞, 5) = 5. At node E, α = 5 and β = 3; since α >= β, the right successor of E is pruned, the algorithm does not traverse it, and the value at node E is 5. 109
  • 110. 110
  • 111. Step 5: The method now backs up the tree, from node B to node A. At node A the value of alpha is modified: the highest available value is 3, since α = max(-∞, 3) = 3, and β = +∞. These two values are now passed to A's right successor, node C. At node C, α = 3 and β = +∞, and the same values are passed on to node F. Step 6: At node F, the value of α is compared with the left child, which is 0: max(3, 0) = 3; and then with the right child, which is 1: max(3, 1) = 3, so α stays the same, but the node value of F becomes 1. 111
  • 112. 112
  • 113. Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; the value of beta is modified by comparison with 1, giving β = min(+∞, 1) = 1. Now, at C, α = 3 and β = 1, which again meets the condition α >= β, so the algorithm prunes the next child of C, which is G, and does not compute the sub-tree under G at all. 113
  • 114. Step 8: C now returns 1 to A, and max(3, 1) = 3 remains the best result for A. The completed game tree, showing the computed and the pruned (uncomputed) nodes, is shown below. As a result, the optimal value for the maximiser in this example is 3. 114
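A hedged Python sketch of alpha-beta pruning over the same nested-list tree format as the minimax sketch above; note the cut-off whenever α >= β, and that only MAX updates alpha and only MIN updates beta, as described earlier. The sample tree is an illustrative assumption.

def alphabeta(node, is_max, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, int):             # terminal state: return utility
        return node
    if is_max:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)     # only MAX updates alpha
            if alpha >= beta:
                break                     # prune the remaining children
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)           # only MIN updates beta
        if alpha >= beta:
            break                         # prune the remaining children
    return value

tree = [[2, 3], [5, 9], [0, 1]]           # illustrative depth-2 game tree
print(alphabeta(tree, True))              # 5, with the [0, 1] branch cut off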
  • 115. 115
  • 116. 116
  • 117. 117
  • 118. Time Complexity: • Worst ordering: In some instances, alpha-beta pruning does not prune any of the tree's leaves and functions identically to the minimax algorithm. Because of the overhead of maintaining the alpha and beta parameters, it can even take more time in this scenario; this is known as worst ordering. It occurs when the optimal move is on the right side of the tree. For such an order, the time complexity is O(b^m). • Ideal ordering: The ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. We apply DFS so that it searches the left side first and can go twice as deep as minimax in the same amount of time. The complexity in ideal ordering is O(b^(m/2)). 118
  • 119. Stochastic Games:  Stochastic games were introduced by Lloyd Shapley in the early 1950s. A stochastic game is a dynamic game with probabilistic transitions played by one or more players. The game is played in a sequence of stages; at the beginning of each stage, the game is in a certain state.  Applications- Stochastic games have applications in economics, evolutionary biology and computer networks. They are generalizations of repeated games, which correspond to the special case where there is only one state. 119
  • 120.  Many games are unpredictable in nature, such as those involving a dice throw. These games are called Stochastic Games: the outcome of the game depends on skill as well as luck.  In stochastic games, the winner is decided not only by skill but also by luck.  Examples:  Gambling games  Golf ball games  Backgammon, etc. 120
  • 121. Stochastic Search Algorithms:  Stochastic search algorithms are designed for problems with inherent random noise, or for deterministic problems solved by injected randomness.  Desired properties of search methods are:  a high probability of finding near-optimal solutions (effectiveness)  short processing time (efficiency) • These goals are usually conflicting; a compromise is offered by stochastic techniques, where certain steps are based on random choice. 121
  • 122. Why Stochastic Search?  Stochastic search is the method of choice for solving many hard combinatorial problems.  The ability to solve hard combinatorial problems has increased significantly, for example:  solution of large propositional satisfiability problems  solution of large travelling salesman problems  good results in new application areas. 122

Editor's Notes

  1. Implicit: a rule that specifies how each element in a tuple should relate to the others. Explicit: a rule that restricts each element to be chosen from a given set.
  2. Game Playing is an important domain of artificial intelligence. Games don’t require much knowledge; the only knowledge we need to provide is the rules, legal moves and the conditions of winning or losing the game.