Control Strategies
Control Strategy in Artificial Intelligence
A control strategy is a technique that determines which rule should be applied next while searching for the solution of a problem within the problem space.
It helps us decide which rule to apply next without getting stuck at any point.
Characteristics of Control Strategies
A good control strategy has two main characteristics:
Control strategy should cause motion
Control strategy should be systematic
Control Strategy should cause Motion
Each rule or strategy applied should cause motion, because if there is no motion, the control strategy will never lead to a solution. Motion refers to a change of state: if the state does not change, there is no movement from the initial state and we will never solve the problem.
Control Strategy should be Systematic
Although the strategy applied should create motion, if we do not follow some systematic strategy we are likely to reach the same state a number of times before reaching the solution, which increases the number of steps. Taking care of only the first requirement, we may go through particular useless sequences of operators several times. Requiring the control strategy to be systematic implies a need for global motion as well as local motion.
In which we see how an agent can find a sequence of actions that achieves its goals when no single action will do.
The method of solving a problem through AI involves defining the search space, deciding the start and goal states, and then finding a path from the start state to the goal state through the search space.
State space search is a process used in the field of computer science, including artificial intelligence (AI), in which successive configurations or states of an instance are considered, with the goal of finding a goal state with a desired property.
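This search process can be sketched in a few lines. The toy problem below (reach 11 from 1 using the moves "add 1" and "double") is an illustrative assumption, not from the text; breadth-first search is one simple uninformed way to explore the state space:

```python
from collections import deque

def state_space_search(start, is_goal, successors):
    """Breadth-first search from a start state to a goal state,
    returning the path of states, or None if no goal is reachable."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in explored:   # avoid revisiting states (systematic!)
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy problem: from 1, reach 11 using the moves "add 1" or "double".
path = state_space_search(1, lambda s: s == 11,
                          lambda s: [s + 1, s * 2])
print(path)  # [1, 2, 4, 5, 10, 11]: the shortest move sequence
```

The `explored` set is what makes the search systematic in the sense discussed above: no state is expanded twice.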
Artificial Intelligence: Introduction, Typical Applications. State Space Search: Depth Bounded DFS, Depth First Iterative Deepening. Heuristic Search: Heuristic Functions, Best First Search, Hill Climbing, Variable Neighborhood Descent, Beam Search, Tabu Search. Optimal Search: A* Algorithm, Iterative Deepening A*, Recursive Best First Search, Pruning the CLOSED and OPEN Lists.
Minimax Algorithm in Artificial Intelligence
The minimax algorithm is a recursive, backtracking algorithm used in decision-making and game theory; it uses recursion to search through the game tree. Minimax is mostly used for game playing in AI, in two-player games such as chess, checkers, tic-tac-toe, and Go. The algorithm computes the minimax decision for the current state.
Informed search algorithms are commonly used in various AI applications, including pathfinding, puzzle solving, robotics, and game playing. They are particularly effective when the search space is large and the goal state is not immediately visible. By intelligently guiding the search based on heuristic estimates, informed search algorithms can significantly reduce the search effort and find solutions more efficiently than uninformed search algorithms like depth-first search or breadth-first search.
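As a minimal sketch of how a heuristic guides such a search, here is greedy best-first search on an obstacle-free grid, using Manhattan distance as the heuristic (the grid problem and function names are illustrative assumptions, not from the text):

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Greedy best-first search: always expand the frontier node whose
    heuristic value h(n) estimates it to be closest to the goal."""
    frontier = [(h(start), start, [start])]
    explored = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in neighbors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None

# 4-connected grid with no obstacles; Manhattan distance as heuristic.
def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

goal = (2, 2)
path = greedy_best_first((0, 0), goal, neighbors,
                         lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
print(len(path) - 1)  # 4: moves needed on an obstacle-free grid
```

Because the heuristic always points toward the goal here, the search expands far fewer nodes than breadth-first search would on a large grid.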
1. Chapter 6: Adversarial Search
In which we examine the problems that arise when we try to plan
ahead in a world where other agents are planning against us.
2. CS 420: Artificial Intelligence 2
Adversarial Search
Multi-agent environment: any given agent needs to consider the actions of other agents and how they affect its own welfare. This introduces possible contingencies into the agent's problem-solving process. Environments may be cooperative or competitive.
Adversarial search problems: agents have conflicting goals -- games.
3. Games vs. Search Problems
"Unpredictable" opponent: we must specify a move for every possible opponent reply.
Time limits: we are unlikely to find the goal, so we must approximate.
4. AI and Games
In AI, "games" have a special format: deterministic, turn-taking, 2-player, zero-sum games of perfect information.
Zero-sum describes a situation in which a participant's gain or loss is exactly balanced by the losses or gains of the other participant(s). Equivalently, the total payoff to all players is the same for every instance of the game (constant sum).
Go! 围棋
5. Game Problem Formulation
A game with 2 players (MAX and MIN; MAX moves first, turn-taking) can be defined as a search problem with:
initial state: board position
player: player to move
successor function: a list of legal (move, state) pairs
goal test: whether the game is over -- terminal states
utility function: gives a numeric value for the terminal states (win, loss, draw)
Game tree = initial state + legal moves
6. Game Tree (2-player, deterministic)
[Figure: partial game tree, with a utility value at each terminal state]
7. Optimal Strategies
MAX must find a contingent strategy, specifying MAX's move in the initial state and in the states resulting from every possible response by MIN.
E.g., a 2-ply game (the tree is one move deep, consisting of two half-moves, each of which is called a ply).
[Figure: 2-ply tree with leaf utilities 3 12 8 | 2 4 6 | 14 5 2; the MIN nodes back up 3, 2, 2 and the MAX root backs up 3]
8. Minimax Value
Perfect play for a deterministic game assumes both players play optimally.
Idea: choose the move to the position with the highest minimax value = the best achievable payoff against best play.
MINIMAX-VALUE(n) =
  Utility(n)                                          if n is a terminal state
  max over s in Successors(n) of MINIMAX-VALUE(s)     if n is a MAX node
  min over s in Successors(n) of MINIMAX-VALUE(s)     if n is a MIN node
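The recursive definition above translates almost line for line into code. A minimal sketch, applied to the 2-ply tree from the earlier slide (representing the game tree as nested lists, with ints as terminal utilities, is an assumption for illustration):

```python
def minimax_value(state, is_terminal, utility, successors, is_max):
    """MINIMAX-VALUE(n): utility at terminal states, max over successors
    at MAX nodes, min over successors at MIN nodes."""
    if is_terminal(state):
        return utility(state)
    values = [minimax_value(s, is_terminal, utility, successors, not is_max)
              for s in successors(state)]
    return max(values) if is_max else min(values)

# The 2-ply example from the slides: three MIN nodes with leaf utilities
# (3, 12, 8), (2, 4, 6), (14, 5, 2); lists are internal nodes, ints leaves.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
value = minimax_value(tree, is_terminal=lambda s: isinstance(s, int),
                      utility=lambda s: s, successors=lambda s: s,
                      is_max=True)
print(value)  # 3: MAX picks the left move, whose MIN value is 3
```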
9. A Partial Game Tree for Tic-Tac-Toe
[Figure: partial tic-tac-toe game tree, alternating X's turn (MAX) and O's turn (MIN); by Yosen Lin]
11. Minimax with Tic-Tac-Toe
Work on board.
Walk through a real tic-tac-toe program.
12. Analysis of Minimax
Optimal play for MAX assumes that MIN also plays optimally; what if MIN does not play optimally?
A complete depth-first search? Yes.
Time complexity? O(b^m).
Space complexity? O(bm) (depth-first exploration).
For chess, b ≈ 35 and m ≈ 100 for "reasonable" games, so an exact solution is completely infeasible.
13. Optimal Decisions for Multiplayer Games
Extend the minimax idea to multiplayer: backed-up values are vectors of utilities, one per player (here A, B, C).
[Figure: 3-player tree with A to move at the root, then B, then C; leaves (1,2,6) (4,2,3) (6,1,2) (7,4,1) (5,1,1) (1,5,2) (7,7,1) (5,4,5); the C nodes back up (1,2,6) (6,1,2) (1,5,2) (5,4,5), the B nodes back up (1,2,6) (1,5,2), and A chooses (1,2,6)]
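The multiplayer extension can be sketched as follows; the nested-list tree encoding and function names are illustrative assumptions, with the slide's leaf vectors as data:

```python
def multiplayer_value(state, to_move, n_players, is_terminal, utility,
                      successors):
    """Multiplayer minimax: backed-up values are vectors of utilities, and
    the player to move picks the successor maximizing its own component.
    Ties are broken in favor of the first successor."""
    if is_terminal(state):
        return utility(state)  # a tuple: one utility per player
    children = [multiplayer_value(s, (to_move + 1) % n_players, n_players,
                                  is_terminal, utility, successors)
                for s in successors(state)]
    return max(children, key=lambda v: v[to_move])

# The slide's 3-player tree: A moves at the root, then B, then C; leaves
# carry (utility_A, utility_B, utility_C). Lists are internal nodes.
tree = [
    [[(1, 2, 6), (4, 2, 3)], [(6, 1, 2), (7, 4, 1)]],
    [[(5, 1, 1), (1, 5, 2)], [(7, 7, 1), (5, 4, 5)]],
]
value = multiplayer_value(tree, 0, 3, lambda s: isinstance(s, tuple),
                          lambda s: s, lambda s: s)
print(value)  # (1, 2, 6): A's two options tie on A's utility, first wins
```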
14. Interesting Thoughts
Multiplayer games usually involve alliances, which can be a natural consequence of optimal strategies.
If the game is non-zero-sum, collaboration can also occur. For example, given a terminal state with utilities <Va = 1000, Vb = 1000>, the optimal strategy is for both players to do everything possible to reach this state.
15. α-β Pruning
The number of game states examined by minimax search is exponential in the number of moves.
Is it possible to compute the correct minimax decision without looking at every node in the game tree?
We need to prune away branches that cannot possibly influence the final decision.
23. A Simple Formula
MINIMAX-VALUE(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
                    = max(3, min(2, x, y), 2)
                    = max(3, z, 2)    where z = min(2, x, y) ≤ 2
                    = 3
The value of the root, and hence the minimax decision, are independent of the values of the pruned leaves x and y.
25. Basic Idea of α-β
Consider a node n somewhere in the tree such that Player has a choice of moving to n. If Player has a better choice m, either at the parent of n or at any choice point further up, then n will never be reached in actual play.
α-β pruning gets its name from the two parameters that describe bounds on the backed-up values.
26. Definitions of α and β
α = the value of the best (highest-value) choice we have found so far at any choice point along the path for MAX.
β = the value of the best (lowest-value) choice we have found so far at any choice point along the path for MIN.
α-β search updates the values of α and β as it goes along, and prunes the remaining branches at a node as soon as the value of the current node is known to be worse than the current α or β for MAX or MIN respectively.
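A sketch of α-β search following these definitions (the nested-list tree encoding is an illustrative assumption): α is raised at MAX nodes, β is lowered at MIN nodes, and a branch is abandoned as soon as α ≥ β:

```python
def alphabeta(state, is_terminal, utility, successors, is_max,
              alpha=float("-inf"), beta=float("inf")):
    """Minimax with alpha-beta pruning: alpha holds MAX's best option so
    far along the path, beta holds MIN's; a branch is cut as soon as the
    current node's value cannot influence the final decision."""
    if is_terminal(state):
        return utility(state)
    if is_max:
        v = float("-inf")
        for s in successors(state):
            v = max(v, alphabeta(s, is_terminal, utility, successors,
                                 False, alpha, beta))
            alpha = max(alpha, v)
            if alpha >= beta:   # MIN above will never let play reach here
                break
        return v
    v = float("inf")
    for s in successors(state):
        v = min(v, alphabeta(s, is_terminal, utility, successors,
                             True, alpha, beta))
        beta = min(beta, v)
        if beta <= alpha:       # MAX above already has a better choice
            break
    return v

# Same 2-ply tree as the minimax slides; pruning does not change the value.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
best = alphabeta(tree, lambda s: isinstance(s, int), lambda s: s,
                 lambda s: s, True)
print(best)  # 3
```

On this tree the second MIN node is abandoned after seeing the leaf 2, since 2 ≤ α = 3 already rules it out.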
28. Now, Trace the Behavior
[Figure: the 2-ply tree with leaves 3 12 8 | 2 4 6 | 14 5 2; α-β backs up 3, 2, 2 at the MIN nodes and 3 at the MAX root]
29. Analysis of α-β
Pruning does not affect the final result.
The effectiveness of alpha-beta pruning is highly dependent on the order of successors; it might be worthwhile to try to examine first the successors that are likely to be best.
With "perfect ordering," time complexity = O(b^(m/2)): the effective branching factor becomes √b (for chess, 6 instead of 35), so α-β can look ahead roughly twice as far as minimax in the same amount of time.
Ordering in chess: captures, threats, forward moves, and then backward moves.
30. Random Ordering?
If successors are examined in random order rather than best-first, the complexity will be roughly O(b^(3m/4)).
Adding dynamic move-ordering schemes, such as trying first the moves that were found to be best last time, brings us close to the theoretical limit. The best moves are often called killer moves (the killer move heuristic).
31. Dealing with Repeated States
In games, repeated states occur frequently because of transpositions: different permutations of the move sequence end up in the same position, e.g., [a1, b1, a2, b2] vs. [a1, b2, a2, b1].
It is worthwhile to store the evaluation of this position in a hash table the first time it is encountered, similar to the "explored set" in graph search.
Tradeoff: the transposition table can become too big, so we must decide which entries to keep and which to discard.
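A minimal sketch of a transposition table, using memoization on a tiny Nim-style game (the game itself is an illustrative assumption; `functools.lru_cache` plays the role of the hash table):

```python
from functools import lru_cache

# Tiny Nim-style game: remove 1 or 2 stones; the player who takes the last
# stone wins (+1 if that is MAX, -1 if it is MIN). Different move orders
# reach the same pile size -- a transposition -- so caching values keyed
# on (pile, is_max) avoids re-searching repeated states.

@lru_cache(maxsize=None)            # acts as the transposition table
def nim_value(pile, is_max):
    if pile == 0:
        return -1 if is_max else 1  # the previous player took the last stone
    moves = [nim_value(pile - k, not is_max) for k in (1, 2) if k <= pile]
    return max(moves) if is_max else min(moves)

print(nim_value(9, True))           # -1: a pile of 9 is lost for the mover
print(nim_value.cache_info().hits)  # > 0: transpositions were reused
```

A real game engine keeps a bounded table with a replacement policy instead of an unbounded cache, which is exactly the keep-or-discard tradeoff above.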
32. Imperfect, Real-Time Decisions
Minimax generates the entire game search space. Alpha-beta prunes a large part of it, but still needs to search all the way to terminal states. However, moves must be made in a reasonable amount of time.
Standard approach: turn non-terminal nodes into terminal leaves:
cutoff test: replaces the terminal test, e.g., a depth limit
heuristic evaluation function: estimates the desirability or utility of a position
33. Evaluation Functions
The performance of a game-playing program depends on the quality of its evaluation function, which should:
order the terminal states the same way as the true utility function
produce values for nonterminal states that correlate with the actual chance of winning
not take too long to compute!
34. Playing Chess: Experience Is Important
Most evaluation functions work by calculating various features of the state; different features define various categories of states.
Consider the category of two pawns vs. one pawn, and suppose experience suggests that 72% of such states lead to a win (+1), 20% to a loss (0), and 8% to a draw (1/2). Then the expected value = 0.72 × 1 + 0.20 × 0 + 0.08 × 1/2 = 0.76.
Too many categories and too much experience are required.
35. Introductory chess books give an approximate material value for each piece: each pawn is worth 1, a knight or bishop is worth 3, a rook 5, and the queen 9. Other features such as "good pawn structure" and "king safety" are worth half a pawn.
For chess, the evaluation is typically a linear weighted sum of features:
Eval(s) = w1 f1(s) + w2 f2(s) + ... + wn fn(s)
where wi is a weight and fi is a feature of the position, e.g., fi is the number of each kind of piece and wi is the value of that piece.
Linear assumption: the contribution of each feature is independent of the values of the other features. There are also non-linear combinations of features.
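A sketch of such a linear weighted evaluation, using only the material features above (the board encoding as piece-count dictionaries is an illustrative assumption):

```python
# Material-count evaluation following the slide's linear form
# Eval(s) = w1*f1(s) + ... + wn*fn(s): each feature fi is the difference
# in the number of a given piece type, weighted by its material value wi.
WEIGHTS = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_eval(white_counts, black_counts):
    """Linear weighted sum of material features, positive = good for White."""
    return sum(w * (white_counts.get(p, 0) - black_counts.get(p, 0))
               for p, w in WEIGHTS.items())

# White is up a rook for a knight ("the exchange"): +5 - 3 = +2.
white = {"pawn": 8, "knight": 2, "bishop": 2, "rook": 2, "queen": 1}
black = {"pawn": 8, "knight": 3, "bishop": 2, "rook": 1, "queen": 1}
print(material_eval(white, black))  # 2
```

Adding features like pawn structure or king safety just means adding more (wi, fi) pairs to the sum; the linear form stays the same.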
36. Cutting off Search
Modify alpha-beta search so that Terminal? is replaced by Cutoff? and Utility is replaced by Eval:
if Cutoff-Test(state, depth) then return Eval(state)
The depth is chosen such that the amount of time used will not exceed what the rules of the game allow. Iterative deepening search can also be applied: when time runs out, return the move selected by the deepest completed search.
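A sketch of α-β with this substitution (the nested-list tree, cutoff test, and deliberately crude Eval below are illustrative assumptions):

```python
def h_alphabeta(state, depth, cutoff, eval_fn, successors, is_max,
                alpha=float("-inf"), beta=float("inf")):
    """Alpha-beta with the slide's substitution: Cutoff-Test(state, depth)
    replaces the terminal test and Eval(state) replaces Utility."""
    if cutoff(state, depth):
        return eval_fn(state)
    v = float("-inf") if is_max else float("inf")
    for s in successors(state):
        w = h_alphabeta(s, depth + 1, cutoff, eval_fn, successors,
                        not is_max, alpha, beta)
        if is_max:
            v = max(v, w)
            alpha = max(alpha, v)
        else:
            v = min(v, w)
            beta = min(beta, v)
        if alpha >= beta:       # prune: this node can no longer matter
            break
    return v

# Toy use: the slides' 2-ply tree, cut off at depth 1 and "evaluated"
# by each node's first leaf -- a deliberately crude Eval.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
cutoff = lambda s, d: d >= 1 or isinstance(s, int)
eval_fn = lambda s: s if isinstance(s, int) else s[0]
result = h_alphabeta(tree, 0, cutoff, eval_fn, lambda s: s, True)
print(result)  # 14: the crude Eval prefers a different move than minimax's 3
```

Note that the cutoff answer (14) differs from the true minimax value (3): a shallow cutoff with a poor Eval can pick the wrong move, which is exactly why evaluation quality matters.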
37. Result?
Does it work in practice? Consider chess: assume we can generate and evaluate a million nodes per second; this allows us to search roughly 200 million nodes per move under standard time control (3 minutes per move). With minimax this reaches only about 5 ply, but with alpha-beta about 10 ply.
4-ply lookahead is a hopeless chess player!
4-ply ≈ human novice
8-ply ≈ typical PC, human master
12-ply ≈ Deep Blue, Kasparov
38. Deterministic Games in Practice
Checkers: Chinook ended the 40-year reign of human world champion Marion Tinsley in 1994. It used a precomputed endgame database defining perfect play for all positions involving 8 or fewer pieces on the board, a total of 444 billion positions.
Chess: Deep Blue defeated human world champion Garry Kasparov in a six-game match in 1997. Deep Blue searched 200 million positions per second, used a very sophisticated evaluation function, and used undisclosed methods for extending some lines of search up to 40 ply.
Othello: human champions refuse to compete against computers, who are too good.
Go: human champions refuse to compete against computers, who are too bad. In Go, b > 300, so most programs use pattern knowledge bases to suggest plausible moves.
39. Summary
Games are fun to work on! They illustrate several important points about AI, e.g., perfection is unattainable, so we must approximate.
40. Exercise 1
Prove that: for every game tree, the utility obtained by MAX using minimax decisions against a suboptimal MIN will never be lower than the utility obtained playing against an optimal MIN.
41. Exercise 2
Consider the following game:
[Figure: a row of four squares numbered 1 to 4, with token A on square 1 and token B on square 4]
Draw the complete game tree, using the following conventions:
State: (Sa, Sb), where Sa and Sb denote the token locations
Identify each terminal state in a square box and assign it a value
Put loop states in double square boxes
Since it is not clear how to assign values to loop states, annotate each with a "?"
42. Answer to Ex1
Consider a MIN node whose children are terminal nodes. If MIN plays suboptimally, then the value of the node is greater than or equal to the value it would have if MIN played optimally. Hence, the value of the MAX node that is the MIN node's parent can only be increased. This argument can be extended by a simple induction all the way to the root.
43. Game Tree for Ex2
Think about how to get the values.