This document provides an introduction to using artificial intelligence for games. It discusses how AI can be used to create challenging opponents or helpful allies that act autonomously based on their programming. It notes that while human-level general intelligence is difficult to achieve, AI can perform well in narrow contexts like chess. For games, the AI must be intentionally flawed to ensure a fun challenge, yet cannot have obvious weaknesses. It must also be able to perform calculations and make decisions in real time to interact with the game. The document then outlines some common AI techniques used in games, including MinMax search trees and finite state machines to control agent behavior. It provides pseudocode examples of the MinMax algorithm and discusses enhancements like Alpha-Beta pruning.
2. Introduction to Artificial Intelligence
(AI)
• Many applications for AI
– Computer vision, natural language processing, speech
recognition, search …
• But games are some of the more interesting
• Opponents that are challenging, or allies that are
helpful
– Unit that is credited with acting on own
• Human-level intelligence too hard
– But under narrow circumstances can do pretty well
(ex: chess and Deep Blue)
– For many games, often constrained (by game rules)
• Artificial Intelligence (around in CS for some
time)
3. AI for CS different than AI for Games
• Must be smart, but purposely flawed
– Lose in a fun, challenging way
• No unintended weaknesses
– No “golden path” to defeat
– Must not look dumb
• Must perform in real time (CPU)
• Configurable by designers
– Not hard coded by programmer
• “Amount” and type of AI for game can vary
– RTS needs global strategy, FPS needs modeling of
individual units at “footstep” level
– RTS most demanding: 3 full-time AI programmers
– Puzzle, street fighting: 1 part-time AI programmer
– All of project 2.
4. Outline
• Introduction (done)
• MinMax (next)
• Agents
• Finite State Machines
• Common AI Techniques
• Promising AI Techniques
5. MinMax - Links
• Minimax Game Trees
• Minimax Explained
• Min-Max Search
• Wiki
• (See Project 2 Web page)
6. MinMax - Overview
• MinMax the heart of almost every computer board
game
• Applies to games where:
– Players take turns
– Have perfect information
• Chess, Checkers, Tactics
• But can work for games without perfect
information or with chance
– Poker, Monopoly, Dice
• Can work in real-time (ie- not turn based) with
timer (iterative deepening, later)
7. MinMax - Overview
• Search tree
– Squares represent decision states (ie- after a move)
– Branches are decisions (ie- the move)
– Start at root
– Nodes at end are leaf nodes
– Ex: Tic-Tac-Toe (symmetrical positions removed)
• Unlike binary trees, nodes can have any number of children
– Depends on the game situation
• Levels usually called plies (a ply is one level)
– Each ply is where "turn" switches to other player
• Players called Min and Max (next)
8. MinMax - Algorithm
• Named MinMax because of algorithm behind data
structure
• Assign points to the outcome of a game
– Ex: Tic-Tac-Toe: X wins, value of 1. O wins, value -1.
• Max (X) tries to maximize point value, while Min
(O) tries to minimize point value
• Assume both players play to best of their ability
– Always make a move to minimize or maximize points
• So, in choosing, Max will choose best move to get
highest points, assuming Min will choose best move
to get lowest points
9. MinMax – First Example
• Max’s turn
• Would like the “9” points (the
maximum)
• But if choose left branch, Min
will choose move to get 3
⇒ left branch has a value of 3
• If choose right, Min can
choose any one of 5, 6 or 7
(will choose 5, the minimum)
⇒ right branch has a value of 5
• Right branch is largest (the
maximum) so choose that
move
[Tree diagram: Max at root; left Min branch has leaves 3, 9, 4 and is
worth min(3, 9, 4) = 3; right Min branch has leaves 6, 7, 5 and is
worth min(6, 7, 5) = 5; root value 5]
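The first example can be replayed in code. Below is a minimal Python sketch of MinMax; the nested-list tree encoding is an assumption for illustration, not the slides' notation:

```python
def minmax(node, maximizing):
    """Plain MinMax over a nested-list game tree; leaves are point values."""
    if isinstance(node, int):
        return node                    # leaf: the outcome's points
    values = [minmax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Max to move at the root; Min chooses one ply down.
tree = [[3, 9, 4], [6, 7, 5]]
print(minmax(tree, True))   # left worth min(3,9,4)=3, right min(6,7,5)=5 -> 5
```

Max compares the two branch values (3 and 5) and takes the right branch, matching the example above.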
10. MinMax – Second Example
• Max’s turn
• Circles represent Max, Squares represent Min
• Values inside represent the value assigned by the MinMax algorithm
• Red arrows represent the chosen move
• Numbers on left represent tree depth
• Blue arrow is the chosen move
[Tree diagram: alternating Max and Min plies, with the chosen move at
each node marked by an arrow]
11. MinMax and Chess
• With full tree, can determine best possible move
• However, full tree impossible for some games! Ex: Chess
– At a given time, chess has ~ 35 legal moves. Exponential
growth:
• 35 at one ply, 35² = 1225 at two plies … 35⁶ ≈ 2 billion and
35¹⁰ ≈ 2 quadrillion
– Games can last 40 moves (or more), so 35⁴⁰ … Stars in
universe: ~ 2²⁸
• For large games (Chess) can’t see end of the game. Must
estimate winning or losing from top portion
– Evaluate() function to guess end given board
– A numeric value, much smaller than victory (ie- Checkmate
for Max will be one million, for Min minus one million)
• So, computer’s strength at chess comes from:
– How deep can search
– How well can evaluate a board position
– (In some sense, like a human – a chess grand master can
evaluate board better and can look further ahead)
12. MinMax – Pseudo Code (1 of 3)
int MinMax(int depth) {
// White is Max, Black is Min
if (turn == WHITE)
return Max(depth);
else
return Min(depth);
}
• Then, call with:
value = MinMax(5); // search 5 plies
13. MinMax – Pseudo Code (2 of 3)
int Max(int depth) {
int best = -INFINITY; // first move is best
if (depth == 0)
return Evaluate();
GenerateLegalMoves();
while (MovesLeft()) {
MakeNextMove();
val = Min(depth - 1); // Min’s turn next
UnMakeMove();
if (val > best)
best = val;
}
return best;
}
14. MinMax – Pseudo Code (3 of 3)
int Min(int depth) {
int best = INFINITY; // different than MAX
if (depth == 0)
return Evaluate();
GenerateLegalMoves();
while (MovesLeft()) {
MakeNextMove();
val = Max(depth - 1); // Max’s turn next
UnMakeMove();
if (val < best) // different than MAX
best = val;
}
return best;
}
15. MinMax - Notes on Pseudo Code
• Dual-recursive call each other until bottom out
(depth of zero is reached)
• Try tracing with depth = 1
– Essentially, try each move out, choose best
• Need to modify to return best move. Implement:
– When store “best”, also store “move”
– Use global variable
– Pass in move via reference
– Use object/structure with “best” + “move”
• Since Max() and Min() are basically opposites
(zero-sum game), can make code shorter with
simple flip
– Called NegaMax
16. MinMax – NegaMax Pseudo Code
int NegaMax(int depth) {
int best = -INFINITY;
if (depth == 0)
return Evaluate();
GenerateLegalMoves();
while (MovesLeft()) {
MakeNextMove();
val = -1 * NegaMax(depth-1); // Note the -1
UnMakeMove();
if (val > best) // Still pick largest
best = val;
}
return best;
}
• Note, the -1 causes Min to pick smallest, Max biggest
• Ex: values 4, 5, 6: Max picks ‘6’, while Min maximizes over the negated -4, -5, -6 and picks ‘-4’, i.e. ‘4’
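The NegaMax form can be checked against the same example tree. This Python sketch mirrors the pseudocode above; the nested-list encoding and the `color` convention (leaves scored from Max's view) are assumptions:

```python
def negamax(node, color):
    """color = +1 when Max to move, -1 when Min; leaves scored for Max."""
    if isinstance(node, int):
        return color * node            # score from side-to-move's view
    best = float("-inf")
    for child in node:
        val = -negamax(child, -color)  # the -1 flips Min into Max
        best = max(best, val)          # still pick largest
    return best

tree = [[3, 9, 4], [6, 7, 5]]
print(negamax(tree, +1))   # 5, same result as plain MinMax
```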
17. MinMax – AlphaBeta Pruning
• MinMax searches entire tree, even if in some cases the rest
can be ignored
• Example – Enemy lost bet. Owes you one thing from bag.
You choose bag, but he chooses thing. Go through bags one
item at a time.
– First bag: Sox tickets, sandwich, $20
• He’ll choose sandwich
– Second bag: Dead fish, …
• He’ll choose fish. Doesn’t matter if rest is car, $500,
Yankee’s tickets … Don’t need to look further. Can prune.
• In general, stop evaluating a move when it is found to be worse
than a previously examined move
⇒ Does not benefit the player to play that move, so it need
not be evaluated any further
⇒ Saves processing time without affecting the final result
18. MinMax – AlphaBeta Pruning Example
• From Max’s point of view, 1 is already lower
than 4 or 5, so no need to evaluate 2 and 3
(bottom right) ⇒ Prune
19. MinMax – AlphaBeta Pruning Idea
• Two scores passed around in search
– Alpha – best score by some means
• Anything less than this is no use (can be pruned) since
we can already get alpha
• Minimum score Max will get
• Initially, negative infinity
– Beta – worst-case scenario for opponent
• Anything higher than this won’t be used by opponent
• Maximum score Min will get
• Initially, infinity
• As recursion progresses, the "window" of Alpha-Beta
becomes smaller
– Beta < Alpha ⇒ current position not result of best
play and can be pruned
20. MinMax – AlphaBeta Pseudo Code
int AlphaBeta(int depth, int alpha, int beta) {
if (depth <= 0)
return Evaluate();
GenerateLegalMoves();
while (MovesLeft()) {
MakeNextMove();
val = -1 * AlphaBeta(depth-1, -beta, -alpha);
UnMakeMove();
if (val >= beta)
return val;
if (val > alpha)
alpha = val;
}
return alpha;
}
• Note, beta and alpha are reversed for subsequent calls
• Note, the -1 for beta and alpha, too
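A runnable sketch of the AlphaBeta pseudocode, counting leaf evaluations on a slide-18 style tree (Min branches [4, 5] and [1, 2, 3]) to show the prune; the nested-list encoding and the counter are illustrative assumptions:

```python
def alphabeta(node, alpha, beta, color, stats):
    """NegaMax with Alpha-Beta pruning; stats counts leaves evaluated."""
    if isinstance(node, int):
        stats["leaves"] += 1
        return color * node
    for child in node:
        # Window is negated and swapped for the opponent's turn.
        val = -alphabeta(child, -beta, -alpha, -color, stats)
        if val >= beta:
            return val        # opponent won't allow this line: prune
        if val > alpha:
            alpha = val
    return alpha

tree = [[4, 5], [1, 2, 3]]    # left branch worth 4; right starts with a 1
stats = {"leaves": 0}
print(alphabeta(tree, float("-inf"), float("inf"), +1, stats))  # 4
print(stats["leaves"])        # 3: leaves 2 and 3 are pruned
```

Plain MinMax would visit all 5 leaves; once the 1 is seen, the rest of the right branch cannot matter.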
21. MinMax – AlphaBeta Notes
• Benefits heavily dependent upon order
searched
– If always start at worst, never prune
•Ex: consider previous with node 1 first
(worst)
– If always start at best, branching factor reduced to
approximately sqrt(branching factor)
•Ex: consider previous with 5 first (best)
• For Chess:
– If ~35 choices per ply, at best can improve
from 35 to 6
⇒ Allows search twice as deep
22. MinMax – Notes
• Chess has many forced tactical situations (ie- taken knight,
better take other knight)
– MinMax can leave hanging (at tree depth)
– So, when done, check for captures only
• Time to search can vary (depending upon Evaluate() and
branches and pruning)
– Instead, search 1 ply. Check time. If enough, search 2
plies. Repeat. Called iterative deepening
depth = 1;
while (1) {
val = AlphaBeta(depth, -INF, INF);
if (timeOut()) break;
depth++;
}
– For enhancement, can pass in best set of moves (line) seen
last iteration (principal variation)
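Iterative deepening can be sketched around a depth-limited search. The tree encoding, the stand-in heuristic, and the time budget below are illustrative assumptions:

```python
import time

def evaluate(node):
    """Stand-in Evaluate(): score of the first leaf below this node."""
    return node if isinstance(node, int) else evaluate(node[0])

def negamax(node, depth, color):
    """Depth-limited NegaMax that falls back to evaluate() at depth 0."""
    if isinstance(node, int) or depth == 0:
        return color * evaluate(node)
    return max(-negamax(child, depth - 1, -color) for child in node)

def iterative_deepening(tree, budget=0.05, max_depth=8):
    """Search 1 ply, check the clock, then 2 plies, and so on."""
    deadline = time.monotonic() + budget
    best, depth = None, 1
    while depth <= max_depth:
        best = negamax(tree, depth, +1)      # depth-limited search
        if time.monotonic() >= deadline:     # out of time: keep last result
            break
        depth += 1                           # the increment the slide's loop omits
    return best

tree = [[3, 9, 4], [6, 7, 5]]
print(iterative_deepening(tree))   # 5 once a full-depth pass completes
```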
23. MinMax – Evaluate()
• Checkmate – worth more than rest combined
• Typically, use weighted function:
– c1*material + c2*mobility + c3*king
safety + c4*center control + ...
– Simplest is point value for material
• pawn 1, knight 3, bishop 3, rook 5, queen 9
• All other stuff worth 1.5 pawns (ie- can ignore most
everything else)
• What about a draw?
– Can be good (ie- if opponent is strong)
– Can be bad (ie- if opponent is weak)
– Adjust with contempt factor
• Makes a draw (0) slightly lower (play to win)
24. Outline
• Introduction (done)
• MinMax (done)
• Agents (next)
• Finite State Machines
• Common AI Techniques
• Promising AI Techniques
25. Game Agents
• Most AI focuses around game agent
– think of agent as NPC, enemy, ally or
neutral
• Loops through: sense-think-act cycle
– Acting is event specific, so talk about sense
and think first, then a bit on act
Sense Think Act
26. Game Agents – Sensing (1 of 2)
• Gather current world state: barriers, opponents,
objects, …
• Needs limitations: avoid “cheating” by looking at
game data
– Typically, same constraints as player (vision, hearing
range, etc.)
• Vision
– Can be quite complicated (CPU intensive) to test
visibility (ie- if only part of an object visible)
– Compute vector to each object
• Check magnitude (ie- is it too far away?)
• Check angle (dot product) (ie- within 120° viewing
angle?)
• Check if obscured. Most expensive, so do last.
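The vision tests can be sketched in order of cost, cheapest first (the occlusion/raycast test is omitted here); the 2-D setup, range, and field of view are illustrative assumptions:

```python
import math

def can_see(agent_pos, agent_dir, target_pos, max_range=50.0, fov_deg=120.0):
    """Cheap-to-expensive vision tests: range first, then FOV via dot product.
    agent_dir is assumed to be a normalized 2-D facing vector."""
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:                 # magnitude check: too far away?
        return False
    if dist == 0:
        return True
    # Angle check: dot product of normalized vectors vs. cos(half FOV).
    dot = (dx * agent_dir[0] + dy * agent_dir[1]) / dist
    return dot >= math.cos(math.radians(fov_deg / 2))

print(can_see((0, 0), (1, 0), (10, 0)))    # True: dead ahead, in range
print(can_see((0, 0), (1, 0), (-10, 0)))   # False: directly behind
print(can_see((0, 0), (1, 0), (100, 0)))   # False: beyond vision range
```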
27. Game Agents – Sensing (2 of 2)
• Hearing
– Ex- tip-toe past, enemy doesn’t hear, but if run past,
enemy hears (stealth games, like Thief)
– Implement as event-driven
• When player performs action, notify agents within range
– Rather than sound reflection (complicated) usually
distance within bounded area
• Can enhance with listen attributes by agent (if agent is
“keen eared” or paying attention)
• Communication
– Model sensing data from other agents
– Can be instant (ie- connected by radio)
– Or via hearing (ie- shout)
• Reaction times
– Sensing may take some time (ie- don’t have agent react
to alarm instantly, seems unrealistic)
– Build in delay. Implement with simple timer.
28. Game Agents – Thinking (1 of 3)
• Evaluate information and make decision
• As simple or elaborate as required
• Generally, two ways:
1. Pre-coded expert knowledge
• Typically hand-crafted “if-then” rules +
“randomness” to make unpredictable
2. Search algorithm for best (optimal)
solution
• Ex- MinMax
29. Game Agents – Thinking (2 of 3)
• Expert Knowledge
– Finite State Machines, decision trees, … (FSM most
popular, details next)
– Appealing since simple, natural, embodies common sense
and knowledge of domain
• Ex: See enemy weaker than you? Attack. See enemy
stronger? Go get help
– Trouble is, often does not scale
• Complex situations have many factors
• Add more rules, becomes brittle
– Still, often quite adequate for many AI tasks
• Many agents have quite narrow domain, so doesn’t matter
30. Game Agents – Thinking (3 of 3)
• Search
– Look ahead and see what move to do next
•Ex: piece on game board (MinMax), pathfinding
(A*)
– Works well with known information (ie- can
see obstacles, pieces on board)
• Machine learning
– Evaluate past actions, use for future action
– Techniques show promise, but typically too
slow
31. Game Agents – Acting (1 of 2)
• Learning and Remembering
– May not be important in many games where
agent short-lived (ie- enemy drone)
– But if alive for 30+ seconds, can be helpful
•ie- player attacks from right, so shield right
– Implementation - to avoid storing too much
information, can have it fade from memory
(by time or by a queue that becomes full)
32. Game Agents – Acting (2 of 2)
• Making agents stupid
– Many cases, easy to make agents dominate
• Ex: FPS bot always makes head-shot
– Dumb down by giving “human” conditions, longer
reaction times, make unnecessarily vulnerable, have
them make mistakes
• Agent cheating
– Ideally, don’t have unfair advantage (such as more
attributes or more knowledge)
– But sometimes might “cheat” to make a challenge
• Remember, that’s the goal: AI loses in a challenging way
– Best to let player know
33. AI for Games – Mini Outline
• Introduction (done)
• MinMax (done)
• Agents (done)
• Finite State Machines (next)
• Common AI Techniques
• Promising AI Techniques
34. Finite State Machines
• Many different rules for agents
– Ex: sensing, thinking and acting when fighting, running,
exploring…
– Can be difficult to keep rules consistent!
• Try Finite State Machine
– Probably most common game AI software pattern
– Natural correspondence between states and behaviors
– Easy: to diagram, program, debug
– General to any problem
– See AI Depot - FSM
• For each situation, choose appropriate state
– Number of rules for each state is small
35. Finite State Machines
• Abstract model of computation
• Formally:
– Set of states
– A starting state
– An input vocabulary
– A transition function that maps inputs and the
current state to a next state
[FSM diagram: states Wander, Attack, Flee; transitions: See Enemy
(Wander → Attack), Low Health (Attack → Flee), No Enemy (back to
Wander)]
(Do detailed example next slide)
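The formal definition maps directly onto a table-driven sketch: a set of states, a start state, an input vocabulary, and a transition function. Event and state names below are assumptions based on the diagram:

```python
# Transition function as a table: (state, input) -> next state.
TRANSITIONS = {
    ("Wander", "see_enemy"):  "Attack",
    ("Attack", "low_health"): "Flee",
    ("Attack", "no_enemy"):   "Wander",
    ("Flee",   "no_enemy"):   "Wander",
}

class FSM:
    def __init__(self, start):
        self.state = start
    def handle(self, event):
        # Stay in the current state if no transition is defined.
        self.state = TRANSITIONS.get((self.state, event), self.state)

fsm = FSM("Wander")
fsm.handle("see_enemy");  print(fsm.state)   # Attack
fsm.handle("low_health"); print(fsm.state)   # Flee
fsm.handle("no_enemy");   print(fsm.state)   # Wander
```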
36. Finite State Machines – Example (1 of 2)
• Game where raid Egyptian Tomb
• Mummies! Behavior
– Spend all of eternity
wandering in tomb
– When player is close, search
– When see player, chase
• Make separate states
– Define behavior in each state
• Wander – move slowly,
randomly
• Search – move faster, in
lines
• Chasing – direct to player
• Define transitions
– Close is 100 meters
(smell/sense)
– Visible is line of sight
[FSM diagram: states Wandering, Searching, Chasing; Closeby and
Visible transition toward Chasing, Faraway and Hidden transition back
toward Wandering]
37. Finite State Machines – Example (2 of 2)
• Can be extended easily
• Ex: Add magical scarab
(amulet)
• When player gets scarab,
Mummy is afraid. Runs.
• Behavior
– Move away from
player fast
• Transition
– When player gets
scarab
– When timer expires
• Can have sub-states
– Same transitions, but
different actions
• ie- range attack
versus melee attack
[FSM diagram: as before, plus an Afraid state; Scarab transitions from
Wandering, Searching, and Chasing into Afraid; Timer Expires
transitions out of Afraid]
40. Finite-State Machine:
Problems with switch FSM
1. Code is ad hoc
– Language doesn’t enforce structure
2. Transitions result from polling (checking
each time)
– Inefficient – event-driven sometimes
better
•ie- when damaged, call “pain” event for
monster and it may change states
3. Can’t determine 1st time state is entered
4. Can’t be edited or specified by game
designers or players
41. Finite State Machine
Alternative Implementation
• Make objects
• Transitions are events (passed by objects
creating events)
– Ex: player runs. All objects within hearing
range get “run sound” event
• Each object can have step event
– Gets mapped to right action in state by callback
43. Finite-State Machine:
Scripting Advantages
1. Structure enforced
2. Events can be handled as well as polling
3. OnEnter and OnExit concept exists
(If objects, when created or destroyed)
4. Can be authored by game designers
– Easier learning curve than straight C/C++
44. Finite-State Machine:
Scripting Disadvantages
• Not trivial to implement
• Several months of development of language
– Custom compiler
• With good compile-time error feedback
– Bytecode interpreter
• With good debugging hooks and support
• Scripting languages often disliked by users
– Can never approach polish and robustness of
commercial compilers/debuggers
45. Finite-State Machine:
Hybrid Approach
• Use a class and C-style macros to approximate a scripting
language
• Allows FSM to be written completely in C++ leveraging
existing compiler/debugger
• Capture important features/extensions
– OnEnter, OnExit
– Timers
– Handle events
– Consistent regulated structure
– Ability to log history
– Modular, flexible, stack-based
– Multiple FSMs, Concurrent FSMs
• Can’t be edited by designers or players
46. Finite-State Machine:
Extensions
• Many possible extensions to basic FSM
– Event driven: OnEnter, OnExit
– Timers: transition after certain time
– Global state with sub-states (same transitions,
different actions)
– Stack-Based (states or entire FSMs)
• Easy to revert to previous states
• Good for resuming earlier action
– Multiple concurrent FSMs
• Lower layers for, say, obstacle avoidance – high
priority
• Higher layers for, say, strategy
47. AI for Games – Mini Outline
• Introduction (done)
• MinMax (done)
• Agents (done)
• Finite State Machines (done)
• Common AI Techniques (next)
• Promising AI Techniques
48. Common Game AI Techniques (1 of 4)
• Whirlwind tour of common techniques
– For each, provide idea and example (where appropriate)
– Subset and grouped based on text
• Movement
– Flocking
• Move groups of creatures in natural manner
• Each creature follows three simple rules
– Separation – steer to avoid crowding flock mates
– Alignment – steer to average flock heading
– Cohesion – steer to average position
• Example – use for background creatures such as birds or
fish. Modification can use for swarming enemy
– Formations
• Like flocking, but units keep position relative to others
• Example – military formation (archers in the back)
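The three flocking rules can be sketched for point creatures in 2D; the neighbor radius and rule weights below are illustrative tuning parameters, not canonical values:

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def dist(a, b): return math.hypot(a[0] - b[0], a[1] - b[1])

def flock_step(boids, radius=5.0, w_sep=1.5, w_ali=1.0, w_coh=0.05):
    """One update of the three boid rules. Each boid is a dict with
    'pos' and 'vel' as (x, y) tuples; returns the new velocities."""
    new_vels = []
    for b in boids:
        nbrs = [o for o in boids
                if o is not b and dist(b["pos"], o["pos"]) < radius]
        if not nbrs:
            new_vels.append(b["vel"])
            continue
        n = len(nbrs)
        # Cohesion: steer toward the average position of flockmates
        cx = sum(o["pos"][0] for o in nbrs) / n
        cy = sum(o["pos"][1] for o in nbrs) / n
        coh = sub((cx, cy), b["pos"])
        # Alignment: steer toward the average flock heading
        ax = sum(o["vel"][0] for o in nbrs) / n
        ay = sum(o["vel"][1] for o in nbrs) / n
        ali = sub((ax, ay), b["vel"])
        # Separation: steer away from crowding flockmates
        sep = (sum(b["pos"][0] - o["pos"][0] for o in nbrs),
               sum(b["pos"][1] - o["pos"][1] for o in nbrs))
        v = b["vel"]
        v = add(v, scale(sep, w_sep))
        v = add(v, scale(ali, w_ali))
        v = add(v, scale(coh, w_coh))
        new_vels.append(v)
    return new_vels
```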
49. Common Game AI Techniques (2 of 4)
• Movement (continued)
– A* pathfinding
• Cheapest path through environment
• Directed search – exploits knowledge about the
destination to intelligently guide the search
• Fastest, widely used
• Can provide information (ie- virtual breadcrumbs) so
the path can be followed without recomputing
• See: http://www.antimodal.com/astar/
– Obstacle avoidance
• A* good for static terrain, but not for dynamic
obstacles such as other players, choke points, etc.
• Example – same path for 4 units, but can predict
collisions so units furthest back slow down, avoid narrow
bridges, etc.
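A minimal A* sketch on a 4-connected grid of open (0) and blocked (1) cells, using Manhattan distance as the heuristic; this is illustrative, not engine code:

```python
import heapq

def astar(grid, start, goal):
    """Cheapest path from start to goal as a list of (row, col)
    cells (the 'breadcrumbs'), or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # heuristic
    frontier = [(h(start), 0, start)]          # (f = g + h, g, node)
    came_from, best_g = {start: None}, {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:                        # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None
```

The priority queue always expands the cell with the lowest estimated total cost, which is what makes the search "directed" rather than blind.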
50. Common Game AI Techniques (3 of 4)
• Behavior organization
– Emergent behavior
• Simple rules that result in complex interactions
• Example: game of life, flocking
– Command hierarchy
• Deal with AI decisions at different levels
• Modeled after military hierarchy (ie- General does strategy
to Foot Soldier does fighting)
• Example: Real-time or turn based strategy games -- overall
strategy, squad tactics, individual fighters
– Manager task assignment
• When units act individually, they can perform poorly
• Instead, have a manager create tasks, prioritize them, and
assign them to units
• Example: baseball – 1st priority is to field the ball, 2nd to
cover first base, 3rd to back up the fielder, 4th to cover
second base. If all players try the 1st task, disaster. A
manager determines the best person for each. If the ball is
hit between 1st and 2nd, the first baseman fields the ball,
the pitcher covers first base, and the second baseman covers
second base
51. Common Game AI Techniques (4 of 4)
• Influence map
– 2d representation of power in game
– Break into cells, where units in each cell are summed up
– Units have influence on neighbor cells (typically, decrease
with range)
– Insight into location and influence of forces
– Example – can be used to plan attacks, to see where the
enemy is weak, or to fortify defenses. SimCity used it to
show fire coverage, etc.
• Level of Detail AI
– In graphics, polygonal detail is reduced if object far away
– Same idea in AI – less computation if it won’t be seen
– Example – vary update frequency of NPC based on
distance from player
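The influence map above can be sketched as a simple grid sum: each unit contributes its strength to its own cell and a falloff to nearby cells (positive for one side, negative for the enemy). The falloff model and ranges here are illustrative:

```python
def influence_map(rows, cols, units, falloff=0.5, max_range=2):
    """units: list of (row, col, strength); enemy strength is negative.
    Returns a rows x cols grid of summed influence values."""
    grid = [[0.0] * cols for _ in range(rows)]
    for ur, uc, s in units:
        for r in range(rows):
            for c in range(cols):
                d = abs(r - ur) + abs(c - uc)         # Manhattan range
                if d <= max_range:
                    grid[r][c] += s * (falloff ** d)  # decreases with range
    return grid
```

Cells near zero mark the contested frontier; strongly negative cells show where the enemy is strong, so an attack can be planned where values are only weakly negative.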
52. AI for Games – Mini Outline
• Introduction (done)
• MinMax (done)
• Agents (done)
• Finite State Machines (done)
• Common AI Techniques (done)
• Promising AI Techniques (next)
– Used in AI, but not (yet) in games
– Subset of what is in book
53. Promising AI Techniques (1 of 3)
• Bayesian network
– A probabilistic graphical model with variables and
probable influences
– Example - calculate probability of patient having a
specific disease given symptoms
– Example – AI can infer if player has warplanes, etc.
based on what it sees in production so far
– Can be good to give “human-like” intelligence without
cheating or being too dumb
• Decision tree learning
– Series of inputs (usually game state) mapped to output
(usually thing want to predict)
– Example – health and ammo predict bot survival
– Modify probabilities based on past behavior
– Example – Black and White could stroke or slap creature.
Learned what was good and bad.
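The health/ammo example can be sketched as a tiny hand-built tree mapping game-state inputs to the predicted output; the thresholds are made up for illustration (a learned tree would induce them from logged outcomes):

```python
def predict_survival(health, ammo):
    """Toy decision tree: game state in, survival prediction out."""
    if health < 30:
        return "dies"                              # low health dominates
    elif ammo < 5:
        return "dies" if health < 60 else "survives"
    else:
        return "survives"
```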
54. Promising AI Techniques (2 of 3)
• Filtered randomness
– Want randomness to provide unpredictability to AI
– But even true randomness can look odd (ie- after 4 heads
in a row, players think something is wrong. And in 100
coin flips there will likely be a streak of 8)
• Example – spawning at the same point 5 times in a row
looks bad
– Compare random result to past history and avoid
• Fuzzy logic
– In a traditional set, an object either belongs or not
– In fuzzy logic, membership is a matter of degree (ie-
hungry vs. not hungry, or “in-kitchen” vs. “in-hall” –
but what if on the edge?)
– Cannot be resolved by a coin-flip
– Can be used in games – ie- assess relative threat
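The relative-threat idea can be sketched as a fuzzy membership function: instead of a hard threatening/not-threatening split, distance maps to a degree of threat in [0, 1]. The breakpoints below are illustrative:

```python
def threat_level(distance):
    """1.0 = fully threatening, 0.0 = no threat,
    linear ramp between the two breakpoints."""
    near, far = 10.0, 50.0            # illustrative tuning values
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)
```

An enemy at the "edge" simply gets a partial membership (say 0.5), which downstream logic can weigh against other threats rather than flipping a coin.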
55. Promising AI Techniques (3 of 3)
• Genetic algorithms
– Search and optimize based on evolutionary principles
– Good when “right” answer not well-understood
– Example – may not know best combination of AI settings.
Use GA to try out
– Often expensive, so do offline
• N-Gram statistical prediction
– Predict next value in sequence (ie- 1818180181 … next will
probably be 8)
– Search backward n values (usually 2 or 3)
– Example
• Street fighting (punch, kick, low punch…)
• Player does low kick and then low punch. What is next?
• Uppercut (10 times, 50%), low punch (7 times, 35%),
sideswipe (3 times, 15%)
• Can predict uppercut, or pick the next move
proportionally (ie- roll dice)
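The N-gram idea can be sketched by counting which value follows each length-2 context in the observed sequence, then predicting the most frequent follower (the "predict" branch; the proportional dice-roll variant would sample from the same counts):

```python
from collections import defaultdict

def build_bigram_model(history, n=2):
    """Count followers of each length-n context in the sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(history) - n):
        context = tuple(history[i:i + n])
        counts[context][history[i + n]] += 1
    return counts

def predict(counts, context):
    """Most frequent value seen after this context, or None if unseen."""
    following = counts.get(tuple(context))
    if not following:
        return None
    return max(following, key=following.get)
```

For the sequence 1818180181 on the slide, the last two values are (8, 1), and 8 is what has always followed that pair, so the model predicts 8.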
56. Summary
• AI for games different than other fields
– Intelligent opponents, allies and neutrals,
but fun (lose in challenging way)
– Still, can draw upon broader AI techniques
• Agents – sense, think, act
– Advanced agents might learn
• Finite state machines allow complex
expertise to be expressed, yet easy to
understand and debug
• Dozens of other techniques to choose from