Artificial Intelligence
            Overview



Bruno Duarte Corrêa – Poli/USP
Thiago Dias Pastor – Poli/USP
Purpose of AI
Maximize fun
Emulate reality
    Not necessarily a simulation
    Scoped by the gameplay
    Challenging
    ...
Desired Characteristics
Be smart
  Purposely flawed, but never looking dumb
No unintended weaknesses
Real-time performance
Configurable
Transparent
Basic needs of AI
Navigation
Perceptions
  Understand / Capture a snapshot of
  the world
Acting
Planning
Learning / adapt
Navigation
Pathfinding
  Waypoints
  NavMesh
  A* / Dijkstra
Steering behaviors
  Field-based navigation
Waypoint
• World discretization
  • Manual waypoints
  • Automatic waypoints
• Static environment
• Connected by raytrace
• Different types
NavMesh
•   Mesh-processing algorithm
•   Pre-processing stage
•   Optimized waypoint placement
      • Slope-based
      • Easier to handle
•   State of the art
NavMesh - Recast
•   Open-source API for NavMesh generation (C++)
•   Used in many commercial games
•   Fully integrated with Detour (navigation system)
•   Recast is a state-of-the-art navigation-mesh construction toolset for games.
•   It is automatic: you can throw any level geometry at it and you will get a robust mesh out.
•   It is fast, which means swift turnaround times for level designers.
•   Extensible and easy to integrate.


                         Source
        http://code.google.com/p/recastnavigation/
                          Information
         http://www.critterai.org/book/export/html/2
PathFinding - A*
• Finds paths in the waypoint graph
• Robot-like paths
  • Smoothing filters
  • Bézier curves
• Highly parallelizable
• Can be implemented on the GPU
PathFinding - A*
F = G + H
where
G = the movement cost to move from the starting point A to a given square on the grid, following the path generated to get there.
H = the estimated movement cost to move from that given square on the grid to the final destination, point B.
PathFinding
          Summary of the A* Method
1. Add the starting square (or node) to the open list.
2. Repeat the following:
   1. Look for the lowest-F-cost square on the open list. We refer to this as the current square.
   2. Switch it to the closed list.
   3. For each of the 8 squares adjacent to this current square …
   4. If it is not walkable or if it is on the closed list, ignore it. Otherwise do the following: if it isn’t on the open list, add it to the open list. Make the current square the parent of this square. Record the F, G, and H costs of the square.
   5. If it is on the open list already, check to see if this path to that square is better, using G cost as the measure. If so, change the parent of the square to the current square, and recalculate the G and F scores of the square. If you are keeping your open list sorted by F score, you may need to re-sort the list.
3. Stop when you:
   1. Add the target square to the closed list, in which case the path has been found, or
   2. Fail to find the target square, and the open list is empty. In this case, there is no path.
4. Save the path. Working backwards from the target square, go from each square to its parent square until you reach the starting square. That is your path.
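The steps above translate almost directly into code. A minimal sketch in Python — the grid representation, the 10/14 move costs, and the heap-based open list are illustrative choices, not part of the method itself:

```python
# Minimal grid A*: '#' cells are not walkable; straight moves cost 10,
# diagonals 14; H is a Manhattan-style heuristic.
import heapq

def astar(grid, start, goal):
    """Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):
        return 10 * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))
    open_heap = [(h(start), 0, start)]             # entries are (F, G, cell)
    parent, g_cost, closed = {start: None}, {start: 0}, set()
    while open_heap:
        f, g, cur = heapq.heappop(open_heap)       # step 2.1: lowest F
        if cur in closed:
            continue                               # stale heap entry
        closed.add(cur)                            # step 2.2: close it
        if cur == goal:                            # step 3.1: path found
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for dr in (-1, 0, 1):                      # step 2.3: 8 neighbours
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nb = (cur[0] + dr, cur[1] + dc)
                if not (0 <= nb[0] < rows and 0 <= nb[1] < cols):
                    continue
                if grid[nb[0]][nb[1]] == '#' or nb in closed:
                    continue                       # step 2.4: skip these
                ng = g + (14 if dr and dc else 10)
                if ng < g_cost.get(nb, float('inf')):  # step 2.5: better G?
                    g_cost[nb], parent[nb] = ng, cur
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None                                    # step 3.2: no path

grid = ["....",
        ".##.",
        "...."]
path = astar(grid, (0, 0), (2, 3))
```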
PathFinding
Steering Behaviors
• Flow driven
• Attraction and repulsion
• No need for environment
  discretization
• Dynamic calculation
• More human-like (natural)
  movement style
• Characters can get stuck !!!
• Swarm behavior !!!
Basic Behaviors
Seek and Flee
                Pursuit and Evasion
Basic Behaviors
Collision avoidance          Path Pursuit




Follow the Leader     Advanced Collisions Avoidance
Steering Behaviors
• Combine all results into a resultant force
• Use physics integration to get:
  • Force
  • Acceleration
  • Velocity
  • Position
• Can be integrated with common physics APIs
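The force → acceleration → velocity → position pipeline above can be sketched with a single seek behavior and an explicit Euler step; the `Agent` class and its parameters are illustrative, not from any particular engine:

```python
# A steering agent: seek produces a clamped steering force, and
# integrate() runs one Euler step of the physics pipeline.
import math

class Agent:
    def __init__(self, x, y, max_speed=5.0, max_force=10.0, mass=1.0):
        self.pos = [x, y]
        self.vel = [0.0, 0.0]
        self.max_speed, self.max_force, self.mass = max_speed, max_force, mass

    def seek(self, target):
        """Steering force = desired velocity minus current velocity."""
        dx, dy = target[0] - self.pos[0], target[1] - self.pos[1]
        dist = math.hypot(dx, dy) or 1e-9
        desired = (dx / dist * self.max_speed, dy / dist * self.max_speed)
        steer = [desired[0] - self.vel[0], desired[1] - self.vel[1]]
        mag = math.hypot(*steer)
        if mag > self.max_force:               # clamp the resultant force
            steer = [s / mag * self.max_force for s in steer]
        return steer

    def integrate(self, force, dt):
        """Force -> acceleration -> velocity -> position (Euler step)."""
        self.vel[0] += force[0] / self.mass * dt
        self.vel[1] += force[1] / self.mass * dt
        speed = math.hypot(*self.vel)
        if speed > self.max_speed:             # clamp speed
            self.vel = [v / speed * self.max_speed for v in self.vel]
        self.pos[0] += self.vel[0] * dt
        self.pos[1] += self.vel[1] * dt

agent = Agent(0.0, 0.0)
for _ in range(100):                           # steer towards (10, 0)
    agent.integrate(agent.seek((10.0, 0.0)), dt=0.1)
```

In a full system, several behaviors (seek, flee, avoidance, …) would each return a force and the sum would be fed to `integrate`, or handed to the physics API instead.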
Basic needs of AI
Navigation
Perceptions
  Understand / Capture a snapshot of
  the world
Acting
Planning
Learning / adapt
Perceptions
“See the world as a human does”
  Emulate human senses
  Response time
Sensors
  Sight
    Field of view / Raycast
  Sound
  Smell ?!?!
Receive messages
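The sight sensor above is commonly a two-stage test: a cheap field-of-view cone check, then a raycast for occlusion. A sketch of the cone check only (the raycast stage is omitted, and the function name is illustrative):

```python
# Target is "seen" when the angle between the NPC's facing direction
# and the direction to the target is within half the FOV cone.
import math

def in_field_of_view(npc_pos, facing_deg, target_pos, fov_deg=120.0):
    dx, dy = target_pos[0] - npc_pos[0], target_pos[1] - npc_pos[1]
    angle_to_target = math.degrees(math.atan2(dy, dx))
    # Wrap the angular difference into [-180, 180) before comparing.
    diff = (angle_to_target - facing_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# NPC at the origin facing +x with a 120-degree cone:
in_field_of_view((0, 0), 0.0, (5, 1))    # slightly off-axis -> True
in_field_of_view((0, 0), 0.0, (-5, 0))   # directly behind -> False
```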
Basic needs of AI
Navigation
Perceptions
  Understand / Capture a snapshot of
  the world
Acting
Planning
Learning / adapt
Finite state machine
             algorithm
       • Simple theoretical construct
          – Set of states (S)
          – Input vocabulary (I)
          – Transition function T(s, i)
       • A way of denoting how an object can change its state over time.


State: the set of properties of the environment at a given time
Finite state machine
            • States
              – E: enemy in sight
              – S: hear a sound
              – D: dead
            • Events
              – E: see an enemy
              – S: hear a sound
              – D: die
            • Action performed
              – On each transition
              – On each update in
                some states (e.g.
                attack)
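The states and events on this slide fit the T(s, i) formulation directly; a minimal sketch where the transition function is just a dictionary keyed by (state, event) — the state names are illustrative:

```python
# T(s, i) as a lookup table: (state, event) -> next state.
TRANSITIONS = {
    ("patrol", "see_enemy"):  "attack",   # E: see an enemy
    ("patrol", "hear_sound"): "search",   # S: hear a sound
    ("search", "see_enemy"):  "attack",
    ("patrol", "die"):        "dead",     # D: die
    ("search", "die"):        "dead",
    ("attack", "die"):        "dead",
}

def step(state, event):
    """Apply T(s, i); stay in the same state on an unknown input."""
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["hear_sound", "see_enemy", "die"]:
    state = step(state, event)            # patrol -> search -> attack -> dead
```

Actions performed on each transition (or on each update in some states, e.g. attack) would hang off the same table as callbacks.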
Non-deterministic
finite state machine
FSM - Limitations

Hard to expand
Hard to maintain
Not flexible enough
The larger it gets, the worse off you are.
Behavior tree
Replaces FSMs
Encapsulates actions / logic / state
Directed acyclic graph that is traversed
  Non-leaves = conditions
  Leaves = actions
Divide-and-conquer approach
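A minimal sketch of such a tree: interior nodes here are the usual composites (Selector tries children until one succeeds, Sequence runs them until one fails — a common generalization of the condition/action split above), and the leaves are actions over a shared blackboard. All node names are illustrative:

```python
# Tick returns True (success) or False (failure); composites combine
# their children's results.
class Action:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, bb):
        return self.fn(bb)

class Sequence:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        # all() short-circuits on the first failing child.
        return all(child.tick(bb) for child in self.children)

class Selector:
    def __init__(self, *children): self.children = children
    def tick(self, bb):
        # any() short-circuits on the first succeeding child.
        return any(child.tick(bb) for child in self.children)

# Leaf actions read and write a shared blackboard dict:
see_enemy = Action("see_enemy", lambda bb: bb.get("enemy_visible", False))
attack    = Action("attack",    lambda bb: bb.update(did="attack") or True)
patrol    = Action("patrol",    lambda bb: bb.update(did="patrol") or True)

# Root: attack if an enemy is visible, otherwise patrol.
root = Selector(Sequence(see_enemy, attack), patrol)

bb = {"enemy_visible": True}
root.tick(bb)                     # runs the attack branch
```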
Crysis Soldier




• Real Time Extensible
• SIMPLE
• Scriptable
Generic Behavior Tree
Example
(Behavior-tree traversal walkthrough, shown as a sequence of figures)

END OF FRAME
Basic needs of AI
Navigation
Perceptions
  Understand / Capture a snapshot of
  the world
Acting
Planning
Learning / adapt
Planning
     Planning is a formalized process of
   searching for a sequence of actions to
                satisfy a goal.




Goal-Oriented Action Planning (GOAP)
HTN planning
STRIPS
GOAP
• Composed of:
   • Collection of states
   • Preconditions
   • Postconditions
   • Starting state
   • Goal state
• Answers the “what”
  question
• Self-adaptable
GOAP-Example
GOAP-A*
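The GOAP components listed above (states, preconditions, postconditions, start, goal) are enough for a tiny planner. A hedged sketch that searches breadth-first over sets of world facts — A* over action costs, as the slide title suggests, is the usual upgrade — with an entirely illustrative domain:

```python
# Each action maps preconditions -> postconditions over a set of facts;
# the planner searches for an action sequence reaching the goal facts.
from collections import deque

ACTIONS = {
    # name: (preconditions, postconditions)
    "get_axe":   (frozenset(),             frozenset({"has_axe"})),
    "chop_wood": (frozenset({"has_axe"}),  frozenset({"has_wood"})),
    "make_fire": (frozenset({"has_wood"}), frozenset({"has_fire"})),
}

def plan(start, goal):
    """Returns a list of action names from start facts to goal facts."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:                      # all goal facts satisfied
            return actions
        for name, (pre, post) in ACTIONS.items():
            if pre <= state:                   # preconditions hold
                nxt = state | post             # apply postconditions
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None                                # goal unreachable

plan(set(), {"has_fire"})   # ['get_axe', 'chop_wood', 'make_fire']
```

Because the planner searches rather than follows hand-wired transitions, changing an action's conditions automatically changes the plans — the "self-adaptable" property above.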
Basic needs of AI
Navigation
Perceptions
  Understand / Capture a snapshot of
  the world
Acting
Planning
Learning / adapt
Learning
Supervised Learning
• Learning an input-output relationship from examples
• Tasks: regression, classification, ranking
• Applications: skill estimation, behavioural cloning

Reinforcement Learning
• Learning policies from state-action-reward sequences
• Tasks: control, value estimation, policy learning
• Applications: learning to drive, learning to walk, learning to fight

Unsupervised Learning
• Learning the underlying structure from examples
• Tasks: clustering, manifold learning, density estimation
• Applications: modelling motion capture data, user behaviour
Neural Network
• An information-processing paradigm
• Inspired by the way biological nervous systems
  process information
• Composed of a large number of highly
  interconnected processing elements
• ANNs, like people, learn by example.
Neural Network-Neuron
Neural Network
Neural Network Learning
Actual algorithm for a 3-layer network (only one hidden layer):

  Initialize the weights in the network (often randomly)
  Do
    For each example e in the training set
      O = neural-net-output(network, e)      ; forward pass
      T = teacher output for e
      Calculate error (T - O) at the output units
      Compute delta_wh for all weights from hidden layer to output layer  ; backward pass
      Compute delta_wi for all weights from input layer to hidden layer   ; backward pass continued
      Update the weights in the network
  Until all examples classified correctly or stopping criterion satisfied
  Return the network
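The pseudocode above can be made runnable; a pure-Python sketch for a tiny 2-2-1 network trained on XOR, where the architecture, learning rate, and epoch count are illustrative choices:

```python
# One hidden layer of 2 sigmoid units; the last weight in each row is
# the bias. Trained with online gradient descent, as in the pseudocode.
import math, random

random.seed(1)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))

w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden
w_o = [random.uniform(-1, 1) for _ in range(3)]                      # output

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def train(examples, lr=0.5, epochs=5000):
    for _ in range(epochs):
        for x, t in examples:                 # for each example e
            h, o = forward(x)                 # forward pass
            d_o = (t - o) * o * (1 - o)       # error * sigmoid'
            for i in range(2):                # backward pass
                d_h = d_o * w_o[i] * h[i] * (1 - h[i])
                w_o[i] += lr * d_o * h[i]     # hidden -> output weights
                for j in range(2):
                    w_h[i][j] += lr * d_h * x[j]  # input -> hidden weights
                w_h[i][2] += lr * d_h         # hidden bias
            w_o[2] += lr * d_o                # output bias

def total_error(examples):
    return sum((t - forward(x)[1]) ** 2 for x, t in examples)

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
err_before = total_error(xor)
train(xor)
err_after = total_error(xor)
```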
Genetic Algorithm
• Genetic Algorithms are a way of solving problems
  by mimicking the same processes mother nature
  uses.
• Use the same combination of selection,
  recombination and mutation to evolve a solution to
  a problem
• Useful for optimizing uncertain domains
Genetic Algorithm
Genetic Algorithm - Problems
  •   Takes too much time to converge
  •   Hard to define the genes
  •   Hard to define the objective function
  •   Hard to deliver in games
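Despite these caveats, the selection / recombination / mutation loop itself is short. A minimal sketch on a toy problem — bit-string genes with "count the 1s" as the objective function; population size, rates, and operators are all illustrative:

```python
# OneMax GA: tournament selection, one-point crossover, bit-flip mutation.
import random

random.seed(42)
GENES, POP, GENERATIONS = 20, 30, 60

def fitness(ind):                     # objective function: number of 1s
    return sum(ind)

def tournament(pop):                  # selection: best of 3 random picks
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):                  # recombination: one-point
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):           # mutation: flip bits with low prob.
    return [g ^ 1 if random.random() < rate else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]
best = max(pop, key=fitness)
```

The difficulties above show up exactly here: the whole exercise is choosing the gene encoding and the fitness function for your actual game problem.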
Case Study – L4D

Left 4 Dead is a replayable, cooperative, survival-horror game where four Survivors cooperate to escape environments swarming with murderously enraged “Infected” (i.e., zombies).
Case Study – L4D
  • Deliver Robust Behavior
    Performances

  •   Provide Competent Human
      Player Proxies

  •   Promote Replayability

  • Generate Dramatic Game
    Pacing
Deliver Robust Behavior
 Performances
• Reactive path following
  Move towards a “look ahead” point farther
  down the path
• Use local obstacle avoidance
  • Good
         • (Re)pathing is cheap
         • Local avoidance handles
           small physics props, other
           bots, corners, etc.
         • Superposes well with mob
           flocking behavior
         • Resultant motion is fluid
  • Bad
         • Can steer off the path too
           much, requiring a repath
Deliver Robust Behavior
Performances
• Locomotion
   • Owns moving the actor to a new
     position in the environment (collision
     resolution, etc.)
• Body
   • Owns animation state
• Vision
   • Owns line-of-sight, field-of-view, and
     “am I able to see <X>” queries.
   • Maintains a recognized set of entities.
• Intention
   • Behavior Tree based
Provide Competent Human Player
Proxies
  •   Believability / Fairness
      •   Imperfect knowledge (sense
          simulation)
      •   Reaction times
  •   Trust
      •   Survivor bots help humans
      •   Undesirable events
          CANNOT occur (cheating)
         •    Bots that are far from the
              battle are magically
              teleported
         •    Survivor bots cannot deal
              friendly-fire damage
         •    ….
         •    Human behavior
Promote Replayability
 • Complex procedural
   population
 • Game session is viewed as a
   skill challenge instead of a
   memorization exercise
 • Structured unpredictability
Structured unpredictability
(Figure: NavMesh annotated with the Active Area Set, the Potential Visible Area, and Flow Distance)
Generate Dramatic
        Game Pacing
Adaptive Dramatic Pacing algorithm
   • Creates peaks and valleys of intensity similar to the
     proven pacing success of Counter-Strike
   • Algorithm:
      • Estimate the “emotional intensity” of each Survivor
      • Track the max intensity of all 4 Survivors
      • If intensity is too high, remove major threats for a while
      • Otherwise, create an interesting population of threats
AI DIRECTOR
Use Survivor Intensity to modulate the Infected population
      • Build Up
            • Create a full threat population until Survivor Intensity crosses the peak threshold
      • Sustain Peak
            • Continue the full threat population for 3-5 seconds after Survivor Intensity has peaked. Ensures a minimum “build up” duration.
      • Peak Fade
            • Switch to a minimal threat population (the “Relax period”) and monitor Survivor Intensity until it decays out of peak range. This state is needed so the current combat engagement can play out without using up the entire Relax period. Peak Fade won’t allow the Relax period to start until a natural break in the action occurs.
      • Relax
            • Maintain the minimal threat population for 30-45 seconds, or until the Survivors have traveled far enough toward the next safe room, then resume Build Up.
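The four states above can be sketched as a small state machine. All thresholds and timings here are assumptions for illustration; in the real system the intensity signal is derived from game events per Survivor:

```python
# Pacing loop: BUILD_UP -> SUSTAIN_PEAK -> PEAK_FADE -> RELAX -> repeat.
PEAK = 100.0        # assumed intensity threshold that ends Build Up

class Director:
    def __init__(self):
        self.state, self.timer = "BUILD_UP", 0.0

    def update(self, max_intensity, dt):
        """max_intensity: the highest intensity among the 4 Survivors."""
        self.timer += dt
        if self.state == "BUILD_UP":
            if max_intensity >= PEAK:         # intensity crossed the peak
                self.state, self.timer = "SUSTAIN_PEAK", 0.0
        elif self.state == "SUSTAIN_PEAK":
            if self.timer >= 4.0:             # 3-5 s after the peak
                self.state, self.timer = "PEAK_FADE", 0.0
        elif self.state == "PEAK_FADE":
            if max_intensity < PEAK * 0.5:    # decayed out of peak range
                self.state, self.timer = "RELAX", 0.0
        elif self.state == "RELAX":
            if self.timer >= 40.0:            # 30-45 s, then build up again
                self.state, self.timer = "BUILD_UP", 0.0
        # Full threat population only in the first two states.
        return self.state in ("BUILD_UP", "SUSTAIN_PEAK")

d = Director()
d.update(120.0, dt=1.0)   # intensity spike ends Build Up
```

The "traveled far enough toward the next safe room" exit from Relax is omitted; it would simply be a second condition on that state.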
Implementation considerations
Multithreading
  Functional
  Modular
Data driven
Cache friendly
Game interface
Level of Detail
Trends
Emotion !!!
Realistic crowd simulation
AI in contact with the player itself
Improved AI testing
Character control and animation helpers
Online AIs
Interactive cut scenes
Artificial intelligence


     “Far too often, AI has been a last-minute rush job,
implemented in the final two or three months of development
by overcaffeinated programmers with dark circles under their
eyes and thousands of other high-priority tasks to complete”
                 - Paul Tozour, Ion Storm

Artificial Intelligence for Games: an Overview – SBGAMES 2012