Algorithms for Computer Games
Jouni Smed
Department of Information Technology
University of Turku
http://www.iki.fi/smed
Course syllabus
- credits: 5 cp (3 cu)
- prerequisites
  - fundamentals of algorithms and data structures (e.g., Cormen et al., Introduction to Algorithms)
  - knowledge in programming (e.g., with Java)
- assessment
  - examination only (no exercises)

Lectures
- lecture times
  - Tuesdays 10–12 a.m.
  - Thursdays 10–12 a.m.
- September 8 – October 8, 2009
- lecture room B2033, ICT Building

Examinations 1(2)
- examination dates (to be confirmed)
  1. ?? (possibly October 2009)
  2. ?? (possibly November 2009)
  3. ?? (possibly January 2010)
- check the exact times and places at http://www.it.utu.fi/opetus/tentit/
- remember to enrol! https://ssl.utu.fi/nettiopsu/

Examinations 2(2)
- questions
  - based on both lectures and the textbook
  - two questions, à 5 points
  - to pass the examination, at least 5 points (50%) are required
  - grade: g = ⎡p − 5⎤
  - questions are in English, but you can answer in English or in Finnish

Web page
http://www.iki.fi/smed/a4cg
- news and announcements
- slides, code examples, additional material
- discussion forum

Follow-up course: Multiplayer Computer Games
- focus: networking in computer games
- credits: 5 cp (3 cu)
- schedule:
  - October 27 – November 26, 2009
  - Tuesdays 10–12 a.m., and Thursdays 10–12 a.m.
- web page: http://www.iki.fi/smed/mcg

Textbook
- Jouni Smed & Harri Hakonen: Algorithms and Networking for Computer Games, John Wiley & Sons, 2006.
- http://www.wiley.com/go/smed

Computer games

In the beginning...
“If, when walking down the halls of MIT, you should happen to hear strange cries of ‘No! No! Turn! Fire! ARRRGGGHHH!!,’ do not be alarmed. Another western is not being filmed—MIT students and others are merely participating in a new sport, SPACEWAR!”
D. J. Edwards & J. M. Graetz, “PDP-1 Plays at Spacewar”, Decuscope, 1(1):2–4, April 1962

...and then
- 1962: Spacewar
- 1971: Nutting: Computer Space
- 1972: Atari: Pong
- 1978: Midway: Space Invaders
- 1979: Roy Trubshaw: MUD
- 1980: Namco: Pac-Man
- 1981: Nintendo: Donkey Kong
- 1983: Commodore 64
- 1985: Alexei Pajitnov: Tetris
- 1989: Nintendo Game Boy
- 1993: id Software: Doom
- 1994: Sony Playstation
- 1997: Origin: Ultima Online
- 2001: Microsoft Xbox
- 2006: Nintendo Wii

Three academic perspectives to computer games
[diagram: the GAME at the centre of three perspectives — humanistic perspective: game design (rules); technical perspective: game programming (graphics, animation, audio) and software development (design patterns, architectures, testing, reuse, gfx & audio, simulation, networking, AI); administrative/business perspective]
Game development is team work

source: Game Developer, 2005
Intention of this course
- to provide a glance into the world of computer games as seen from the perspective of a computer scientist
Contents
§1 Introduction
§2 Random Numbers
§3 Tournaments
§4 Game Trees
§5 Path Finding
§6 Decision-making
§7 Modelling Uncertainty

§1 Introduction
- definitions: play, game, computer game
- anatomy of computer games
- synthetic players
- multiplayin...

First, a thought game
- what features are common to all games?

Components of a game
- players: willing to participate for enjoyment, diversion or amusement
- rules: define limits of the game
- goals: give a sense of purpose
- opponents: give rise to contest and rivalry
- representation: concretizes the game

Components, relationships and aspects of a game
[diagram: player, rules, goal, opponent and representation linked by relationships (definition, correspondence, obstruction) and aspects (CHALLENGE, PLAY, CONFLICT)]

Definition for ‘computer game’
- a game that is carried out with the help of a computer program
- roles:
  - coordinating the game process
  - illustrating the situation
  - participating as a player
- → Model–View–Controller architectural pattern
Model–View–Controller
[diagram: the MVC decomposition — model: state instance (core structures, instance data) and configuration; controller: control logic (driver), input device, human player, synthetic player, script, action; view: proto-view, synthetic view, rendering, output device, perception, options]
Synthetic players
- synthetic player = computer-generated actor in the game
- displays human-like features
- has a stance towards the human player
- games are anthropocentric!

Humanness
- human traits and characteristics
  - fear and panic (Half-Life, Halo)
- computer game comprising only synthetic players
  - semi-autonomous actors (The Sims)
  - fully autonomous actors (Core War, AOE2)

Stance towards the player
- ally
- neutral
- enemy

Enemy
- provides challenge
- opponent must demonstrate intelligent (or at least purposeful) behaviour
- cheating
  - quick-and-dirty methods
  - when the human player cannot observe enemy’s actions

Ally
- augmenting the user interface
  - hints and guides
- aiding the human player
  - reconnaissance officer
  - teammate, wingman
- should observe the human point of view
  - provide information in an accessible format
  - consistency of actions

Neutral
- commentator
  - highlighting events and providing background information
- camera director
  - choosing camera views, angles and cuts
- referee
  - judging the rule violations
- should observe the context and conventions
Studying synthetic players: AIsHockey
- simplified ice hockey:
  - official IIHF rules
  - realistic measures and weights
  - Newtonian physics engine
- distributed system
  - client/server architecture
- implemented with Java
- source code available (under BSD licence)

Example: MyAI.java

    import fi.utu.cs.hockey.ai.*;

    public class MyAI extends AI implements Constants {
        public void react() {
            if (isPuckWithinReach()) {
                head(headingTo(0.0, THEIR_GOAL_LINE));
                brake(0.5);
                shoot(1.0);
                say(1050L);
            } else {
                head(headingTo(puck()));
                dash(1.0);
            }
        }
    }
Try it yourself!
- challenge: implement a team of autonomous collaborating synthetic players
- the platform and ready-to-u...

Multiplaying
- multiple human players sharing the same game
- methods:
  - divide the screen
  - divide the playtime...

Games and story-telling
- traditional, linear story-telling
  - events remain from time to time (almost) ...

A story is always told to human beings
- story-telling is not about actions but reasons for actions
- humans use a s...

Other game design considerations
- customization
- tutorial
- profiles
- modification
- replaying
- → parameteriz...
§2 Random Numbers
- what is randomness?
- linear congruential method
  - parameter choices
  - testing
- random shuf...

What are random numbers good for (according to D.E.K.)
- simulation
- sampling
- numerical analysis
- computer program...

Random numbers?
- there is no such thing as a ‘random number’
  - is 42 a random number?
- definition: a sequence ...

Methods
- random selection
  - drawing balls out of a ‘well-stirred ur...
- tables of random digits
  - decimals ...

Generating random numbers with arithmetic operations
- von Neumann (ca. 1946): middle square method
  - take the squar...

Truly random numbers?
- each number is completely determined by its predecessor!
- sequence is not random but appears to b...

Middle square (revisited)
- another example:
  - r_i = 6100
  - r_{i+1} = 2100 (r_i² = 37210000)
  - r_{i+2} = 4100 (r_{i+1}² = 04410000)

Words of the wise
- ‘random numbers should not be generated with a method chosen at random’ — D. E. Knuth
- ‘Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.’ — John von Neumann

Even the wise get humbled 1(2)

source: Knuth, 1998

Even the wise get humbled 2(2)

source: Knuth, 1998

Words of the more (or less) wise
- ‘We guarantee that each number is random individually, but we don’t guarantee that m...

Other concerns
- speed of the algorithm
- ease of implementation
- parallelization techniques
- portable implementatio...

Linear congruential method
- D. H. Lehmer (1949)
- choose four integers
  - modulus: m (0 < m)
  - multiplier: a (0 ...

Linear congruential method (cont’d)
- let b = a − 1
- generalization:
  X_{n+k} = (a^k X_n + (a^k − 1) c / b) mod m   (k ≥ 0, n ≥ 0)
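The recurrence X_{n+1} = (a X_n + c) mod m can be sketched in Java as follows. The constants below are the well-known ANSI C example parameters, used here only for illustration; the next slides discuss how m, a, c and X0 should actually be chosen.

```java
// Minimal sketch of the linear congruential method X_{n+1} = (a*X_n + c) mod m.
// The parameters are illustrative, not a recommendation.
public class Lcg {
    private static final long A = 1103515245L; // multiplier a
    private static final long C = 12345L;      // increment c
    private static final long M = 1L << 31;    // modulus m

    private long x; // current state X_n (initially the seed X_0)

    public Lcg(long seed) { x = Math.floorMod(seed, M); }

    /** Next pseudo-random integer in [0, m). */
    public long next() {
        x = Math.floorMod(A * x + C, M);
        return x;
    }

    /** Next pseudo-random double in [0, 1). */
    public double nextUnit() { return next() / (double) M; }
}
```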
Random integers from a given interval
- Monte Carlo methods
  - approximate solution
  - accuracy can be improved at the co...

Choice of modulus m
- sequence of random numbers is finite → period (repeating cycle)
- period has at most m elements → mo...

Choice of multiplier a
- period of maximum length
- a = c = 1: X_{n+1} = (X_n + 1) mod m
  - hardly random: …, 0,...

Choice of increment c
- no common factor with m
  - c = 1
  - c = a
- if c = 0, addition operation can be eliminated...

Choice of starting value X0
- determines from where in the sequence the numbers are taken
- to guarantee randomness, initi...

Tests for randomness 1(2)
- Frequency test
- Serial test
- Gap test
- Poker test
- Coupon collector’s test

Tests for randomness 2(2)
- Permutation test
- Run test
- Collision test
- Birthday spacings test
- Spectral test

Spectral test
- good generators will pass it
- bad generators are likely to fail it
- idea:
  - let the length of ...
Spectral test results 1(3)

source: Knuth, 1998
Spectral test results 2(3)

source: Hellekalek, 1998
Spectral test results 3(3)

source: Hellekalek, 1998
Random shuffling
- generate random permutation, where all permutations have a uniform random distribution
- ...

Random sampling without replacement
- guarantees that the distribution of permutations is uniform
- every element has a p...
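The uniform shuffle the slides describe is commonly implemented as the Fisher–Yates (Knuth) shuffle; a sketch, assuming java.util.Random as the source of randomness:

```java
import java.util.Random;

// In-place uniform shuffle (Fisher-Yates / Knuth shuffle): every permutation
// of the array is equally likely, assuming the underlying generator is uniform.
public class Shuffle {
    public static void shuffle(int[] a, Random rng) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1); // uniform index in 0..i
            int tmp = a[i];             // swap a[i] and a[j]
            a[i] = a[j];
            a[j] = tmp;
        }
    }
}
```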
Riffle shuffle

Perfect shuffle

Premo: Standard order

Premo: After a riffle shuffle and card insertion

Probability of success: 52 cards, m shuffles, n guesses
[table: success probabilities for m = 1, 2, …, 12 and ∞ shuffles; first row begins 997 839 288 088 042 ...]

Cut-off phenomenon: distance d from the perfect shuffle, critical number k0

source: Aldous & Diaconis, 1986
Random numbers in games
- terrain generation
- events
- character creation
- decision-making
- game world compressio...

Game world compression
- used in Elite (1984)
- finite and discrete galaxy
- enumerate the positions
- set th...

Example: Elite

Random game world generation
- discrete game worlds
- example: Nethack, Age of Empires
- rooms, passages, item pla...

Example: Age of Empires 2

Terrain generation: height map

Terrain generation methods
- simple random
- limited random
- particle deposition
- fault line
- circle hill
- midpoint displacement

Simple random terrain

Limited random terrain

Particle deposition terrain

Fault line terrain

Circle hill terrain

Midpoint displacement terrain
§3 Tournaments
- rank adjustment (or challenge) tournament
- elimination tournament (or cup)
- e...

Other uses
- game balancing
- heuristic search
  - selecting suboptimal candidates for a genetic algo...
Example: Hill climbing tournament
[diagram: seven players (Juhani, Tuomas, Aapo, Simeoni, Timo, Lauri, Eero) in a hill-climbing ladder with matches m0–m5]

Example: Elimination tournament
[diagram: the same seven players in an elimination bracket with matches m0–m5]

Example: Scoring tournament
[diagram: the same seven players in a round robin with matches m0–m20]
Terms
- players: p0 … pn−1
- match between pi and pj: match(i, j)
- outcome: WIN, LOSE, TIE
- rank of pi: rank(i)
- ...

Rank adjustment tournaments
- a set of already ranked players
- matches
  - independent from one another
  - outcome...

Ladder and pyramid tournaments
[diagram: a ladder and a pyramid of ranked players — pi: rank(i) = 0; pj: rank(j) = 1; pk: rank(k) = 2; …]
Hill-climbing tournament
- a.k.a.
  - top-of-the-mountain tournament
  - last man standing tournament
- specializa...

Elimination tournaments
- loser of a match is eliminated from the tournament
- no ties! → tiebreak competition
- winn...
Single elimination
[diagram: bracket of eight players (pi, pj, pk, pm, pn, pq, pr, ps) with quarter-finals, semifinals and a final]

Bye
[diagram: bracket of seven players (pi, pj, pk, pm, pn, pq, pr) where one first-round slot is a bye]
Seeding
- some match pairing will not occur in a single elimination tournament
- pairings for the first round (i.e., seedi...

Seeding methods
- random
  - does not favour any player
  - does not ...
- standard and ordered standard
  - ...

Byes and fairness
- the byes have bottom ranks so that they get paired with best players
- the byes appear only in the fir...
Runners-up
- we find only the champion
- how to determine the runners-up (e.g. silver and bronze medallists)?
- ...

Double elimination tournament
- two brackets
  - winners’ bracket
  - losers’ (or consolation) bracket
- initially ...

Scoring tournaments
- round robin: everybody meets everybody else once
- scoring table determines the tournament winner
- ...
Reduction to a graph
- n players
- clique Kn
- players as vertices, matches as edges
- how to organize the rounds?
- ...

Reduction to a graph (cont’d)
- if n is odd, partition the edges of the clique to (n − 1) / 2 disjoint sets
- in eac...

Round robin with seven players

round | matches       | resting
0     | 1–6, 2–5, 3–4 | 0
1     | 2–0, 3–6, 4–5 | 1
2     | 3–1, 4–0, 5–6 | 2
3     | ...
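The table above follows a simple pattern: in round r player r rests, and player (r + k) mod n meets player (r − k) mod n for k = 1, …, (n − 1)/2. A sketch of this standard pairing rule for an odd number of players (class and method names are mine):

```java
// Round-robin scheduling for an odd number of players n, numbered 0..n-1:
// in round r, player r rests and player (r + k) mod n meets player
// (r - k) mod n for k = 1, ..., (n - 1) / 2.
public class RoundRobin {
    public static String[] round(int n, int r) {
        String[] matches = new String[(n - 1) / 2];
        for (int k = 1; k <= (n - 1) / 2; k++) {
            int i = Math.floorMod(r + k, n); // one side of the pairing
            int j = Math.floorMod(r - k, n); // the other side
            matches[k - 1] = i + "-" + j;
        }
        return matches;
    }
}
```

For n = 7 this reproduces the rounds of the table above, e.g. round 0 gives the matches 1–6, 2–5, 3–4 with player 0 resting.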
Real-world tournament examples
- boxing
- sport wrestling
  - double elimination: consolation br...

matches (n = 15)
[table comparing tournament types for 15 players: matches (14, 14, 14, 105), rounds (14, 6, 4, 15), champion’s matches, match in a...]

Practical considerations
- home matches
- venue bookings
- travelling times
- risk management
- other costs

§4 Game Trees
- perfect information games
- two-player, perfect information games
  - Noughts and...

Game tree
- all possible plays of two-player, perfect information games can be represented with a game tree
- nodes: ...

Division Nim with seven matches

Problem statement
Given a node v in a game tree, find a winning strategy for MAX (or MIN) from v, or (equivalently) show tha...

Minimax
- assumption: players are rational and try to win
- given a game tree, we know the outcome in the leaves
- ...

Minimax rules
1. If the node is labelled to MAX, assign it the maximum value of its children.
2. If the node i...
[diagram: minimax tree with alternating MAX and MIN levels; leaf outcomes ±1 are propagated upwards level by level]
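The two labelling rules can be sketched on an explicit tree like the one above. The Node class is illustrative (not from the course material), with outcomes such as +1/−1 stored at the leaves:

```java
import java.util.List;

// Minimax labelling on an explicit game tree: a MAX node takes the maximum
// of its children's values, a MIN node the minimum, a leaf keeps its outcome.
public class Minimax {
    public static class Node {
        final int value;           // outcome, used only at leaves
        final List<Node> children; // empty for leaves
        public Node(int value, List<Node> children) {
            this.value = value;
            this.children = children;
        }
    }

    public static int minimax(Node n, boolean maxTurn) {
        if (n.children.isEmpty()) return n.value; // rule: leaf keeps its value
        int best = maxTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node c : n.children) {
            int v = minimax(c, !maxTurn);
            best = maxTurn ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }
}
```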
Analysis
- simplifying assumptions
- time consumption is proportional to the number of expanded nodes
- ...

Rough estimates on running times when d = 5
- suppose expanding a node takes 1 ms
- branching factor b depends on the game...

Controlling the search depth
- usually the whole game tree is too large
  → limit the search depth
  → a partial game tree
  → par...

Evaluation function
- combination of numerical measurements mi(s, p) of the game state
- single measurement: mi(s, p...

Example: Noughts and Crosses
- heuristic evaluation function e:
  - count the winning lines open to MAX
  - subtract t...

Examples of the evaluation
- e(•) = 6 − 5 = 1
- e(•) = 4 − 5 = −1
- e(•) = +∞

Drawbacks of partial minimax
- horizon effect
  - heuristically promising path can lead to an unfavourable situation
- st...

The deeper the better...?
- assumptions:
  - ...
- minimax convergence theorem:
  - ...
- n increases → root va...
Alpha-beta pruning
- reduce the branching factor of nodes
- alpha value
  - associated with MAX nodes
  - represents...

Example
- in a MAX node, α = 4
- we know that MAX can make a move which will result in at least the value 4
- we can ...

Rules of pruning
1. Prune below any MIN node having a beta value less than or equal to the alpha value of any of it...
2. Prune below any MAX node having an alpha value greater than or equal to the beta value of any of its MIN ancestors.
Example: αβ-pruning 1(3)
Example: αβ-pruning 2(3)
Example: αβ-pruning 3(3)
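The pruning rules can be sketched over the same kind of explicit tree as the minimax example: alpha is the value MAX can already guarantee, beta the value MIN can guarantee, and a branch is cut as soon as alpha ≥ beta. The Node class is illustrative:

```java
import java.util.List;

// Minimax with alpha-beta pruning: stop expanding a node's remaining
// children as soon as alpha >= beta, since a rational opponent will
// never let the play reach them.
public class AlphaBeta {
    public static class Node {
        final int value;           // outcome, used only at leaves
        final List<Node> children; // empty for leaves
        public Node(int value, List<Node> children) {
            this.value = value;
            this.children = children;
        }
    }

    public static int search(Node n, int alpha, int beta, boolean maxTurn) {
        if (n.children.isEmpty()) return n.value;
        for (Node c : n.children) {
            int v = search(c, alpha, beta, !maxTurn);
            if (maxTurn) alpha = Math.max(alpha, v);
            else         beta = Math.min(beta, v);
            if (alpha >= beta) break; // prune the remaining children
        }
        return maxTurn ? alpha : beta;
    }
}
```

Called with the full window search(root, Integer.MIN_VALUE, Integer.MAX_VALUE, true), this returns the same root value as plain minimax.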
Best-case analysis
- omit the principal variation
- at depth d − 1 optimum pruning: each node expands one child at depth d...

Principal variation search
- alpha-beta range should be small
- limit the range artificially → aspiration s...

Games of chance
- minimax trees assume deterministic moves
- what about indeterministic events like tossing a coin, ca...
§5 Path Finding
- common problem in computer games
- routing characters, troops etc.
- computationally intensive...

Problem statement
- given a start point s and a goal point r, find a path from s to r minimizing a given criterion
- searc...

The three phases of path finding
1. discretize the game world
2. solve the path finding problem in a graph
3. ...

Discretization
- waypoints (vertices)
  - doorways, corners, obstacles, tunnels, passages, …
- connections (edges)...

Grid
- regular tiling of polygons
  - square grid
  - triangular grid
  - hexagonal grid
- tile = waypoint
- til...

Navigation mesh
- convex partitioning of the game world geometry
- convex polygons covering the game world
- adjac...

Solving the convex partitioning problem
- minimize the number of polygons
- optimal solution
- poin...

Path finding in a graph
- after discretization form a graph G = (V, E)
- waypoints = vertices (V)
- connections = ...

Graph algorithms
- breadth-first search
  - running time: O(|V| + |E|)
- depth-first search
  - running time: O(|V| + |E|)

Heuristical improvements
- best-first search
  - order the vertices in the neighbourhood according to a heuristic esti...

Evaluation function
- expand vertex minimizing f(v) = g(s ~> v) + h(v ~> r)
- g(s ~> v) estimates the minimum cost from th...

Cost function g
- actual cost from s to v along the cheapest path found so far
- exact cost if G is a tree
- can n...

Heuristic function h
- carries information from outside the graph
- defined for the problem domain
- the closer to the a...

Admissibility
- let Algorithm A be a best-first search using the evaluation function f
- search algorithm is admissible if...

Monotonicity
- h is locally admissible → h is monotonic
- monotonic heuristic is also admissible
- actual cost is never ...

Optimality
- Optimality theorem: The first path from s to r found by A* is optimal.
- Proof: textbook p. 105

Informedness
- the more closely h approximates h*, the better A* performs
- if A1 using h1 will never expand a vertex that...

Algorithm A*
- because of monotonicity
  - all weights must be positive
  - closed list can be omitted
- the path i...
A* example 1(6)
A* example 2(6)
A* example 3(6)
A* example 4(6)
A* example 5(6)
A* example 6(6)
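A compact sketch of A* on an explicit weighted graph: expand the open-list vertex with the smallest f(v) = g(v) + h(v). The adjacency-map representation and the stale-entry trick for the priority queue are implementation choices of mine, not from the course material:

```java
import java.util.*;

// A* on a weighted directed graph. adj maps a vertex to {neighbour, weight}
// pairs; h[v] is the heuristic estimate from v to the goal r. Returns the
// cost of the cheapest path from s to r, or -1 if r is unreachable.
public class AStar {
    public static int shortestPath(Map<Integer, int[][]> adj, int s, int r, int[] h) {
        PriorityQueue<int[]> open = // entries {vertex, g}, ordered by f = g + h
            new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1] + h[e[0]]));
        Map<Integer, Integer> g = new HashMap<>(); // best known cost from s
        open.add(new int[]{s, 0});
        g.put(s, 0);
        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int v = cur[0], gv = cur[1];
            if (gv > g.getOrDefault(v, Integer.MAX_VALUE)) continue; // stale entry
            if (v == r) return gv; // with admissible h, first arrival is optimal
            for (int[] edge : adj.getOrDefault(v, new int[0][])) {
                int u = edge[0], gu = gv + edge[1];
                if (gu < g.getOrDefault(u, Integer.MAX_VALUE)) {
                    g.put(u, gu);
                    open.add(new int[]{u, gu});
                }
            }
        }
        return -1;
    }
}
```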
Practical considerations
- computing h
  - despite the extra vertices expanded, less informed h may yield computationa...

Realizing the movement
- movement through the waypoints
  - unrealistic: does not follow the game world geometry
- ...

Recapitulation
1. discretization of the game world
   - grid, navigation mesh...
2. path finding in a graph
3. ...

Alternatives?
- although this is the de facto approach in (commercial) computer games, are there alternatives?
- possible ...

§6 Decision-Making
- decision-making and games
- example methods
- levels of decision-m...
MVC (revisited)
[diagram: the Model–View–Controller decomposition from §1 (model: state instance, core structures, instance data, configuration; controller: control logic, driver; view: proto-view), with the synth...]

Decision-making system
[diagram: the World emits primitive events; pattern recognition turns previous primitives into possible actions, from which the decision-making system chooses requested actions]
Three perspectives for decision-making in computer games
- level of decision-making
  - strategic, tactical, opera...

Level of decision-making
- strategic
  - what should be done
- tactical
  - how to actuate it
- operational
  - ...

Strategic level
- long-term decisions
- infrequent → can be computed offline or in the background
- large amount...

Tactical level
- medium-term decisions
- intermediary between strategic and operational levels
- follow the plan m...

Operational level
- short-term decisions
- reactive, real-time response
- concrete and closely connected to the game ...
Use of the modelled knowledge
- time series data
- world = a generator of events and states, which can be labelled with sy...

Prediction
[diagram: Generator → Modeller; the prediction is the alternative with maximum probability]

Production
[diagram: Modeller → Generator; the production is a random selection from the probability distribution]

Decision-making methods
- optimization
  - find an optimal solution for a given objective function
  - affecting facto...
Optimization
[diagram: an objective function over the solution space, with a local optimum and the global optimum marked]

Optimization methods
- hill-climbing
  - how to escape local optima?
- tabu search
- simulated annealing
- genetic ...

Adaptation
[diagram: sample cases are fitted to a function, the fitted function yields the solution, and feedback adjusts the fit]

Adaptation methods
- neural networks
  - training
  - supervised learning
  - unsupervised learning (e.g., self-organi...
Finite state machine (FSM)
- components:
  - states
  - transitions
  - events
  - actions
- state chart: fully con...

Properties of FSM
1. acceptor
2. transducer
   - what is the corresponding output sequence for a given input...
3. ...

Mealy and Moore machines
- theoretical categories for FSMs
- Mealy machine
  - actions are in transitions
  - the next a...
Implementation
- design by contract
  - two parties: the supplier and the client
  - formal agreement using interfaces...
Noteworthy
- structure is static
- reactivity
- not for continuous or multivalued values
- combinatorial...

Flocking
- C. W. Reynolds: “Flocks, herds, and schools: A distributed behavioral model” (1987)
- a flock seems to react as...

Rules of flocking
1. Separation: Do not crowd flockmates.
2. Alignment: Move in the same direction as flockmat...
3. ...
4. ...

Observations
- stateless algorithm
- no information needs to be maintained
- boid re-evaluates the environment on ...

Other uses for flocking
- swarm algorithms
  - solution candidate = boid
  - solution space = flying space
  - separat...

Influence maps
- discrete representation of the synthetic player’s knowledge of the world
- strategic and tactical informa...

Assumptions
- a regular grid over the game world
- each tile holds numeric information of the corresponding area
- ...

Construction
1. initialization
   - assign values to the tiles where the influence exists
2. propagation
   - ...
Example: Initialization and propagation
[diagram: two influence maps after propagation — one with positive influence (peak 40, halving towards the edges: 40 → 20 → 10 → 5) and one with negative influence (values such as −1, −2, −4, −6)]
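The two construction steps can be sketched as follows. Each propagation pass spreads influence to the four-connected neighbours, and a tile keeps the maximum of its own value and the propagated ones; the decay factor 0.5 matches the halving pattern in the example (20 → 10 → 5) but is an assumption of this sketch:

```java
// Influence map propagation on a regular grid: each pass lets every tile
// receive half of each four-connected neighbour's influence, keeping the
// maximum of its own and the propagated values.
public class InfluenceMap {
    public static double[][] propagate(double[][] map, int passes) {
        int h = map.length, w = map[0].length;
        double[][] cur = map;
        for (int p = 0; p < passes; p++) {
            double[][] next = new double[h][w];
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    double best = cur[y][x];
                    if (y > 0)     best = Math.max(best, 0.5 * cur[y - 1][x]);
                    if (y < h - 1) best = Math.max(best, 0.5 * cur[y + 1][x]);
                    if (x > 0)     best = Math.max(best, 0.5 * cur[y][x - 1]);
                    if (x < w - 1) best = Math.max(best, 0.5 * cur[y][x + 1]);
                    next[y][x] = best;
                }
            }
            cur = next;
        }
        return cur;
    }
}
```

Starting from a single source tile of value 20, two passes produce the row 5 10 20 10 5 seen in the example.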
Aggregation
- influence maps can be combined
  - the same (or compatible) granularity
- example
  - map 1 = my tro...

Example: Aggregation
[diagram: the two influence maps of the previous example summed tile by tile (values such as 18, 36, 16 and −5, −10, −7)]

Evaluation
- static features: compute beforehand
- periodical updates
  - categorize the maps based on the rate of c...

Key questions for synthetic players
- how to achieve real-time response?
- how to distribute the synthetic players in a ne...

§7 Modelling Uncertainty
- probabilistic uncertainty
  - probability of an outcome
  - dice, shuffled cards
- statis...

Probabilistic or possibilistic uncertainty?
- Is the vase broken?
- Is the vase broken by a burglar?
- Is there a burgla...
Bayes’ theorem
- hypothesis H
- evidence E
- probability of the hypothesis P(H)
- probability of the evidence P(E)
- ...

Example
- H — there is a bug in the code
- E — a bug is detected in the test
- E|H — a bug is detected in the test given...

Example (cont’d)
- P(H) = 0.10
- P(E|H) = 0.90
- P(E|¬H) = 0.10
- P(E) = P(E|H) · P(H) + P(E|¬H) · P(¬H) = 0.18
- fr...
Bayesian networks
- describe cause-and-effect relationships with a directed graph
- vertices = propositions or varia...

Dempster-Shafer theory
- belief about a proposition as an interval [ belief, plausibility ] ⊆ [0, 1]
- belief supporting ...

Belief interval
[diagram: the unit interval from 0 to 1 split at Bel(A) and Pl(A) into belief, uncertainty and non-belief; plausibility extends to Pl(A), doubt covers the rest]
Example 1(3)
- hypotheses: animal, weather, trap, enemy
- Θ = { A, W, T, E }
- task: assign a belief value for ea...

Example 2(3)
- evidence ‘footprints’ supports A, T, E
- combination with Dempster’s rule:
  - mf({ A, T, ...

Example 3(3)
- evidence ‘candy wrapper’ supports T, E
- combination with Dempster’s rule:
  - mc({E}) = 0...
Fuzzy sets
- element x has a membership in the set A defined by a membership function μA(x)
- not in the set: μA(x) = 0
- f...

Membership function
[diagram: a membership function μA(x) over the universe U, taking values between 0 and 1]
How to assign membership functions?
- real-world data
- subjective evaluation
  - human experts’ c...

Fuzzy operations
- union: μC(x) = max{μA(x), μB(x)}
- intersection: μC(x) = min{μA(x), μB(x)}
- complement: μC(x) = 1 − ...
Fuzzy operations (cont’d)
[diagram: membership functions of A and B over U, together with A∪B and A∩B]
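The three operations from the previous slide, written out as functions on membership degrees in [0, 1]:

```java
// Standard fuzzy set operations on membership degrees:
// union = max, intersection = min, complement = 1 - mu.
public class Fuzzy {
    public static double union(double muA, double muB)        { return Math.max(muA, muB); }
    public static double intersection(double muA, double muB) { return Math.min(muA, muB); }
    public static double complement(double muA)               { return 1.0 - muA; }
}
```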
Uses for fuzzy sets
- approximate reasoning
- fuzzy constraint satisfaction problem
- fuzzy numbers
- almost any ‘cris...

Constraint satisfaction problem
- constraint satisfaction problem (CSP):
  - a set of n variables X
  - a domain Di fo...

Example: n queens problem as a CSP
- problem: place n queens on an n × n chessboard so that they do not threaten one another
- ...

Fuzzy constraint satisfaction problem
- fuzzy constraint satisfaction problem (FCSP) is a five-tuple P = 〈 V, Cµ, W, T, ...
Dog Eat Dog: Modelling the criteria as fuzzy sets
- if the visual observation of the enemy is reliable, then avoid the enemy...

Dog Eat Dog: Weighting the criteria importances
- fuzzy criterion Ci has a weight wi ∈ [0, 1]
- a greater value ...

Dog Eat Dog: Aggregating the criteria
- aggregator should have compensatory properties
- the effect of a poorly satisfied ...

Ordered weighted averaging (OWA)
- weight sequence W = (w0, w1, …, wn−1)^T
- F(a0, a1, …, an−1) = Σ wj bj
- ...
Outroduction
§1 Introduction
§2 Random Numbers
§3 Tournaments
§4 Game Trees
§5 Path Finding
§6 Decision-Making
§7 Modelling Uncertainty

The intention, huh?
- to provide a glance into the world of computer games as seen from the perspective of a computer scientist

Examinations
- examination dates
  1. October 12, 2009
  2. November 16, 2009
  3. December 14, 2009
- check the ...

source: The New Yorker, Sep. 17, 2007

Examinations (cont’d)
- questions
  - based on both lectures and the textbook
  - two questions, à 5 points
  - to pas...

Follow-up course: Multiplayer Computer Games
- focus: networking in computer games
- credits: 5 cp (3 cu)
- schedule:
- ...
Algorithms for Computer Games - lecture slides 2009
The course concentrates on algorithmic problems present in computer games. The aim of the course is to review common solution methods, analyse their usability, and describe possible improvements. The topics cover among other things random numbers, game trees, path finding, terrain generation, and decision-making for synthetic players.

  1. 1. Algorithms for Computer Games Jouni Smed Department of Information Technology University of Turku http://www.iki.fi/smed
  2. 2. Course syllabus credits: 5 cp (3 cu) n  prerequisites n  n  fundamentals of algorithms and data structures (e.g., Cormen et al., Introduction to Algorithms) n  knowledge in programming (e.g., with Java) n  assessment n  examination only (no exercises)
  3. 3. Lectures n  Lecture times n  Tuesdays 10–12 a.m. n  Thursdays 10–12 a.m. September 8 – October 8, 2009 n  Lecture room B2033, ICT Building n 
  4. 4. Examinations 1(2) n  examination dates (to be confirmed) 1.  2.  3.  n  n  ?? (possibly October 2009) ?? (possibly November 2009) ?? (possibly January 2010) check the exact times and places at http:// www.it.utu.fi/opetus/tentit/ remember to enrol! https://ssl.utu.fi/nettiopsu/
  5. 5. Examinations 2(2) n  questions n  based on both lectures and the textbook n  two questions, à 5 points n  to pass the examination, at least 5 points (50%) are required n  grade: g = ⎡p − 5⎤ n  questions are in English, but you can answer in English or in Finnish
  6. 6. Web page http://www.iki.fi/smed/a4cg news and announcements n  slides, code examples, additional material n  discussion forum n 
  7. 7. Follow-up course: Multiplayer Computer Games focus: networking in computer games n  credits: 5 cp (3 cu) n  schedule: n  n  October 27 – November 26, 2009 n  Tuesdays 10–12 a.m., and Thursdays 10–12 a.m. n  web page: http://www.iki.fi/smed/mcg
  8. 8. Textbook n  n  Jouni Smed & Harri Hakonen: Algorithms and Networking for Computer Games, John Wiley & Sons, 2006. http://www.wiley.com/go/smed
  9. 9. Computer games
  10. 10. In the beginning... “If, when walking down the halls of MIT, you should happen to hear strange cries of ‘No! No! Turn! Fire! ARRRGGGHHH!!,’ do not be alarmed. Another western is not being filmed—MIT students and others are merely participating in a new sport, SPACEWAR!” D. J. Edwards & J. M. Graetz, “PDP-1 Plays at Spacewar”, Decuscope, 1(1):2–4, April 1962
  11. 11. ...and then n  n  n  n  n  n  n  n  1962: Spacewar 1971: Nutting: Computer Space 1972: Atari: Pong 1978: Midway: Space Invaders 1979: Roy Trubshaw: MUD 1980: Namco: Pac-Man 1981: Nintendo: Donkey Kong 1983: Commodore 64 n  n  n  n  n  n  n  1985: Alexei Pajitnov: Tetris 1989: Nintendo Game Boy 1993: id Software: Doom 1994: Sony Playstation 1997: Origin: Ultima Online 2001: Microsoft Xbox 2006: Nintendo Wii
  12. 12. Three academic perspectives to computer games Humanistic perspective Game design n  n  n  GAME Technical n  perspective rules graphics animation audio Game programming n  n  Administrative/ business perspective n  Software development n  n  n  n  n  design patterns architectures testing reuse gfx & audio simulation networking AI
  13. 13. Game development is team work source: Game Developer, 2005
  14. 14. Intention of this cource n  to provide a glance into the world of computer games as seen from the perspective of a computer scientist
  15. 15. Contents §1 §2 §3 §4 §5 §6 §7 Introduction Random Numbers Tournaments Game Trees Path Finding Decision-making Modelling Uncertainty
  16. 16. §1 Introduction definitions: play, game, computer game n  anatomy of computer games n  synthetic players n  multiplaying n  games and story-telling n  other game design considerations n 
  17. 17. First, a thought game n  what features are common to all games?
  18. 18. Components of a game players: willing to participate for enjoyment, diversion or amusement n  rules: define limits of the game n  goals: gives a sense of purpose n  opponents: give arise to contest and rivarly n  representation: concretizes the game n 
  19. 19. Components, relationships and aspects of a game cor r n ce ponde es representation rules definition goal obstru ction opponent CHALLENGE PLAY CONFLICT player
  20. 20. Definition for ‘computer game’ a game that is carried out with the help of a computer program n  roles: n  n  coordinating the game process n  illustrating the situation n  participating as a player → Model–View–Controller architectural pattern
21. Model–View–Controller
[diagram: model (state instance, core structures, proto-view), view (rendering to an output device, perception, options, synthetic view), controller (control logic, driver with input device or script, actions of human and synthetic players)]
  22. 22. Synthetic players n  synthetic player = computer-generated actor in the game n  displays human-like features n  has a stance towards the human player n  games are anthropocentric!
23. Humanness
- human traits and characteristics
  - fear and panic (Half-Life, Halo)
- computer game comprising only synthetic players
  - semi-autonomous actors (The Sims)
  - fully autonomous actors (Core War, AOE2)
  24. 24. Stance towards the player n  ally neutral enemy
  25. 25. Enemy n  provides challenge n  opponent must demonstrate intelligent (or at least purposeful) behaviour n  cheating n  n  quick-and-dirty methods n  when the human player cannot observe enemy’s actions
26. Ally
- augmenting the user interface
  - hints and guides
- aiding the human player
  - reconnaissance officer
  - teammate, wingman
- should observe the human point of view
  - provide information in an accessible format
  - consistency of actions
27. Neutral
- commentator
  - highlighting events and providing background information
- camera director
  - choosing camera views, angles and cuts
- referee
  - judging the rule violations
- should observe the context and conventions
28. Studying synthetic players: AIsHockey
- simplified ice hockey: official IIHF rules
- realistic measures and weights
- Newtonian physics engine
- distributed system
  - client/server architecture
- implemented with Java
  - source code available (under a BSD licence)
29. Example: MyAI.java

import fi.utu.cs.hockey.ai.*;

public class MyAI extends AI implements Constants {
    public void react() {
        if (isPuckWithinReach()) {
            head(headingTo(0.0, THEIR_GOAL_LINE));
            brake(0.5);
            shoot(1.0);
            say(1050L);
        } else {
            head(headingTo(puck()));
            dash(1.0);
        }
    }
}
  30. 30. Try it yourself! challenge: implement a team of autonomous collaborating synthetic players n  the platform and ready-to-use teams available at: http://www.iki.fi/smed/aishockey n 
  31. 31. Multiplaying multiple human players sharing the same game n  methods: n  n  divide the screen n  divide the playtime n  networking All this and more in the follow-up course Multiplayer Computer Games starting October 27, 2009.
32. Games and story-telling
- traditional, linear story-telling
  - events remain (almost) unchanged from one telling to another
  - books, theatre, cinema
  - participant (reader, watcher) is passive
- interactive story-telling
  - events change and adapt to the choices the participant makes
  - computer games
  - participant (player) is active
  33. 33. A story is always told to human beings n  story-telling is not about actions but reasons for actions n  humans use a story (i.e., a narrative) to understand intentional behaviour n  how can we model and generate this? n  story-telling is about humans n  humans humanize the characters’ behaviour and understand the story through themselves n  how can we model and generate this? All this and more in the course Interactive Storytelling lectured in the Autumn 2010.
  34. 34. Other game design considerations customization n  tutorial n  profiles n  modification n  replaying n  → parameterization!
  35. 35. §2 Random Numbers what is randomness? n  linear congruential method n  n  parameter choices n  testing random shuffling n  uses in computer games n 
  36. 36. What are random numbers good for (according to D.E.K.) simulation n  sampling n  numerical analysis n  computer programming n  decision-making n  aesthetics n  recreation n 
37. Random numbers?
- there is no such thing as a ‘random number’
  - is 42 a random number?
- definition: a sequence of statistically independent random numbers with a uniform distribution
  - numbers are obtained by chance
  - they have nothing to do with the other numbers in the sequence
  - uniform distribution: each possible number is equally probable
38. Methods
- random selection
  - drawing balls out of a ‘well-stirred urn’
  - tables of random digits
  - decimals from π
- generating data
  - white noise generators
  - cosmic background radiation
- computer programs?
39. Generating random numbers with arithmetic operations
- von Neumann (ca. 1946): middle square method
  - take the square of the previous number and extract the middle digits
- example: four-digit numbers
  - r_i = 8269
  - r_{i+1} = 3763 (r_i² = 68376361)
  - r_{i+2} = 1601 (r_{i+1}² = 14160169)
  - r_{i+3} = 5632 (r_{i+2}² = 02563201)
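The middle square step can be written out as a few lines of Java (a minimal sketch for four-digit numbers; the class and method names are mine, not part of the course material):

```java
// Von Neumann's middle square method for four-digit numbers:
// square the current value (up to 8 digits) and keep the middle four.
public class MiddleSquare {
    static int next(int r) {
        long square = (long) r * r;            // e.g. 8269^2 = 68376361
        return (int) ((square / 100) % 10000); // drop last two digits, keep middle four
    }

    public static void main(String[] args) {
        int r = 8269;
        for (int i = 0; i < 3; i++) {
            r = next(r);
            System.out.println(r);             // 3763, 1601, 5632
        }
    }
}
```

Starting the same code from 6100 reproduces the short cycle shown on the ‘Middle square (revisited)’ slide.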
40. Truly random numbers?
- each number is completely determined by its predecessor!
- the sequence is not random but appears to be → pseudo-random numbers
- all random number generators based on arithmetic operations have their own in-built characteristic regularities
- hence, testing and analysis are required
41. Middle square (revisited)
- another example:
  - r_i = 6100
  - r_{i+1} = 2100 (r_i² = 37210000)
  - r_{i+2} = 4100 (r_{i+1}² = 04410000)
  - r_{i+3} = 8100 (r_{i+2}² = 16810000)
  - r_{i+4} = 6100 = r_i (r_{i+3}² = 65610000)
- how to counteract?
  42. 42. Words of the wise ‘random numbers should not be generated with a method chosen at random’ — D. E. Knuth n  ‘Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.’ — J. von Neumann n 
  43. 43. Even the wise get humbled 1(2) source: Knuth, 1998
  44. 44. Even the wise get humbled 2(2) source: Knuth, 1998
  45. 45. Words of the more (or less) wise n  ‘We guarantee that each number is random individually, but we don’t guarantee that more than one of them is random.’ — anonymous computer centre’s programming consultant (quoted in Numerical Recipes in C)
  46. 46. Other concerns speed of the algorithm n  ease of implementation n  parallelization techniques n  portable implementations n 
47. Linear congruential method
- D. H. Lehmer (1949)
- choose four integers
  - modulus: m (0 < m)
  - multiplier: a (0 ≤ a < m)
  - increment: c (0 ≤ c < m)
  - starting value (or seed): X₀ (0 ≤ X₀ < m)
- obtain a sequence ⟨Xₙ⟩ by setting
  X_{n+1} = (aXₙ + c) mod m (n ≥ 0)
48. Linear congruential method (cont’d)
- let b = a − 1
- generalization:
  X_{n+k} = (aᵏXₙ + (aᵏ − 1) c/b) mod m (k ≥ 0, n ≥ 0)
- random floating point numbers Uₙ ∈ [0, 1): Uₙ = Xₙ / m
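A linear congruential generator fits in a few lines of Java. This sketch uses m = 2³² with the well-known Numerical Recipes multiplier and increment; these particular constants are an illustrative choice, not ones prescribed by the lecture:

```java
// Linear congruential generator: X_{n+1} = (a*X_n + c) mod m.
public class Lcg {
    static final long M = 1L << 32;        // modulus, a power of 2
    static final long A = 1664525;         // multiplier
    static final long C = 1013904223;      // increment
    long x;                                // current state X_n

    Lcg(long seed) { x = seed % M; }       // starting value X_0

    long nextInt() {
        x = (A * x + C) % M;               // with m = 2^i this could be x & (M - 1)
        return x;
    }

    double nextUnit() {                    // U_n = X_n / m, uniform in [0, 1)
        return (double) nextInt() / M;
    }

    public static void main(String[] args) {
        Lcg rng = new Lcg(42);
        for (int i = 0; i < 3; i++)
            System.out.println(rng.nextUnit());
    }
}
```

The same seed always reproduces the same sequence, which is exactly the property the later slide on the choice of the starting value relies on.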
49. Random integers from a given interval
- Monte Carlo methods
  - approximate solution
  - accuracy can be improved at the cost of running time
- Las Vegas methods
  - exact solution
  - termination is not guaranteed
- Sherwood methods
  - exact solution, termination guaranteed
  - reduce the difference between good and bad inputs
50. Choice of modulus m
- the sequence of random numbers is finite → period (repeating cycle)
- the period has at most m elements → the modulus should be large
- recommendation: m is a prime
- reducing modulo is fast when m is a power of 2
  - m = 2^i: x mod m = x & (2^i − 1)
51. Choice of multiplier a
- period of maximum length
  - a = c = 1: X_{n+1} = (Xₙ + 1) mod m
  - hardly random: …, 0, 1, 2, …, m − 1, 0, 1, 2, …
- results from Theorem 2.1.1
  - if m is a product of distinct primes, only a = 1 produces a full period
  - if m is divisible by a high power of some prime, there is latitude when choosing a
- rules of thumb
  - 0.01m < a < 0.99m
  - no simple, regular bit patterns in the binary representation
  52. 52. Choice of increment c n  no common factor with m n  c =1 n  c = a n  if c = 0, addition operation can be eliminated n  faster processing n  period length decreases
  53. 53. Choice of starting value X0 determines from where in the sequence the numbers are taken n  to guarantee randomness, initialization from a varying source n  n  built-in clock of the computer n  last value from the previous run n  using the same value allows to repeat the sequence
  54. 54. Tests for randomness 1(2) Frequency test n  Serial test n  Gap test n  Poker test n  Coupon collector’s test n 
  55. 55. Tests for randomness 2(2) Permutation test n  Run test n  Collision test n  Birthday spacings test n  Spectral test n 
  56. 56. Spectral test good generators will pass it n  bad generators are likely to fail it n  idea: n  n  let the length of the period be m n  take t consecutive numbers n  construct a set of t-dimensional points: { (Xn, Xn + 1, …, Xn + t – 1) | 0 ≤ n < m } n  when t increases the periodic accuracy decreases n  a truly random sequence would retain the accuracy
  57. 57. Spectral test results 1(3) source: Knuth, 1998
  58. 58. Spectral test results 2(3) source: Hellekalek, 1998
  59. 59. Spectral test results 3(3) source: Hellekalek, 1998
60. Random shuffling
- generate a random permutation, where all permutations have a uniform random distribution
- shuffling ≈ inverse sorting (!)
- ordered set S = ⟨s₁, …, sₙ⟩ to be shuffled
- naïve solution
  - enumerate all possible n! permutations
  - generate a random integer [1, n!] and select the corresponding permutation
  - practical only when n is small
61. Random sampling without replacement
- guarantees that the distribution of permutations is uniform
  - every element has a probability 1/n of being selected for the first position
  - subsequent positions are filled with the remaining n − 1 elements
  - because the selections are independent, the probability of any generated ordered set is 1/n · 1/(n − 1) · 1/(n − 2) · … · 1/1 = 1/n!
  - there are exactly n! possible permutations → the generated ordered sets have a uniform distribution
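The selection scheme above is the Fisher–Yates shuffle: position i receives an element drawn uniformly from the ones not yet placed. A minimal Java sketch (class and method names are mine):

```java
import java.util.Arrays;
import java.util.Random;

public class Shuffle {
    // Fisher-Yates: each position is filled with an element chosen
    // uniformly from the remaining ones, so all n! permutations are
    // equally likely (assuming a perfect random source).
    static void shuffle(int[] s, Random rng) {
        for (int i = 0; i < s.length - 1; i++) {
            int j = i + rng.nextInt(s.length - i); // uniform index in [i, n-1]
            int tmp = s[i]; s[i] = s[j]; s[j] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] deck = {1, 2, 3, 4, 5};
        shuffle(deck, new Random());
        System.out.println(Arrays.toString(deck));
    }
}
```

This runs in O(n) time, in contrast to the naïve solution's enumeration of all n! permutations.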
  62. 62. Riffle shuffle
  63. 63. Perfect shuffle
  64. 64. Premo: Standard order
  65. 65. Premo: After a riffle shuffle and card insertion
66. Probability of success: 52 cards, m shuffles, n guesses (probabilities in thousandths)

 n \ m:    1     2     3     4     5     6     7     8     9    10    11    12
  1      997   839   288   088   042   028   023   021   020   020   019   019
  2     1000   943   471   168   083   057   047   042   040   039   039   038
  3     1000   965   590   238   123   085   070   063   061   059   058   058
 13     1000   998   884   617   427   334   290   270   260   254   252   250
 26     1000   999   975   835   688   596   548   524   513   505   503   500

As m → ∞ the probability approaches n/52; the sharp drop is the cut-off.
  67. 67. Cut-off phenomenon: distance d from the perfect shuffle, critical number k0 source: Aldous & Diaconis, 1986
  68. 68. Random numbers in games terrain generation n  events n  character creation n  decision-making n  game world compression n  synchronized simulation n 
69. Game world compression
- used in Elite (1984)
- finite and discrete galaxy
- enumerate the positions
- set the seed value
- generate a random value for each position
  - if smaller than a given density, create a star
  - otherwise, the space is void
- each star is associated with a randomly generated number, which is used as a seed when creating the star system details (name, composition, planets)
- can be hierarchically extended
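The idea can be sketched as follows. This is not Elite’s actual scheme: the world seed, the per-coordinate hash constants and the density value are invented for illustration; only the principle — a deterministic value per position, so the galaxy is never stored — is from the slide:

```java
import java.util.Random;

public class StarField {
    static final long WORLD_SEED = 12345L; // global seed (illustrative)
    static final double DENSITY = 0.05;    // fraction of positions holding a star

    // Derive a deterministic per-position value; a star exists where the
    // value falls below the density threshold.
    static boolean hasStar(int x, int y) {
        long seed = WORLD_SEED ^ (x * 73856093L) ^ (y * 19349663L);
        return new Random(seed).nextDouble() < DENSITY;
    }

    public static void main(String[] args) {
        for (int y = 0; y < 5; y++) {
            StringBuilder row = new StringBuilder();
            for (int x = 0; x < 20; x++)
                row.append(hasStar(x, y) ? '*' : '.');
            System.out.println(row);
        }
    }
}
```

Because hasStar is a pure function of the coordinates, the same galaxy can be regenerated on demand; star system details would hang further seeds off the same per-star value.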
  70. 70. Example: Elite
  71. 71. Random game world generation n  discrete game worlds n  example: Nethack, Age of Empires n  rooms, passages, item placements n  continuous game worlds n  random world is not believable n  modular segments put together randomly n  terrain generation
  72. 72. Example: Age of Empires 2
73. Terrain generation: height map
  74. 74. Terrain generation methods simple random n  limited random n  particle deposition n  fault line n  circle hill n  midpoint displacement n 
  75. 75. Simple random terrain
  76. 76. Limited random terrain
  77. 77. Particle deposition terrain
  78. 78. Fault line terrain
  79. 79. Circle hill terrain
  80. 80. Midpoint displacement terrain
81. §3 Tournaments
- rank adjustment (or challenge) tournament
  - each match is a challenge for a rank exchange
  - types: ladder, hill climbing, pyramid, king of the hill
- elimination tournament (or cup)
  - each match eliminates the loser from the tournament
  - types: single elimination
- scoring tournament
  - each match rewards the winner
  - types: round robin
- hybridizations
82. Other uses
- game balancing
  - duelling synthetic players
  - adjusting point rewarding schemes
- heuristic search
  - selecting suboptimal candidates for a genetic algorithm
- group behaviour
  - modelling pecking order
- learning player characteristics
  - managing history knowledge
83. Example: Hill climbing tournament
[bracket: seven players (Juhani, Tuomas, Aapo, Simeoni, Timo, Lauri, Eero) playing matches m0–m5 in sequence]
84. Example: Elimination tournament
[bracket: the same seven players; first-round matches m0–m2, then m3–m4, final m5]
85. Example: Scoring tournament
[table: every pair of the seven players meets once, matches m0–m20]
  86. 86. Terms players: p0…pn − 1 n  match between pi and pj: match(i, j) n  outcome: WIN, LOSE, TIE n  rank of pi: rank(i) n  players with the rank r: rankeds(r) n  round: a set of (possibly) concurrent matches n  bracket: diagram of match pairings and rounds n 
87. Rank adjustment tournaments
- a set of already ranked players
- matches
  - independent from one another
  - the outcome affects only the participating players
- suits on-going tournaments
- example: boxing
  - matches can be limited by the rank difference
88. Ladder and pyramid tournaments
[diagram: a ladder with one player per rank 0–4, and a pyramid where several players can share a rank, e.g. rankeds(2) = { k, m, n }]
89. Hill-climbing tournament
- a.k.a.
  - top-of-the-mountain tournament
  - last man standing tournament
- specialization of the ladder tournament
  - the reigning champion defends the title against challengers
- similarly: king of the hill tournament specializes the pyramid tournament
- initialization
  - based on previous competitions
  - random
90. Elimination tournaments
- the loser of a match is eliminated from the tournament
  - no ties! → tiebreak competition
- the winner of a match continues to the next round
- how to assign the pairings for the first round?
  - seeding
- examples: football cups, snooker tournaments
91. Single elimination
[bracket: eight players; quarter-finals, semifinals, final]
92. Bye
[bracket: seven players; the eighth first-round slot is a bye]
93. Seeding
- some match pairings will not occur in a single elimination tournament
- the pairings for the first round (i.e., seeding) affect the future pairings
- seeding can be based on an existing ranking
  - favour the top-ranked players
  - reachability: give the best players an equal opportunity to proceed to the final rounds
94. Seeding methods
- random
  - does not favour any player
  - does not fulfil the reachability criterion
- standard and ordered standard
  - favour the top-ranked players
  - ordered standard: matches are listed in increasing order
- equitable
  - in the first round, the rank difference between the players is the same for each match
95. Byes and fairness
- the byes have the bottom ranks so that they get paired with the best players
- the byes appear only in the first round
96. Runners-up
- we find only the champion
  - how to determine the runners-up (e.g. silver and bronze medallists)?
- random pairing can reduce the effect of seeding
  - the best players are put into different sub-brackets
  - the rest is seeded randomly
- re-seed the players before each round
  - previous matches indicate the current position
- multiple matches per round (best-of-m)
  97. 97. Double elimination tournament n  two brackets n  winners’ bracket n  losers’ (or consolation) bracket n  initially everyone is in the winners’ bracket n  if a player loses, he is moved to the losers’ bracket n  if he loses again, he is out from the tournament n  the brackets are combined at some point n  for example, the champion of the losers’ bracket gets to the semifinal in the winners’ bracket
98. Scoring tournaments
- round robin: everybody meets everybody else once
- a scoring table determines the tournament winner
  - players are rewarded with scoring points for a win and for a tie
- matches are independent from one another
  99. 99. Reduction to a graph n players n  clique Kn n  players as vertices, matches as edges n  how to organize the rounds? n  n  a player has at most one match in a round n  a round has as many matches as possible K5
100. Reduction to a graph (cont’d)
- if n is odd, partition the edges of the clique into n disjoint sets of (n − 1)/2 edges each (the rounds)
  - in each round, one player is resting
  - player p_i rests in round i
- if n is even, reduce the problem
  - player p_{n−1} is taken out from the clique
  - solve the pairings for n − 1 players as above
  - for each round, pair the resting player p_i with player p_{n−1}
101. Round robin with seven players

round | matches        | resting
  0   | 1–6  2–5  3–4  |   0
  1   | 2–0  3–6  4–5  |   1
  2   | 3–1  4–0  5–6  |   2
  3   | 4–2  5–1  6–0  |   3
  4   | 5–3  6–2  0–1  |   4
  5   | 6–4  0–3  1–2  |   5
  6   | 0–5  1–4  2–3  |   6
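The rounds can be generated directly from the partition rule: with an odd number of players n, players i and j meet in round r exactly when i + j ≡ 2r (mod n), and player r rests. A small Java sketch (assuming odd n; the class name is mine):

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobin {
    // Matches of round r for odd n: pairs (i, j) with i + j = 2r (mod n).
    static List<int[]> round(int n, int r) {
        List<int[]> matches = new ArrayList<>();
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if ((i + j) % n == (2 * r) % n)
                    matches.add(new int[] {i, j});
        return matches;
    }

    public static void main(String[] args) {
        int n = 7;
        for (int r = 0; r < n; r++) {
            StringBuilder line = new StringBuilder("round " + r + ":");
            for (int[] m : round(n, r))
                line.append(" ").append(m[0]).append("-").append(m[1]);
            System.out.println(line + "  resting: " + r);
        }
    }
}
```

For n = 7 this yields the same pairings as the seven-player schedule: (n − 1)/2 = 3 matches per round, n rounds, and C(7, 2) = 21 matches in total.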
102. Real-world tournament examples
- boxing
  - reigning champion and challengers
- sport wrestling
  - double elimination: consolation bracket
- professional wrestling
  - royal rumble
- World Cup
- ice hockey championship
- snooker
103. Tournament characteristics for n = 15 players

                    hill climbing | king of the hill | single elimination | round robin
matches                   14      |        14        |         14         |     105
rounds                    14      |         6        |          4         |      15
champion’s matches       1…14     |       1…6        |        3…4         |      14
matches in a round         1      |       1…4        |        1…7         |       7
  104. 104. Practical considerations home matches n  venue bookings n  travelling times n  risk management n  other costs n 
105. §4 Game Trees
- perfect information games
  - no hidden information
  - two-player, perfect information games
    - Noughts and Crosses
    - Chess
    - Go
- imperfect information games
  - Poker
  - Backgammon
  - Monopoly
- zero-sum property
  - one player’s gain equals another player’s loss
106. Game tree
- all possible plays of two-player, perfect information games can be represented with a game tree
  - nodes: positions (or states)
  - edges: moves
- players: MAX (has the first move) and MIN
- ply = the length of the path between two nodes
  - MAX has even plies counting from the root node
  - MIN has odd plies counting from the root node
  107. 107. Division Nim with seven matches
  108. 108. Problem statement Given a node v in a game tree find a winning strategy for MAX (or MIN) from v or (equivalently) show that MAX (or MIN) can force a win from v
109. Minimax
- assumption: players are rational and try to win
- given a game tree, we know the outcome in the leaves
  - assign the leaves to win, draw, or loss (or a numeric value like +1, 0, −1) according to MAX’s point of view
- at nodes one ply above the leaves, we choose the best outcome among the children (which are leaves)
  - MAX: win if possible; otherwise, draw if possible; else loss
  - MIN: loss if possible; otherwise, draw if possible; else win
- recurse through the nodes until in the root
110. Minimax rules
1. If the node is labelled to MAX, assign it the maximum value of its children.
2. If the node is labelled to MIN, assign it the minimum value of its children.
- MIN minimizes, MAX maximizes → minimax
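The two rules translate directly into a recursive function. A minimal Java sketch over an explicit tree (the Node type and the example tree are mine; leaf values are from MAX’s point of view):

```java
import java.util.List;

public class Minimax {
    record Node(int value, List<Node> children) {
        static Node leaf(int v) { return new Node(v, List.of()); }
        static Node inner(Node... cs) { return new Node(0, List.of(cs)); }
    }

    // Rule 1: a MAX node gets the maximum of its children's values;
    // rule 2: a MIN node gets the minimum.
    static int minimax(Node n, boolean maxTurn) {
        if (n.children().isEmpty()) return n.value();      // leaf: known outcome
        int best = maxTurn ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node c : n.children()) {
            int v = minimax(c, !maxTurn);                  // turns alternate by ply
            best = maxTurn ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }

    public static void main(String[] args) {
        Node root = Node.inner(                            // MAX to move
            Node.inner(Node.leaf(-1), Node.leaf(+1)),      // MIN picks -1
            Node.inner(Node.leaf(0), Node.leaf(+1)));      // MIN picks 0
        System.out.println(minimax(root, true));           // prints 0
    }
}
```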
111. [example: a game tree of alternating MAX and MIN plies with leaf values ±1 propagated up by the minimax rules]
112. Analysis
- simplifying assumptions
  - internal nodes have the same branching factor b
  - the game tree is searched to a fixed depth d
- time consumption is proportional to the number of expanded nodes
  - 1 — root node (the initial ply)
  - b — nodes in the first ply
  - b² — nodes in the second ply
  - b^d — nodes in the dth ply
- overall running time O(b^d)
113. Rough estimates on running times when d = 5
- suppose expanding a node takes 1 ms
- the branching factor b depends on the game
  - Draughts (b ≈ 3): t = 0.243 s
  - Chess (b ≈ 30): t = 6¾ h
  - Go (b ≈ 300): t ≈ 77 years
- alpha-beta pruning reduces b
  114. 114. Controlling the search depth usually the whole game tree is too large → limit the search depth → a partial game tree → partial minimax n  n-move look-ahead strategy n  n  stop searching after n moves n  make the internal nodes (i.e., frontier nodes) leaves n  use an evaluation function to ‘guess’ the outcome
115. Evaluation function
- combination of numerical measurements m_i(s, p) of the game state
  - single measurement: m_i(s, p)
  - difference measurement: m_i(s, p) − m_j(s, q)
  - ratio of measurements: m_i(s, p) / m_j(s, q)
- aggregate the measurements maintaining the zero-sum property
116. Example: Noughts and Crosses
- heuristic evaluation function e:
  - count the winning lines open to MAX
  - subtract the number of winning lines open to MIN
- forced wins
  - a state is evaluated +∞ if it is a forced win for MAX
  - a state is evaluated −∞ if it is a forced win for MIN
117. Examples of the evaluation
[boards with e(•) = 6 − 5 = 1, e(•) = 4 − 5 = −1, and e(•) = +∞]
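The open-lines heuristic can be sketched as code; the board in main is my own example, not necessarily one pictured on the slide:

```java
public class TicTacToeEval {
    static final int[][] LINES = {
        {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
        {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
        {0, 4, 8}, {2, 4, 6}               // diagonals
    };

    // A line is still open to a player if the opponent occupies none of
    // its three cells. Board cells: 'X' (MAX), 'O' (MIN) or ' '.
    static int openLines(char[] b, char opponent) {
        int open = 0;
        for (int[] line : LINES) {
            boolean blocked = false;
            for (int cell : line)
                if (b[cell] == opponent) blocked = true;
            if (!blocked) open++;
        }
        return open;
    }

    // e = lines open to MAX minus lines open to MIN
    static int evaluate(char[] b) {
        return openLines(b, 'O') - openLines(b, 'X');
    }

    public static void main(String[] args) {
        char[] board = {
            'X', ' ', ' ',
            ' ', 'O', ' ',
            ' ', ' ', ' '
        };
        System.out.println(evaluate(board)); // prints -1 (4 lines open to MAX, 5 to MIN)
    }
}
```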
118. Drawbacks of partial minimax
- horizon effect
  - a heuristically promising path can lead to an unfavourable situation
  - staged search: extend the search on promising nodes
  - iterative deepening: increase n until out of memory or time
  - phase-related search: opening, midgame, end game
  - however, the horizon effect cannot be totally eliminated
- bias
  - we want to have an estimate of minimax but get a minimax of estimates
  - distortion in the root: odd plies → win, even plies → loss
119. The deeper the better...?
- assumptions:
  - n-move look-ahead
  - branching factor b, depth d, leaves with a uniform random distribution
- minimax convergence theorem:
  - n increases → the root value converges to f(b, d)
- last player theorem:
  - root values from odd and even plies are not comparable
- minimax pathology theorem:
  - n increases → the probability of selecting a non-optimal move increases (← uniformity assumption!)
120. Alpha-beta pruning
- reduce the branching factor of nodes
- alpha value
  - associated with MAX nodes
  - represents the worst outcome MAX can achieve
  - can never decrease
- beta value
  - associated with MIN nodes
  - represents the worst outcome MIN can achieve
  - can never increase
  121. 121. Example n  in a MAX node, α = 4 n  we know that MAX can make a move which will result at least the value 4 n  we can omit children whose value is less than or equal to 4 n  in a MIN node, β = 4 n  we know that MIN can make a move which will result at most the value 4 n  we can omit children whose value is greater than or equal to 4
122. Rules of pruning
1. Prune below any MIN node having a beta value less than or equal to the alpha value of any of its MAX ancestors.
2. Prune below any MAX node having an alpha value greater than or equal to the beta value of any of its MIN ancestors.
Or, simply put: if α ≥ β, then prune below!
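The pruning rule folds into the minimax recursion by threading α and β through the calls. A minimal Java sketch (the Node type and the example tree are mine):

```java
import java.util.List;

public class AlphaBeta {
    record Node(int value, List<Node> children) {
        static Node leaf(int v) { return new Node(v, List.of()); }
        static Node inner(Node... cs) { return new Node(0, List.of(cs)); }
    }

    // Fail-hard alpha-beta: alpha only grows at MAX nodes, beta only
    // shrinks at MIN nodes, and "if alpha >= beta, then prune below".
    static int search(Node n, int alpha, int beta, boolean maxTurn) {
        if (n.children().isEmpty()) return n.value();
        for (Node c : n.children()) {
            int v = search(c, alpha, beta, !maxTurn);
            if (maxTurn) alpha = Math.max(alpha, v);
            else         beta = Math.min(beta, v);
            if (alpha >= beta) break;              // remaining children pruned
        }
        return maxTurn ? alpha : beta;
    }

    public static void main(String[] args) {
        Node root = Node.inner(                     // MAX to move
            Node.inner(Node.leaf(3), Node.leaf(5)), // left MIN node: value 3
            Node.inner(Node.leaf(2), Node.leaf(9)));
        // the right MIN node is cut off after the leaf 2: beta = 2 <= alpha = 3,
        // so the leaf 9 is never examined
        System.out.println(search(root, Integer.MIN_VALUE, Integer.MAX_VALUE, true)); // prints 3
    }
}
```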
  123. 123. Example: αβ-pruning 1(3)
  124. 124. Example: αβ-pruning 2(3)
  125. 125. Example: αβ-pruning 3(3)
126. Best-case analysis
- omit the principal variation
- at depth d − 1 optimum pruning: each node expands one child at depth d
- at depth d − 2 no pruning: each node expands all children at depth d − 1
- at depth d − 3 optimum pruning
- at depth d − 4 no pruning, etc.
- total amount of expanded nodes: Ω(b^(d/2))
127. Principal variation search
- the alpha-beta range should be small
  - limit the range artificially → aspiration search
  - if the search fails, revert to the original range
- if we find a move between α and β, assume we have found a principal variation node
  - search the rest of the nodes assuming they will not produce a good move
  - if the assumption fails, re-search the node
- works well if the principal variation node is likely to get selected first
128. Games of chance
- minimax trees assume deterministic moves
- what about indeterministic events like tossing a coin, casting a die or shuffling cards?
- chance nodes: *-minimax tree
- expectiminimax
  - if node v is labelled to CHANCE, multiply the probability of each child with its expectiminimax value and return the sum over all v’s children
  - otherwise, act as in minimax
  129. 129. §5 Path Finding n  common problem in computer games n  routing n  characters, troops etc. computationally intensive problem n  complex game worlds n  high number of entities n  dynamically changing environments n  real-time response
  130. 130. Problem statement given a start point s and a goal point r, find a path from s to r minimizing a given criterion n  search problem formulation n  n  find n  a path that minimizes the cost optimization problem formulation n  minimize cost subject to the constraint of the path
131. The three phases of path finding
1. discretize the game world
  - select the waypoints and connections
2. solve the path finding problem in a graph
  - let waypoints = vertices, connections = edges, costs = weights
  - find a minimum path in the graph
3. realize the movement in the game world
  - aesthetic concerns
  - user-interface concerns
132. Discretization
- waypoints (vertices)
  - doorways, corners, obstacles, tunnels, passages, …
- connections (edges)
  - based on the game world geometry: are two waypoints connected?
- costs (weights)
  - distance, environment type, difference in altitude, …
- manual or automatic process?
  - grids, navigation meshes
  133. 133. Grid n  regular tiling of polygons n  n  n  n  n  square grid triangular grid hexagonal grid tile = waypoint tile’s neighbourhood = connections
134. Navigation mesh
- convex partitioning of the game world geometry
  - convex polygons covering the game world
  - adjacent polygons share only two points and one edge
  - no overlapping
- polygon = waypoint
  - middle points, centres of edges
- adjacent polygons = connections
135. Solving the convex partitioning problem
- minimize the number of polygons
  - points: n
  - points with a concave interior angle (notches): r ≤ n − 3
- optimal solution
  - dynamic programming: O(r²n log n)
- Hertel–Mehlhorn heuristic
  - number of polygons ≤ 4 × optimum
  - running time: O(n + r log r)
  - requires triangulation
    - running time: O(n) (at least in theory)
    - Seidel’s algorithm: O(n lg* n) (also in practice)
  136. 136. Path finding in a graph n  after discretization form a graph G = (V, E) n  waypoints = vertices (V) n  connections = edges (E) n  costs = weights of edges (weight : E → R+) n  next, find a path in the graph
137. Graph algorithms
- breadth-first search
  - running time: O(|V| + |E|)
- depth-first search
  - running time: Θ(|V| + |E|)
- Dijkstra’s algorithm
  - running time: O(|V|²)
  - can be improved to O(|V| log |V| + |E|)
  138. 138. Heuristical improvements n  best-first search n  order the vertices in the neighbourhood according to a heuristic estimate of their closeness to the goal n  returns optimal solution n  beam search n  order the vertices but expand only the most promising candidates n  can return suboptimal solution
  139. 139. Evaluation function expand vertex minimizing f(v) = g(s ~> v) + h(v ~> r) n  g(s ~> v) estimates the minimum cost from the start vertex to v n  h(v ~> r) estimates (heuristically) the cost from v to the goal vertex n  if we had exact evaluation function f *, we could solve the problem without expanding any unnecessary vertices n 
  140. 140. Cost function g n  actual cost from s to v along the cheapest path found so far n  exact cost if G is a tree n  can never underestimate the cost if G is a general graph n  f(v) = g(s ~> v) and unit cost → breadth-first search n  f(v) = –g(s ~> v) and unit cost → depth-first search
  141. 141. Heuristic function h carries information from outside the graph n  defined for the problem domain n  the closer to the actual cost, the less superfluous vertices are expanded n  f(v) = g(s ~> v) → cheapest-first search n  f(v) = h(v ~> r) → best-first search n 
  142. 142. Admissibility let Algorithm A be a best-first search using the evaluation function f n  search algorithm is admissible if it finds the minimal path (if it exists) n  n  if n  f = f *, Algorithm A is admissible Algorithm A* = Algorithm A using an estimate function h n  A* is admissible, if h does not overestimate the actual cost
  143. 143. Monotonicity h is locally admissible → h is monotonic n  monotonic heuristic is also admissible n  actual cost is never less than the heuristic cost → f will never decrease n  monotonicity → A* finds the shortest path to any vertex the first time it is expanded n  n  if a vertex is rediscovered, path will not be shorter n  simplifies implementation
  144. 144. Optimality Optimality theorem: The first path from s to r found by A* is optimal. n  Proof: textbook p. 105 n 
145. Informedness
- the more closely h approximates h*, the better A* performs
- if A₁ using h₁ will never expand a vertex that is not also expanded by A₂ using h₂, A₁ is more informed than A₂
- informedness → no other search strategy with the same amount of outside knowledge can do less work than A* and be sure of finding the optimal solution
  146. 146. Algorithm A* n  because of monotonicity n  all weights must be positive n  closed list can be omitted n  the path is constructed from the mapping π starting from the goal vertex n  s → … → π(π(π(r))) → π(π(r)) → π(r) → r
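A compact A* sketch on a small weighted graph (the adjacency-matrix graph and heuristic values are my own example; the predecessor array plays the role of the mapping π on the slide):

```java
import java.util.*;

public class AStar {
    // f(v) = g(s ~> v) + h(v ~> r); h must not overestimate (admissibility).
    static List<Integer> path(int[][] cost, int[] h, int s, int r) {
        int n = cost.length;
        int[] g = new int[n];
        Arrays.fill(g, Integer.MAX_VALUE);
        int[] pi = new int[n];                        // predecessor mapping
        Arrays.fill(pi, -1);
        g[s] = 0;
        PriorityQueue<int[]> open =                   // entries {f, vertex}
            new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        open.add(new int[] {h[s], s});
        while (!open.isEmpty()) {
            int v = open.poll()[1];
            if (v == r) break;                        // goal expanded: path is optimal
            for (int u = 0; u < n; u++)
                if (cost[v][u] > 0 && g[v] + cost[v][u] < g[u]) {
                    g[u] = g[v] + cost[v][u];         // cheapest path found so far
                    pi[u] = v;
                    open.add(new int[] {g[u] + h[u], u});
                }
        }
        LinkedList<Integer> result = new LinkedList<>();
        for (int v = r; v != -1; v = pi[v])           // s -> ... -> pi(r) -> r
            result.addFirst(v);
        return result;
    }

    public static void main(String[] args) {
        int[][] cost = {                              // 0 = no edge
            {0, 1, 4, 0},
            {1, 0, 2, 5},
            {4, 2, 0, 1},
            {0, 5, 1, 0}
        };
        int[] h = {4, 3, 1, 0};                       // admissible estimates to vertex 3
        System.out.println(path(cost, h, 0, 3));      // prints [0, 1, 2, 3]
    }
}
```

Stale priority-queue entries are harmless here: when one is popped, the g-value test fails and nothing is re-relaxed, which is why no separate closed list appears.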
  147. 147. A* example 1(6)
  148. 148. A* example 2(6)
  149. 149. A* example 3(6)
  150. 150. A* example 4(6)
  151. 151. A* example 5(6)
  152. 152. A* example 6(6)
  153. 153. Practical considerations n  computing h n  despite the extra vertices expanded, less informed h may yield computationally less intensive implementation n  suboptimal solutions n  by allowing overestimation A* becomes inadmissible, but the results may be good enough for practical purposes
  154. 154. Realizing the movement n  movement through the waypoints n  unrealistic: does not follow the game world geometry n  aesthetically displeasing: straight lines and sharp turns n  improvements n  line-of-sight testing n  obstacle avoidance n  combining path finding to user-interface n  real-time response
  155. 155. Recapitulation 1.  discretization of the game world n  n  2.  path finding in a graph n  3.  grid, navigation mesh waypoints, connections, costs Algorithm A* realizing the movement n  n  geometric corrections aesthetic improvements
  156. 156. Alternatives? Although this is the de facto approach in (commercial) computer games, are there alternatives? n  possible answers n  n  AI processors (unrealistic?) n  robotics: reactive agents (unintelligent?) n  analytical approaches (inaccessible?)
157. §6 Decision-Making
- decision-making and games
  - levels of decision-making
  - use of the modelled knowledge
  - methods
- example methods
  - finite state machines
  - flocking algorithms
  - influence maps
- this will not be a comprehensive guide into decision-making!
158. MVC (revisited)
[the Model–View–Controller diagram from §1, shown again]
159. Decision-making system
[diagram: the world emits primitive events and states; pattern recognition turns them into observed events and states, possible actions and previous primitives for the decision-making system, which sends requested actions back to the world]
  160. 160. Three perspectives for decisionmaking in computer games n  level of decision-making n  strategic, n  tactical, operational use of the modelled knowledge n  prediction, n  production methods n  optimization, adaptation
  161. 161. Level of decision-making n  strategic n  what n  tactical n  how n  should be done to actuate it operational n  how to carry it out
  162. 162. Strategic level n  long-term decisions n  infrequent → can be computed offline or in the background n  large amount of data, which is filtered to bring forth the essentials n  quantization problem? speculative (what-if scenarios) n  the cost of a wrong decision is high n 
163. 163. Tactical level n  medium-term decisions n  intermediary between strategic and operational levels n  follow the plan made on the strategic level n  convey the feedback from the operational level n  considers a group of entities n  a selected set of data to be scrutinized n  co-operation within the group
164. 164. Operational level n  short-term decisions n  reactive, real-time response n  concrete and closely connected to the game world n  considers individual entities n  the cost of a wrong decision is relatively low n  of course not to the entity itself
165. 165. Use of the modelled knowledge n  time series data n  world = a generator of events and states, which can be labelled with symbols n  prediction n  what will the generator produce next? n  production n  simulating the output of the generator n  how to cope with uncertainty?
166. 166. Prediction [diagram: a modeller observes the generator's output and predicts the next symbol by maximum probability]
167. 167. Production [diagram: the modeller imitates the generator by random selection from the learned probability distribution]
168. 168. Decision-making methods n  optimization n  find an optimal solution for a given objective function n  affecting factors can be modelled n  adaptation n  find a function behind the given solutions n  affecting factors are unknown or dynamic
169. 169. Optimization [diagram: an objective function over the solution space, with a local optimum and the global optimum marked on the optimality axis]
170. 170. Optimization methods n  hill-climbing n  how to escape local optima? n  tabu search n  simulated annealing n  multiple search traces n  genetic algorithms n  swarm algorithms
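As a concrete illustration of hill-climbing, and of escaping local optima with multiple search traces, here is a minimal sketch; the objective function `f` and the neighbourhood `steps` are invented for illustration, not from the slides.

```python
import random

def hill_climb(objective, start, neighbours, max_steps=1000):
    """Greedy ascent: move to the best neighbour until none improves."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=objective, default=current)
        if objective(best) <= objective(current):
            return current            # a local optimum: no better neighbour
        current = best
    return current

def f(x):                             # invented objective: local optimum
    return -(x - 3) ** 2 + (10 if 6 <= x <= 8 else 0)   # at 3, global at 6

def steps(x):
    return [x - 1, x + 1]

# multiple search traces: random restarts make escaping local optima likely
best = max((hill_climb(f, random.randint(-10, 20), steps)
            for _ in range(20)), key=f)
```

Tabu search and simulated annealing refine the same loop by remembering visited solutions or by occasionally accepting worse moves.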
171. 171. Adaptation [diagram: sample cases and feedback are used to fit a function over the solution space]
  172. 172. Adaptation methods n  neural networks n  training n  supervised learning n  unsupervised learning (e.g., self-organizing maps) n  execution n  hidden Markov model n  recurring structures
  173. 173. Finite state machine (FSM) n  components: n  states n  transitions n  events n  actions n  state chart: fully connected directed graph n  vertices = states n  edges = transitions
174. 174. Properties of FSM 1.  acceptor n  does the input sequence fulfil given criteria? 2.  transducer n  what is the corresponding output sequence for a given input sequence? 3.  computator n  what is the sequence of actions for a given input sequence? n  these properties are independent!
175. 175. Mealy and Moore machines n  theoretical categories for FSMs n  Mealy machine n  actions are in transitions n  the next action is determined by the current state and the occurring event n  more compact but harder to comprehend n  Moore machine n  actions are in states n  the next action is determined by the next state n  helps to understand and use state machines in UML
  176. 176. Implementation n  design by contract n  two parties: the supplier and the client n  formal agreement using interfaces n  FSM software components n  environment: view to the FSM (client) n  context: handles the dynamic aspects of the FSM (supplier) n  structure: maintains the representation of the FSM (supplier)
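A minimal sketch of a table-driven Mealy-style FSM in the spirit of the supplier/client split above: the transition table plays the role of the structure, the `handle` method that of the context. The guard character, its states, and its events are hypothetical.

```python
class FSM:
    """Structure: keeps the current state and the transition table;
    handle() is the dynamic part (the context of the slide)."""
    def __init__(self, initial, transitions):
        # transitions: (state, event) -> (next state, action), Mealy style
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        """Fire the transition for (state, event); return its action,
        or None if the event is ignored in the current state."""
        key = (self.state, event)
        if key not in self.transitions:
            return None
        self.state, action = self.transitions[key]
        return action

# hypothetical guard character: states and events invented for illustration
guard = FSM('patrol', {
    ('patrol', 'see enemy'):  ('attack', 'draw weapon'),
    ('attack', 'enemy dies'): ('patrol', 'sheathe weapon'),
    ('attack', 'low health'): ('flee',   'run away'),
    ('flee',   'safe'):       ('patrol', 'resume patrol'),
})
```

Because the table is plain data, the environment (the client) can query or replace it without touching the update logic.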
177. 177. Noteworthy n  structure is static n  hard to modify n  risk of total rewriting n  reactivity n  not for continuous or multivalued values n  combinatorial explosion n  if the states and events are independent n  memoryless representation of all possible walks from the initial state n  states are mutually exclusive: one state at a time n  high cohesion of actions
178. 178. Flocking n  C. W. Reynolds: “Flocks, herds, and schools: A distributed behavioral model” (1987) n  a flock seems to react as an autonomous entity although it is a collection of individual beings n  flocking algorithm emulates this phenomenon n  results resemble various natural group movements n  boid = an autonomous agent in a flock
179. 179. Rules of flocking 1.  Separation: Do not crowd flockmates. 2.  Alignment: Move in the same direction as flockmates. 3.  Cohesion: Stay close to flockmates. 4.  Avoidance: Avoid obstacles and enemies. → boid’s behavioural urges
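The behavioural urges above can be sketched as one steering computation per update cycle; the weights and the tuple-vector helpers are invented tuning details, not Reynolds' original parameters.

```python
def vadd(a, b): return (a[0] + b[0], a[1] + b[1])
def vsub(a, b): return (a[0] - b[0], a[1] - b[1])
def vscale(v, s): return (v[0] * s, v[1] * s)

def steer(boid, flockmates, w_sep=1.0, w_align=0.125, w_coh=0.01):
    """One update cycle: re-evaluate the neighbourhood from scratch
    (the algorithm is stateless) and blend the urges into one vector."""
    n = len(flockmates)
    sep = vel = pos = (0.0, 0.0)
    for other in flockmates:
        sep = vadd(sep, vsub(boid['pos'], other['pos']))  # 1. separation
        vel = vadd(vel, other['vel'])                     # 2. alignment
        pos = vadd(pos, other['pos'])                     # 3. cohesion
    align = vsub(vscale(vel, 1 / n), boid['vel'])
    cohere = vsub(vscale(pos, 1 / n), boid['pos'])
    # 4. avoidance of obstacles and enemies would be blended in likewise
    return vadd(vscale(sep, w_sep),
                vadd(vscale(align, w_align), vscale(cohere, w_coh)))
```

Note that no state is carried between cycles and no flockmate is in charge: the flock-level behaviour is emergent, as the next slide observes.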
  180. 180. Observations n  stateless algorithm n  no information needs to be maintained n  boid re-evaluates the environment on each update cycle n  no centralized control n  emergent behaviour
  181. 181. Other uses for flocking n  swarm algorithms n  solution candidate = boid n  solution space = flying space n  separation prevents crowding the local optima n  obstacle avoidance in path finding n  steer away from obstacles along the path
182. 182. Influence maps n  discrete representation of the synthetic player’s knowledge of the world n  strategic and tactical information n  frontiers, control points, weaknesses n  influence n  type n  repulsiveness/alluringness n  recall path finding and terrain generation
  183. 183. Assumptions a regular grid over the game world n  each tile holds numeric information of the corresponding area n  n  positive values: alluringness n  negative values: repulsiveness
184. 184. Construction 1.  initialization n  assign values to the tiles where the influence exists 2.  propagation n  spread the effect to the neighbouring tiles n  linear or exponential fall-off n  cut-off point
185. 185. Example: Initialization and propagation n  my troops:
 5 10 20 10  5
10 20 40 20 10
 5 10 20 10  5
 2  5 10  5  2
 1  2  5  2  1
n  enemy’s troops:
 0  -1  -2  -2  -2
-1  -2  -6  -4  -6
-2  -6 -12 -10 -10
-5 -10 -20 -12 -10
-2  -5 -10  -6  -4
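A sketch of the propagation step, assuming an exponential fall-off that halves the influence for every tile of Manhattan distance (truncating, and dropping contributions below the cut-off point); with a single source of strength 40, this assumption reproduces the first example grid.

```python
def propagate(n, sources, cutoff=1):
    """Spread each source over an n-by-n grid with an exponential
    fall-off: the influence halves per tile of Manhattan distance,
    and contributions below the cut-off point are dropped."""
    grid = [[0] * n for _ in range(n)]
    for (sy, sx), value in sources.items():
        for y in range(n):
            for x in range(n):
                d = abs(y - sy) + abs(x - sx)
                v = int(value / 2 ** d)       # exponential fall-off
                if abs(v) >= cutoff:
                    grid[y][x] += v
    return grid

my_troops = propagate(5, {(1, 2): 40})   # single source of strength 40
```

A linear fall-off would simply replace the division by `value - k * d` for some slope `k`.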
186. 186. Aggregation n  influence maps can be combined n  the same (or compatible) granularity n  example n  map 1 = my troops n  map 2 = enemy’s troops n  map 3 = map 1 + map 2 = battlefield n  aggregation n  operator: sum, product n  weights: to balance the effects
187. 187. Example: Aggregation n  battlefield = my troops + enemy’s troops:
 5  9  18  8  3
 9 18  34 16  4
 3  4   8  0 -5
-3 -5 -10 -7 -8
-1 -3  -5 -4 -3
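Aggregation itself is a tile-by-tile operation; a minimal weighted-sum sketch (the 2-by-2 grids are hypothetical excerpts, not the slide's full maps):

```python
def aggregate(maps, weights=None):
    """Combine influence maps of the same granularity tile by tile
    as a weighted sum; the weights balance the maps' effects."""
    weights = weights or [1] * len(maps)
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[y][x] for w, m in zip(weights, maps))
             for x in range(cols)]
            for y in range(rows)]

mine  = [[5, 10], [10, 20]]              # hypothetical 2-by-2 excerpts
enemy = [[0, -1], [-1, -2]]
battlefield = aggregate([mine, enemy])   # → [[5, 9], [9, 18]]
```

A product operator, or map-specific weights, drops in by changing the `sum` expression or the `weights` argument.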
  188. 188. Evaluation static features: compute beforehand n  periodical updates n  n  categorize the maps based on the rate of change n  lazy evaluation
189. 189. Key questions for synthetic players n  how to achieve real-time response? n  how to distribute the synthetic players in a network? n  how autonomous should the synthetic players be? n  how to communicate with other synthetic players?
190. 190. §7 Modelling Uncertainty n  probabilistic uncertainty n  probability of an outcome n  dice, shuffled cards n  statistical reasoning n  Bayesian networks, Dempster-Shafer theory n  possibilistic uncertainty n  possibility of classifying an object n  sorites paradoxes n  fuzzy sets
  191. 191. Probabilistic or possibilistic uncertainty? Is the vase broken? n  Is the vase broken by a burglar? n  Is there a burglar in the closet? n  Is the burglar in the closet a man? n  Is the man in the closet a burglar? n 
  192. 192. Bayes’ theorem hypothesis H n  evidence E n  probability of the hypothesis P(H) n  probability of the evidence P(E) n  probability of the hypothesis based on the evidence P(H|E) = (P(E|H) · P(H)) / P(E) n 
  193. 193. Example H — there is a bug in the code n  E — a bug is detected in the test n  E|H — a bug is detected in the test given that there is a bug in the code n  H|E — there is a bug in the code given that a bug is detected in the test n 
194. 194. Example (cont’d) n  P(H) = 0.10 n  P(E|H) = 0.90 n  P(E|¬H) = 0.10 n  P(E) = P(E|H) · P(H) + P(E|¬H) · P(¬H) = 0.18 n  from Bayes’ theorem: P(H|E) = 0.5 n  conclusion: a detected bug has only a fifty-fifty chance of being an actual bug in the code
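The bug-detection numbers can be verified directly from Bayes' theorem and the law of total probability:

```python
def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem; P(E) via the law of total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# the bug-detection example: P(H) = 0.10, P(E|H) = 0.90, P(E|¬H) = 0.10
p = bayes(0.10, 0.90, 0.10)      # → 0.5
```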
195. 195. Bayesian networks n  describe cause-and-effect relationships with a directed graph n  vertices = propositions or variables n  edges = dependencies as probabilities n  propagation of the probabilities n  problems: n  the relationships between the evidence and hypotheses must be known n  establishing and updating the probabilities
196. 196. Dempster-Shafer theory n  belief about a proposition as an interval [ belief, plausibility ] ⊆ [ 0, 1 ] n  belief supporting A: Bel(A) n  plausibility of A: Pl(A) = 1 − Bel(¬A) n  example: Bel(intruder) = 0.3, Pl(intruder) = 0.8 n  Bel(no intruder) = 0.2 n  0.5 of the probability range is indeterminate
197. 197. Belief interval [diagram: the unit interval split at Bel(A) and Pl(A): belief covers [0, Bel(A)], uncertainty [Bel(A), Pl(A)], doubt [Pl(A), 1]; plausibility spans [0, Pl(A)] and non-belief [Bel(A), 1]]
198. 198. Example 1(3) n  hypotheses: animal, weather, trap, enemy n  Θ = { A, W, T, E } n  task: assign a belief value for each hypothesis n  evidence can affect one or more hypotheses n  mass function m(H) = current belief to the set H of hypotheses n  in the beginning m(Θ) = 1 n  evidence ‘noise’ supports A, W and E n  mass function mn({ A, W, E }) = 0.6, mn(Θ) = 0.4
199. 199. Example 2(3) n  evidence ‘footprints’ supports A, T, E n  mf({ A, T, E }) = 0.8, mf(Θ) = 0.2 n  combination with Dempster’s rule: n  mnf({A, E}) = 0.48, mnf({W, A, E}) = 0.12, mnf({A, T, E}) = 0.32, mnf(Θ) = 0.08 n  enemy, trap, trap or enemy, weather, or animal? n  Bel(E) = 0, Pl(E) = 1 n  Bel(T) = 0, Pl(T) = 0.4 n  Bel(T, E) = 0, Pl(T, E) = 1 n  Bel(W) = 0, Pl(W) = 0.2 n  Bel(A) = 0, Pl(A) = 1
200. 200. Example 3(3) n  evidence ‘candy wrapper’ supports T, E n  mc({E}) = 0.6, mc({T}) = 0.3, mc(Θ) = 0.1 n  combination with Dempster’s rule: n  mnfc({E}) = 0.73, mnfc({T}) = 0.15, mnfc({A, E}) = 0.06, mnfc({A, T, E}) = 0.04, mnfc({W, A, E}) = 0.01, mnfc(Θ) = 0.01 n  enemy, trap, trap or enemy, weather, or animal? n  Bel(E) = 0.73, Pl(E) = 0.85 n  Bel(T) = 0.15, Pl(T) = 0.2 n  Bel(T, E) = 0.88, Pl(T, E) = 1 n  Bel(W) = 0, Pl(W) = 0.02 n  Bel(A) = 0, Pl(A) = 0.03
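The combinations in the example can be checked with a small implementation of Dempster's rule; hypothesis sets are encoded as frozensets over Θ = {A, W, T, E}.

```python
def combine(m1, m2):
    """Dempster's rule: multiply the masses of intersecting focal sets
    and renormalize by 1 - K, where K is the mass landing on conflict."""
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            s = s1 & s2
            if s:
                combined[s] = combined.get(s, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

def bel(m, a):
    """Belief: total mass of the subsets of a."""
    return sum(v for s, v in m.items() if s <= a)

def pl(m, a):
    """Plausibility: total mass of the sets intersecting a."""
    return sum(v for s, v in m.items() if s & a)

theta = frozenset('AWTE')
m_noise = {frozenset('AWE'): 0.6, theta: 0.4}
m_footprints = {frozenset('ATE'): 0.8, theta: 0.2}
m_candy = {frozenset('E'): 0.6, frozenset('T'): 0.3, theta: 0.1}
m = combine(combine(m_noise, m_footprints), m_candy)
```

Rounding the combined masses to two decimals gives the slide's values, e.g. Bel(E) ≈ 0.73 and Bel(T) ≈ 0.15.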
201. 201. Fuzzy sets n  element x has a membership in the set A defined by a membership function μA(x) n  not in the set: μA(x) = 0 n  fully in the set: μA(x) = 1 n  partially in the set: 0 < μA(x) < 1 n  contrast to classical ‘crisp’ sets n  not in the set: χA(x) = 0 n  in the set: χA(x) = 1
202. 202. Membership function [diagram: a membership function μA(x) taking values between 0 and 1 over the universe U]
203. 203. How to assign membership functions? n  real-world data n  physical measurements n  statistical data n  subjective evaluation n  human experts’ cognitive knowledge n  questionnaires, psychological tests n  adaptation n  neural networks, genetic algorithms n  → simple functions usually work well enough as long as they model the general trend
  204. 204. Fuzzy operations union: μC(x) = max{μA(x), μB(x)} n  intersection: μC(x) = min{μA(x), μB(x)} n  complement: μC(x) = 1 − μA(x) n  n  note: operations can be defined differently
205. 205. Fuzzy operations (cont’d) [diagram: membership functions of sets A and B over the universe U, with their union A∪B (pointwise maximum) and intersection A∩B (pointwise minimum)]
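The max/min operations are straightforward to express over membership functions; the triangular membership function and the two sets below are invented examples.

```python
def union(mu_a, mu_b):
    return lambda x: max(mu_a(x), mu_b(x))

def intersection(mu_a, mu_b):
    return lambda x: min(mu_a(x), mu_b(x))

def complement(mu_a):
    return lambda x: 1 - mu_a(x)

def triangle(left, peak, right):
    """A triangular membership function: 0 outside (left, right),
    rising linearly to 1 at the peak and falling back to 0."""
    def mu(x):
        if left < x <= peak:
            return (x - left) / (peak - left)
        if peak < x < right:
            return (right - x) / (right - peak)
        return 0.0
    return mu

small = triangle(0, 5, 10)     # invented example sets
medium = triangle(5, 10, 15)
```

As the slide notes, other definitions (e.g. product-based t-norms) can replace max/min without changing the interface.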
  206. 206. Uses for fuzzy sets approximate reasoning n  fuzzy constraint satisfaction problem n  fuzzy numbers n  almost any ‘crisp’ method can be fuzzified! n 
207. 207. Constraint satisfaction problem n  constraint satisfaction problem (CSP): n  a set of n variables X n  a domain Di for each variable xi in X n  a set of constraints restricting the feasibility of the tuples (x0, x1,…, xn – 1) ∈ D0 × … × Dn – 1 n  solution: an assignment of a value to each variable so that every constraint is satisfied n  no objective function → not an optimization problem
208. 208. Example: n queens problem as a CSP n  problem: place n queens on an n × n chessboard so that they do not threaten one another n  CSP formulation n  variables: xi for each row i n  domain: Di = { 1, 2,…, n } n  constraints: for all i ≠ j n  xi ≠ xj n  xi – xj ≠ i – j n  xj – xi ≠ i – j
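A backtracking sketch of this CSP formulation: one variable per row, its value being the queen's column, with the three constraints used to prune partial assignments.

```python
def n_queens(n):
    """Backtracking over the CSP: x[i] is the column (1..n) of the queen
    on row i; an assignment is pruned as soon as a constraint breaks."""
    def consistent(x, row):
        return all(x[i] != x[row] and             # same column
                   x[i] - x[row] != i - row and   # one diagonal
                   x[row] - x[i] != i - row       # the other diagonal
                   for i in range(row))

    def extend(x, row):
        if row == n:                      # every variable assigned
            return x
        for col in range(1, n + 1):       # domain Di = {1, 2, ..., n}
            x[row] = col
            if consistent(x, row) and extend(x, row + 1):
                return x
        x[row] = 0                        # undo and backtrack
        return None

    return extend([0] * n, 0)
```

Because there is no objective function, any consistent total assignment is acceptable; the search stops at the first one found.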
209. 209. Fuzzy constraint satisfaction problem n  fuzzy constraint satisfaction problem (FCSP) is a five-tuple P = 〈 V, Cµ, W, T, U 〉 n  V: variables n  U: universes (domains) for the variables n  Cµ: constraints as membership functions n  W: weighting scheme n  T: aggregation function
  210. 210. Dog Eat Dog: Modelling the criteria as fuzzy sets if the visual observation of the enemy is reliable, then avoid the enemy n  if the visual observation of the prey is reliable, then chase the prey n  if the olfactory observation of the pond is reliable, then go to the pond n  if the visual observation of the enemy is reliable, then stay in the centre of the play field n 
211. 211. Dog Eat Dog: Weighting the criteria n  importances n  fuzzy criterion Ci has a weight wi ∈ [0, 1] n  a greater value wi corresponds to a greater importance n  the weighted value from the implication wi → Ci n  classical definition (A → B ⇔ ¬A ∨ B): max{ (1 − wi), Ci } n  Yager’s weighting scheme n  the weighted membership value: μCw(x) = 1, if μC(x) = 0 and w = 0; (μC(x))^w otherwise
  212. 212. Dog Eat Dog: Aggregating the criteria aggregator should have compensatory properties n  the effect of a poorly satisfied criterion is not so drastic n  mean-based operators instead of conjunction n  n  ordered weighted averaging (OWA)
213. 213. Ordered weighted averaging (OWA) n  weight sequence W = (w0, w1,…, wn – 1)T n  ∀wi ∈ [0, 1] and Σwi = 1 n  F(a0, a1,…, an – 1) = Σ wjbj n  bj is the (j + 1)th largest element of the sequence A = 〈a0, a1,…, an – 1〉 n  by setting the weight sequence we can get n  conjunction: W = { 0, 0,…, 1 } = min{A} n  disjunction: W = { 1, 0,…, 0 } = max{A} n  average: W = { 1/n, 1/n,…, 1/n } n  soft-and operator: wi = 2(i + 1) / (n(n + 1)) n  example: n = 4, W = { 0.1, 0.2, 0.3, 0.4 }
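A sketch of the OWA operator with the weight sequences listed above; the input sequence `a` is an invented example of criterion satisfaction values.

```python
def owa(weights, values):
    """Ordered weighted averaging: sort the values into descending
    order first, so that w0 always applies to the largest value."""
    return sum(w * b for w, b in zip(weights, sorted(values, reverse=True)))

a = [0.6, 0.9, 0.2, 0.5]          # invented criterion satisfaction values
n = len(a)
conjunction = owa([0, 0, 0, 1], a)                  # = min{A} = 0.2
disjunction = owa([1, 0, 0, 0], a)                  # = max{A} = 0.9
average     = owa([1 / n] * n, a)                   # arithmetic mean
soft_and    = owa([2 * (i + 1) / (n * (n + 1))      # soft-and weights
                   for i in range(n)], a)           # (0.1, 0.2, 0.3, 0.4)
```

The soft-and weights emphasize the smaller values without letting a single poorly satisfied criterion veto the whole aggregate, which is exactly the compensatory property asked for above.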
  214. 214. Outroduction §1 §2 §3 §4 §5 §6 §7 Introduction Random Numbers Tournaments Game Trees Path Finding Decision-Making Modelling Uncertainty
  215. 215. The intention, huh? n  to provide a glance into the world of computer games as seen from the perspective of a computer scientist
216. 216. Examinations n  examination dates 1.  October 12, 2009 2.  November 16, 2009 3.  December 14, 2009 n  check the exact times and places at http://www.it.utu.fi/opetus/tentit/ n  remember to enrol! https://ssl.utu.fi/nettiopsu/
  217. 217. source: The New Yorker, Sep. 17, 2007
  218. 218. Examinations (cont’d) n  questions n  based on both lectures and the textbook n  two questions, à 5 points n  to pass the examination, at least 5 points (50%) are required n  grade: g = ⎡p − 5⎤ n  questions are in English, but you can answer in English or in Finnish
219. 219. Follow-up course: Multiplayer Computer Games n  focus: networking in computer games n  credits: 5 cp (3 cu) n  schedule: n  October 27 – November 26, 2009 n  Tuesdays 10–12 a.m. and Thursdays 10–12 a.m. n  web page: http://www.iki.fi/smed/mcg