Ph.D. Proposal
Steven Hamblin
Social foraging + equilibrium behaviour = ? Learning.

[Figure: the producer-scrounger foraging game, with producer and scrounger roles labelled.]
Chapter 1: Evolution of learning rules for foraging.
Chapter 2: Learning rules, the next step.
Chapter 3: Landscape geometry and foraging.
Chapter 4: Predator-prey coevolution.
Chapter 1: Evolution of learning rules.
Rules:

Relative Payoff Sum?   S_i(t) = x S_i(t-1) + (1-x) r_i + P_i(t)

Perfect Memory?        S_i(t) = α + R_i(t) / (Θ + N_i(t))

Linear Operator?       S_i(t) = x S_i(t-1) + (1-x) P_i(t)

Multiple stable rules with multiple parameters?
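A minimal sketch of these three update rules in Python, assuming the reconstructed equations above. Parameter values are illustrative, and the Perfect Memory constants (written here as alpha and theta) are assumed symbol names, since the original glyphs did not survive extraction; the proportional choice rule at the end is a common convention, not something specified on the slides.

import random

def relative_payoff_sum(S_prev, P, x=0.8, r=0.1):
    # S_i(t) = x * S_i(t-1) + (1 - x) * r_i + P_i(t)
    return x * S_prev + (1 - x) * r + P

def perfect_memory(R_cum, N, alpha=0.1, theta=1.0):
    # S_i(t) = alpha + R_i(t) / (theta + N_i(t));
    # alpha and theta are free parameters (symbol names assumed).
    return alpha + R_cum / (theta + N)

def linear_operator(S_prev, P, x=0.8):
    # S_i(t) = x * S_i(t-1) + (1 - x) * P_i(t)
    return x * S_prev + (1 - x) * P

def choose(values, rng):
    # Common convention (assumed): pick alternative i with
    # probability S_i / sum_j S_j.
    total = sum(values)
    pick = rng.random() * total
    for i, v in enumerate(values):
        pick -= v
        if pick <= 0:
            return i
    return len(values) - 1

rng = random.Random(0)
S = [relative_payoff_sum(1.0, p) for p in (0.0, 2.0)]
i = choose(S, rng)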
[Flowchart: the agent decision loop. Agent start: produce or scrounge? Producing: at a patch with food? If yes, feed, and keep feeding while there is still food in the patch; if no, move randomly. Scrounging: any conspecifics feeding? If no, move randomly; if yes, move to the closest feeder, re-checking along the way that it is still feeding, and feed on arrival.]

The foraging grid is a variable-sized square grid with movement in the 4 cardinal directions.

The number of patches and the number of agents are kept to 20% and 10% of the number of grid cells, respectively. Thus a 40x40 grid (1,600 cells) would have 320 patches and 160 agents.
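A self-contained sketch of this loop under the stated grid parameters. The structure follows the flowchart; details the slides leave open (initial food per patch, wrap-around edges, one food item eaten per turn) are assumptions.

import random

rng = random.Random(0)
GRID = 40                                  # variable-sized square grid
N_PATCHES = GRID * GRID // 5               # 20% of cells: 320 patches
N_AGENTS = GRID * GRID // 10               # 10% of cells: 160 agents
CARDINAL = [(0, 1), (0, -1), (1, 0), (-1, 0)]

cells = [(x, y) for x in range(GRID) for y in range(GRID)]
patches = {pos: 10 for pos in rng.sample(cells, N_PATCHES)}  # food per patch: assumed
agents = [{"pos": rng.choice(cells), "feeding": False,
           "tactic": rng.choice(["produce", "scrounge"])} for _ in range(N_AGENTS)]

def move_random(a):
    dx, dy = rng.choice(CARDINAL)          # movement in the 4 cardinal directions
    a["pos"] = ((a["pos"][0] + dx) % GRID, (a["pos"][1] + dy) % GRID)

def step(a):
    a["feeding"] = False
    if a["tactic"] == "produce":
        if patches.get(a["pos"], 0) > 0:   # at a patch with food?
            patches[a["pos"]] -= 1         # feed
            a["feeding"] = True
        else:
            move_random(a)
    else:                                  # scrounge
        feeders = [b for b in agents if b is not a and b["feeding"]]
        if not feeders:
            move_random(a)
            return
        # move toward the closest feeding conspecific; feed on arrival
        target = min(feeders, key=lambda b: abs(b["pos"][0] - a["pos"][0])
                                          + abs(b["pos"][1] - a["pos"][1]))
        tx, ty = target["pos"]
        x, y = a["pos"]
        if (x, y) == (tx, ty):
            if patches.get((x, y), 0) > 0:
                patches[(x, y)] -= 1
                a["feeding"] = True
        elif tx != x:
            a["pos"] = (x + (1 if tx > x else -1), y)
        else:
            a["pos"] = (x, y + (1 if ty > y else -1))

for _ in range(100):                       # run some turns
    for a in agents:
        step(a)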
Genetic Algorithms

Algorithms that simulate evolution to solve optimization problems.

[Flowchart: initial population → measure fitness → after n generations, exit; otherwise select for reproduction → mutation → measure fitness again.]
[Diagram: a genetic algorithm to optimize parameters and simulate population dynamics, wrapped around the foraging / learning-rule simulation.]
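A minimal genetic-algorithm skeleton matching the loop above (measure fitness, select for reproduction, mutate, exit after n generations). The fitness function is a toy stand-in for where the foraging / learning-rule simulation would plug in.

import random

rng = random.Random(1)

def fitness(genome):
    # Stand-in: in the proposal, this would run the foraging /
    # learning-rule simulation and return foraging success.
    return -sum((g - 0.5) ** 2 for g in genome)

def evolve(pop_size=50, genome_len=3, n_generations=100, mut_sd=0.05):
    # Initial population of random parameter vectors.
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(n_generations):                       # exit after n generations
        ranked = sorted(pop, key=fitness, reverse=True)  # measure fitness
        parents = ranked[:pop_size // 2]                 # select for reproduction
        pop = [[g + rng.gauss(0, mut_sd) for g in rng.choice(parents)]  # mutation
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()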
Chapter 2: Learning rules, the next step.
Recap: the Chapter 1 rules (Relative Payoff Sum, Perfect Memory, Linear Operator) each have a fixed structure; Chapter 1 evolves only their parameters.
[Diagram: genetic programming to optimize rule structure, wrapped around the genetic algorithm (parameter optimization and population dynamics), wrapped around the foraging / learning-rule simulation.]
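A sketch of what "genetic programming to optimize rule structure" could look like: learning rules represented as expression trees, with a point mutation like the before/after tree figure in the backup slides (one operator node swapped). The representation, operator set, and terminal names here are assumptions, not the proposal's actual encoding.

import random

rng = random.Random(2)

OPS = ["+", "-", "*"]
TERMINALS = ["S_prev", "P", "x", "1"]      # hypothetical rule variables

def random_tree(depth=3):
    # Grow a random expression tree over the rule's variables.
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMINALS)
    return (rng.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def point_mutate(tree):
    # Point mutation: occasionally swap an operator node, as in
    # the backup-slide figure (a '*' becoming a '+').
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    if rng.random() < 0.2:
        op = rng.choice(OPS)
    return (op, point_mutate(left), point_mutate(right))

def evaluate(tree, env):
    # Evaluate a tree as a candidate learning-rule update.
    if not isinstance(tree, tuple):
        return env[tree]
    op, l, r = tree
    a, b = evaluate(l, env), evaluate(r, env)
    return a + b if op == "+" else a - b if op == "-" else a * b

rule = random_tree()
mutant = point_mutate(rule)
value = evaluate(mutant, {"S_prev": 2.0, "P": 5.0, "x": 0.8, "1": 1.0})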
Chapter 3: Landscape geometry and foraging.
[Diagram: a genetic algorithm to optimize parameters and simulate population dynamics, wrapped around the foraging simulation, now with swappable grids (Moore / von Neumann / hex / Dirichlet).]
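A sketch of how swappable grids might be handled: neighbourhood functions behind a common interface, so the foraging simulation is geometry-agnostic. The Moore and von Neumann offsets are standard; the hex offsets assume axial coordinates, and a Dirichlet (Voronoi) tessellation, being irregular, is reduced to a precomputed adjacency map.

# Neighbourhood offsets for the regular grids.
VON_NEUMANN = [(0, 1), (0, -1), (1, 0), (-1, 0)]
MOORE = VON_NEUMANN + [(1, 1), (1, -1), (-1, 1), (-1, -1)]
HEX_AXIAL = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def neighbours(pos, offsets, size):
    # Wrap-around edges assumed, as in the earlier sketches.
    x, y = pos
    return [((x + dx) % size, (y + dy) % size) for dx, dy in offsets]

def voronoi_neighbours(cell, adjacency):
    # Dirichlet tessellation: neighbours come from an adjacency map.
    return adjacency[cell]

print(neighbours((0, 0), MOORE, 40))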
Chapter 4: Predator-prey coevolution.
Predator scrounging vs. prey clumping.

[Diagram: coevolutionary feedback between a predator characteristic (scrounging) and a prey characteristic (clumping).]

[Diagram: a GA to optimize predator characteristics (scrounging) and a GA to optimize prey characteristics (clumping), both wrapped around the foraging simulation.]
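One plausible way to wire the two GAs together: alternate optimization of the predator trait (scrounging) and the prey trait (clumping), each population's fitness depending on the other's current state. The fitness functions are toy stand-ins for the foraging simulation and only encode that each side's payoff depends on the other's trait.

import random

rng = random.Random(3)

def predator_fitness(scrounging, prey_clumping):
    # Stand-in: scrounging pays more when prey are clumped.
    return scrounging * prey_clumping - scrounging ** 2 / 2

def prey_fitness(clumping, predator_scrounging):
    # Stand-in: clumping dilutes risk but attracts scroungers.
    return clumping * (1 - predator_scrounging) - clumping ** 2 / 2

def ga_step(pop, fit):
    # One GA generation: rank, keep the top half, mutate offspring.
    parents = sorted(pop, key=fit, reverse=True)[:len(pop) // 2]
    return [min(1.0, max(0.0, rng.choice(parents) + rng.gauss(0, 0.05)))
            for _ in range(len(pop))]

predators = [rng.random() for _ in range(50)]   # scrounging trait
prey = [rng.random() for _ in range(50)]        # clumping trait

for _ in range(200):   # coevolutionary loop: each GA sees the other's mean trait
    mean_prey = sum(prey) / len(prey)
    predators = ga_step(predators, lambda s: predator_fitness(s, mean_prey))
    mean_pred = sum(predators) / len(predators)
    prey = ga_step(prey, lambda c: prey_fitness(c, mean_pred))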
Finis. Questions?
[Backup figure: a genetic-programming expression tree before and after a point mutation; one internal '*' node becomes '+', while the remaining operators (+, -) and leaves (x, ß, 1, ∆) are unchanged.]
Relative Payoff Sum

S_i(t) = x S_i(t-1) + (1-x) r_i + P_i(t)

where 0 < x < 1 is a memory factor,
r_i > 0 is the residual value associated with alternative i,
P_i(t) is the payoff to alternative i at time t, and
S_i(t) is the value that the animal places on the behavioural alternative i at time t.
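A one-step worked update with illustrative numbers (assume x = 0.8, r_i = 0.1, S_i(t-1) = 2, P_i(t) = 5):

S_i(t) = 0.8 * 2 + 0.2 * 0.1 + 5 = 1.6 + 0.02 + 5 = 6.62

The current payoff enters at full weight, so a single large payoff can dominate the discounted old value plus the residual.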
Perfect Memory

S_i(t) = α + R_i(t) / (Θ + N_i(t))

where R_i(t) is the cumulative payoffs from alternative i to time t,
N_i(t) is the number of time periods from the beginning in which the option was selected, and
α and Θ are parameters.
Linear Operator

S_i(t) = x S_i(t-1) + (1-x) P_i(t)

where 0 < x < 1 is a memory factor,
P_i(t) is the payoff to alternative i at time t, and
S_i(t) is the value that the animal places on the behavioural alternative i at time t.
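For comparison, the same illustrative numbers under the Linear Operator (x = 0.8, S_i(t-1) = 2, P_i(t) = 5):

S_i(t) = 0.8 * 2 + 0.2 * 5 = 1.6 + 1.0 = 2.6

Here the payoff is down-weighted by (1 - x), so S_i(t) drifts toward recent payoffs instead of jumping with each one as under the Relative Payoff Sum.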
