The asymptotic maxima of a branching random walk via
spine techniques
Josh Young
Supervised by Dr Matthew Roberts
August 19, 2016
Summary
This paper is the product of a 10-week research internship granted by the Bath Institute
for Mathematical Innovation, under the supervision of Dr Matthew Roberts of the University
of Bath. Most of the results within have been well studied; the aim of this paper is to apply
these results to more specific cases, in a manner understandable to the average mathematics
undergraduate. We begin by looking at some elementary properties of Galton-Watson trees
with random infinite spines, also called size-biased Galton-Watson trees, and using these to
prove the Kesten-Stigum theorem. We then apply these spine techniques to the case of a
binary branching random walk to derive the asymptotic maximal growth rate.
Contents

1 Size-biased Galton-Watson trees
1.1 The canonical Galton-Watson Process
1.2 Size-biasing
1.3 Spine Decomposition and the Kesten-Stigum theorem

2 A discrete-time branching process on the unit square
2.1 Preliminaries and heuristics
2.2 Change of measure and Spine Decomposition
2.3 Asymptotic growth of the maximal particle
1 Size-biased Galton-Watson trees
1.1 The canonical Galton-Watson Process
We will define a Galton-Watson tree (henceforth abbreviated to GWT) in the standard manner. Let $L$ be a random variable with $\mathbb{P}(L = k) = p_k$ for $k \in \mathbb{N}\cup\{0\}$. Let $(L^{(n)}_i;\ n, i \in \mathbb{N}\cup\{0\})$ be independent copies of $L$. We define a sequence $(Z_n, n \geq 0)$ inductively by
$$Z_{n+1} := \sum_{i=1}^{Z_n} L^{(n)}_i$$
with the convention that $Z_0 = 1$. This can be visualised as a breeding process from a single ancestor, where $Z_n$ is the number of descendants in the $n$th generation, and $L^{(n)}_i$ is the number of children produced by the $i$th descendant in generation $n$. We will use the notation $|u| = n$ to indicate that particle $u$ belongs to generation $n$. We also denote the mean of the process by $m := \mathbb{E}[L]$.
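The recursion above is straightforward to simulate. The sketch below is purely illustrative (the helper names are ours, and a Poisson offspring law with $m = 1.5$ is an arbitrary supercritical choice); it also estimates $\mathbb{E}[W_n] = \mathbb{E}[Z_n/m^n]$, which should sit near 1 by the martingale property established in Proposition 1.3 below.

```python
import math
import random

m = 1.5  # offspring mean; Poisson(1.5) is an arbitrary supercritical example

def poisson_offspring(rng):
    # Knuth's algorithm for sampling a Poisson(m) variable with small mean.
    threshold, k, p = math.exp(-m), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_z(n_gens, rng):
    """Return Z_0, ..., Z_n for one Galton-Watson realisation."""
    z, history = 1, [1]
    for _ in range(n_gens):
        z = sum(poisson_offspring(rng) for _ in range(z))
        history.append(z)
    return history

rng = random.Random(42)
n, trials = 8, 5000
avg_w = sum(simulate_z(n, rng)[-1] for _ in range(trials)) / (trials * m ** n)
print(round(avg_w, 1))  # empirical E[W_n]; should be close to 1
```

Averaging $W_n$ over independent runs approximates its expectation; any supercritical offspring law would do in place of the Poisson choice.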
1.2 Size-biasing
Galton-Watson trees can also be endowed with a spine, which we will construct as follows. Label
the root of the tree ξ0. For each i ∈ N ∪ {0}, uniformly select one of the children of ξi, and label
this ξi+1. The sequence Ξ := (ξn, n ∈ N ∪ {0}) is called the spine. We will denote the number of
children of ξi by Lξi . In the next few sections, we will define the filtrations and martingales used
in the study of GW trees with spines.
Definition 1.1. For all $n \in \mathbb{N}$, define:

1. $W_n := \dfrac{Z_n}{m^n}$

2. $M_n := \dfrac{1}{m^n}\displaystyle\prod_{i=0}^{n-1} L_{\xi_i}$
Definition 1.2. For all $n \in \mathbb{N}$, let:

1. $\mathcal{F}_n$ be the σ-algebra generated by the first $n$ generations of the process

2. $\mathcal{G}_n$ be the σ-algebra generated by the first $n$ spinal particles, and the children of the first $n-1$

3. $\tilde{\mathcal{F}}_n := \sigma(\mathcal{F}_n \cup \mathcal{G}_n)$

Proposition 1.3. 1. The process $W := (W_n, n \in \mathbb{N})$ is a non-negative $\mathcal{F}_n$-martingale
2. The process $M := (M_n, n \in \mathbb{N})$ is a non-negative $\tilde{\mathcal{F}}_n$-martingale
Proof of (1). We have:
$$\mathbb{E}\!\left[\frac{Z_n}{m^n}\,\middle|\,\mathcal{F}_{n-1}\right] = \mathbb{E}\!\left[\frac{1}{m^n}\sum_{i=1}^{Z_{n-1}} L^{(n-1)}_i\,\middle|\,\mathcal{F}_{n-1}\right] = \frac{1}{m^n}\sum_{i=1}^{Z_{n-1}} \mathbb{E}\!\left[L^{(n-1)}_i\,\middle|\,\mathcal{F}_{n-1}\right] = \frac{1}{m^n}\sum_{i=1}^{Z_{n-1}} m = \frac{Z_{n-1}}{m^{n-1}}$$
Hence $W_n$ is a martingale. Non-negativity is trivial.
Proof of (2). We have:
$$\mathbb{E}[M_n\,|\,\tilde{\mathcal{F}}_{n-1}] = \frac{1}{m^n}\mathbb{E}\!\left[\prod_{i=0}^{n-1} L_{\xi_i}\,\middle|\,\tilde{\mathcal{F}}_{n-1}\right] = \frac{1}{m^n}\times\prod_{i=0}^{n-2} L_{\xi_i}\times\mathbb{E}[L_{\xi_{n-1}}\,|\,\tilde{\mathcal{F}}_{n-1}] = \frac{1}{m^{n-1}}\prod_{i=0}^{n-2} L_{\xi_i} = M_{n-1}$$
Once again, non-negativity is trivial.
We will now use the martingale $M_n$ to define a new probability measure $\mathbb{Q}$ by setting:
$$\left.\frac{d\mathbb{Q}}{d\mathbb{P}}\right|_{\tilde{\mathcal{F}}_n} = M_n$$
The following lemma and subsequent proposition will allow us to visualise how GW trees behave
under this new measure.
Lemma 1.4. $\mathbb{E}[M_n\,|\,\mathcal{F}_n] = W_n$

Proof. From the definitions, we immediately have that $\mathbb{E}[M_n\,|\,\mathcal{F}_n] = \frac{1}{m^n}\mathbb{E}\!\left[\prod_{i=0}^{n-1} L_{\xi_i}\,\middle|\,\mathcal{F}_n\right]$. We now sum over the indicator of $j \in \Xi$:
$$\frac{1}{m^n}\,\mathbb{E}\!\left[\sum_{j=1}^{Z_n}\prod_{i=0}^{n-1} L^{(i)}_j\,\mathbf{1}_{\{j\in\Xi\}}\,\middle|\,\mathcal{F}_n\right]$$
where $L^{(i)}_j$ denotes the number of children of the generation-$i$ ancestor of particle $j$. Now, both $Z_n$ and the $L^{(i)}_j$ are $\mathcal{F}_n$-measurable, so this reduces to
$$\frac{1}{m^n}\sum_{j=1}^{Z_n}\prod_{i=0}^{n-1} L^{(i)}_j\,\mathbb{P}(j\in\Xi\,|\,\mathcal{F}_n)$$
Since the spine makes a uniform choice among children at each generation, the probability of any given $j$ with $|j| = n$ being a spinal particle, given $\mathcal{F}_n$, is $1\big/\prod_{i=0}^{n-1} L^{(i)}_j$. The two products now cancel out, leaving us with $\frac{1}{m^n}\sum_{j=1}^{Z_n} 1 = \frac{Z_n}{m^n} = W_n$.
Proposition 1.5. Let $|u| = n$. Then
$$\mathbb{Q}\!\left(L^{(n)}_u = k\right) = \begin{cases}\dfrac{kp_k}{m} & u\in\Xi\\[4pt] p_k & u\notin\Xi\end{cases}$$

Proof. Consider $u\in\Xi$. That is, $u = \xi_n$. Hence $\mathbb{Q}(L^{(n)}_u = k) = \mathbb{Q}(L_{\xi_n} = k) = \mathbb{E}[M_{n+1}\mathbf{1}_{\{L_{\xi_n}=k\}}]$. The law of total expectation tells us that
$$\mathbb{E}[M_{n+1}\mathbf{1}_{\{L_{\xi_n}=k\}}] = \mathbb{E}[M_{n+1}\,|\,L_{\xi_n}=k]\times\mathbb{P}(L_{\xi_n}=k) = \mathbb{E}\!\left[\frac{k}{m^{n+1}}\prod_{i=0}^{n-1}L_{\xi_i}\right]\times p_k = \frac{kp_k}{m^{n+1}}\prod_{i=0}^{n-1}\mathbb{E}[L_{\xi_i}] = \frac{kp_k}{m^{n+1}}\times m^n = \frac{kp_k}{m}$$
We now consider $u\notin\Xi$. This gives us $\mathbb{Q}(L^{(n)}_u = k) = \mathbb{E}[M_{n+1}\mathbf{1}_{\{L^{(n)}_u=k\}}]$. Since $u\notin\Xi$, we have that $M_{n+1}$ and the indicator function are independent. We can use this and reduce the expression to $\mathbb{E}[M_{n+1}]\,\mathbb{P}(L^{(n)}_u = k) = \mathbb{E}[M_{n+1}]\,p_k$. We have as a corollary to Lemma 1.4 that $\mathbb{E}[M_{n+1}] = \mathbb{E}[W_{n+1}] = 1$, completing the proof.
This proposition tells us that the offspring of particles in the spine follow a size-biased distribution,
whereas the other particles behave in the usual manner. In particular, the probability of any spinal
particle having no children is 0. Since the root of the tree is in the spine, this tells us that under Q,
the event of extinction almost surely does not occur. Lyons and Peres [2] call GWTs under the law
Q size-biased Galton-Watson trees. An example of a size-biased GWT is given in Figure 1, where
each non-spine particle forms the root of an independent GWT.
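Proposition 1.5 is easy to check empirically: under $\mathbb{Q}$, a spine particle's offspring count is drawn with probability proportional to $kp_k$. A minimal sketch (the three-point offspring law below is an arbitrary illustration, not taken from the paper):

```python
import random

rng = random.Random(7)
p = {0: 0.2, 1: 0.3, 2: 0.5}            # illustrative offspring law
m = sum(k * pk for k, pk in p.items())  # m = 1.3

# Size-biased law for spine particles: Q(L = k) = k * p_k / m.
sb = {k: k * pk / m for k, pk in p.items()}
assert sb[0] == 0.0  # a spine particle is never childless, as noted above

draws = rng.choices(list(sb), weights=list(sb.values()), k=50000)
freq2 = draws.count(2) / len(draws)
print(round(sb[2], 3), round(freq2, 3))  # theoretical vs empirical Q(L = 2)
```

Note that the zero-offspring probability vanishes under the size-biased law, which is exactly why extinction cannot occur along the spine.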
4
ξ0
ξ1
ξ2
ξ3
GW GW
GW
GW
Figure 1: An example tree after 4 generations
1.3 Spine Decomposition and the Kesten-Stigum theorem
We will now use the properties of size-biased trees to prove Kesten and Stigum's classic limit theorem, stated below.
Theorem 1.6. The Kesten-Stigum Theorem [1]
Let $L$ be the offspring random variable of a Galton-Watson process with mean $m\in(1,\infty)$, extinction probability $q$, and martingale limit $W$. Then the following are equivalent:

a. $\mathbb{P}(W = 0) = q$

b. $\mathbb{E}[W] = 1$

c. $\mathbb{E}[L\log^+ L] < \infty$
Proposition 1.7. Spine Decomposition
For all $n \geq 1$,
$$\mathbb{E}_\mathbb{Q}[W_n\,|\,\mathcal{G}_\infty] = \sum_{i=0}^{n-1}\left(L_{\xi_i}-1\right)m^{-(i+1)} + m^{-n}$$
Proof. We will prove this result inductively. Since the root is also the first particle in the spine, we have that $Z_1 = L_{\xi_0}$. Therefore $\mathbb{E}_\mathbb{Q}[W_1\,|\,\mathcal{G}_\infty] = \frac{1}{m}L_{\xi_0} = (L_{\xi_0}-1)m^{-1} + m^{-1}$. This settles the base case. Now, we will consider $\mathbb{E}_\mathbb{Q}[W_{k+1}\,|\,\mathcal{G}_\infty]$ for some $k \geq 1$, which we will denote $E_{k+1}$. From the definitions of $W_n$ and $Z_n$, we have that
$$E_{k+1} = \frac{1}{m^{k+1}}\mathbb{E}_\mathbb{Q}\!\left[\sum_{i=1}^{Z_k}L^{(k)}_i\,\middle|\,\mathcal{G}_\infty\right]$$
Now, under $\mathbb{Q}$ we know that there is exactly one spine particle in generation $k$, specifically $\xi_k$. Removing $\xi_k$'s children from the sum (and relabelling the remaining $Z_k-1$ particles) gives
$$\frac{1}{m^{k+1}}\mathbb{E}_\mathbb{Q}\!\left[\sum_{i=1}^{Z_k-1}L^{(k)}_i + L_{\xi_k}\,\middle|\,\mathcal{G}_\infty\right]$$
We can now recall that $L_{\xi_k}$ is $\mathcal{G}_\infty$-measurable, and use the iid nature of the $L^{(k)}_i$'s (each with mean $m$ and independent of $\mathcal{G}_\infty$) to rewrite this as
$$\frac{1}{m^{k+1}}\left(\left(\mathbb{E}_\mathbb{Q}[Z_k\,|\,\mathcal{G}_\infty]-1\right)m + L_{\xi_k}\right) = \frac{1}{m^{k+1}}\left(\left(m^k E_k - 1\right)m + L_{\xi_k}\right).$$
The remaining steps are simply an exercise in algebraic manipulation. This proves the result.
Lemma 1.8. Let $X_1, X_2, \ldots$ be a sequence of non-negative iid random variables. Then
$$\limsup_{n\to\infty}\frac{1}{n}X_n = \begin{cases}0 & \text{if }\mathbb{E}[X]<\infty\\ \infty & \text{if }\mathbb{E}[X]=\infty\end{cases}$$
almost surely.
Proof. We aim to use the Borel-Cantelli lemma to show that $X_n/n$ exceeds any fixed $\epsilon > 0$ only finitely often. To do this, we consider the event $\{\frac{X_n}{n}\geq\epsilon\}$. Since the $X_n$'s are iid, this event has the same probability as $\{\frac{X}{n}\geq\epsilon\}$. We have:
$$\sum_{k=1}^{\infty}\mathbb{P}\!\left(\frac{X}{k}\geq\epsilon\right) = \sum_{k=1}^{\infty}\mathbb{P}\!\left(\frac{X}{\epsilon}\geq k\right) = \sum_{k=1}^{\infty}\mathbb{E}\!\left[\mathbf{1}_{\{X/\epsilon\geq k\}}\right] = \mathbb{E}\!\left[\sum_{k=1}^{\infty}\mathbf{1}_{\{X/\epsilon\geq k\}}\right] = \mathbb{E}\!\left[\lfloor X/\epsilon\rfloor\right],$$
which is finite if and only if $\mathbb{E}[X]$ is finite. Now by the Borel-Cantelli lemma, if $\mathbb{E}[X]<\infty$ then the event $\{X_n/n\geq\epsilon\}$ happens only finitely often for any $\epsilon>0$ (no matter how small), so $X_n/n\to 0$. On the other hand, by the second Borel-Cantelli lemma (the events are independent), if $\mathbb{E}[X]=\infty$, then the event $\{X_n/n\geq\epsilon\}$ happens infinitely often for any $\epsilon>0$ (no matter how large), so $\limsup_{n\to\infty}X_n/n=\infty$.
Lemma 1.9. Let $X_1, X_2, \ldots$ be a sequence of non-negative iid random variables. Then for any $c\in(0,1)$,
$$\sum_{k=1}^{\infty}e^{X_k}c^k\ \begin{cases}<\infty & \text{if }\mathbb{E}[X]<\infty\\ =\infty & \text{if }\mathbb{E}[X]=\infty\end{cases}$$
almost surely.
Proof. Suppose first that $\mathbb{E}[X]<\infty$. By Lemma 1.8, we have that $\limsup e^{X_k/k} = e^0 = 1$. Using the definition of limsup, we have that for all $\epsilon>0$, there exists an $M$ such that $e^{X_k/k}\leq 1+\epsilon$ for each $k\geq M$. To prove our result, fix $c\in(0,1)$ and choose $\epsilon>0$ such that $c(1+\epsilon)<1$. Then select an $M$ as above. Now:
$$\sum_{k=M}^{\infty}e^{X_k}c^k \leq \sum_{k=M}^{\infty}(1+\epsilon)^k c^k.$$
Since $(1+\epsilon)c<1$, we have that the right hand side is finite. Finally, we write
$$\sum_{k=1}^{\infty}e^{X_k}c^k = \sum_{k=1}^{M-1}e^{X_k}c^k + \sum_{k=M}^{\infty}e^{X_k}c^k.$$
Since both of these sums are almost surely finite, we have proven the result in the case $\mathbb{E}[X]<\infty$.

Now suppose $\mathbb{E}[X]=\infty$. By Lemma 1.8, $\limsup X_n/n = \infty$, so for any $K$, we can find $n_1, n_2, \ldots\to\infty$ such that $X_{n_i}/n_i\geq K$ for all $i$. Fix $c\in(0,1)$, and choose $n_i$ as above with $K = -\log c$. Then
$$\sum_{n=1}^{\infty}e^{X_n}c^n \geq \sum_{i=1}^{\infty}e^{X_{n_i}}c^{n_i} \geq \sum_{i=1}^{\infty}e^{-n_i\log c}c^{n_i} = \sum_{i=1}^{\infty}1 = \infty$$
as required.
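The convergent case of Lemma 1.9 can be seen numerically. In the sketch below (our own illustration), $X$ is uniform on $[0,5]$, an arbitrary bounded choice with $\mathbb{E}[X]<\infty$, and $c = 0.3$; the tail of the series beyond $k = 200$ is then bounded deterministically by $e^5\sum_{k>200}0.3^k$, which is astronomically small:

```python
import math
import random

rng = random.Random(1)
c = 0.3
xs = [rng.uniform(0.0, 5.0) for _ in range(400)]  # iid, E[X] = 2.5 < infinity
terms = [math.exp(x) * c ** (k + 1) for k, x in enumerate(xs)]
s_200, s_400 = sum(terms[:200]), sum(terms)
tail = s_400 - s_200
print(tail < 1e-50)  # prints True: the series has effectively converged
```

With unbounded $X$ of infinite mean, the same partial sums would instead be punctuated by enormous jumps along the subsequence from Lemma 1.8.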
We will now use these two lemmas to prove that (c) implies (b) in the Kesten-Stigum theorem. We
need one more tool, which is proved in [2].
Lemma 1.10. Suppose that $\mu$ and $\nu$ are probability measures and $\frac{d\mu}{d\nu}\big|_{\mathcal{F}_n} = X_n$. Let $X_\infty = \limsup_{n\to\infty}X_n$. Then
$$X_\infty < \infty\ \ \mu\text{-almost surely} \iff \mathbb{E}_\nu[X_\infty] = 1$$
and
$$X_\infty = \infty\ \ \mu\text{-almost surely} \iff \mathbb{E}_\nu[X_\infty] = 0.$$
We can now prove the Kesten-Stigum theorem.

Proposition 1.11. Let $W$ and $L$ be as in Theorem 1.6. Then:
$$\mathbb{E}[L\log^+ L]<\infty \iff \mathbb{E}[W] = 1$$
Proof. We first show that $1/W_n$ is a supermartingale under $\mathbb{Q}$. Recall from the definition that $\mathbb{E}_\mathbb{Q}[1/W_n\,|\,\mathcal{F}_{n-1}]$ is the (almost surely unique) $\mathcal{F}_{n-1}$-measurable random variable $Y$ such that $\mathbb{E}_\mathbb{Q}[\frac{1}{W_n}\mathbf{1}_A] = \mathbb{E}_\mathbb{Q}[Y\mathbf{1}_A]$ for all $A\in\mathcal{F}_{n-1}$. Now,
$$\mathbb{E}_\mathbb{Q}\!\left[\frac{1}{W_n}\mathbf{1}_A\right] = \mathbb{E}_\mathbb{P}[\mathbf{1}_A\mathbf{1}_{\{W_n>0\}}] = \mathbb{E}_\mathbb{P}[\mathbf{1}_A\mathbf{1}_{\{W_{n-1}>0\}}\mathbb{P}(W_n>0\,|\,\mathcal{F}_{n-1})] = \mathbb{E}_\mathbb{Q}\!\left[\frac{1}{W_{n-1}}\mathbf{1}_A\,\mathbb{P}(W_n>0\,|\,\mathcal{F}_{n-1})\right].$$
Therefore
$$\mathbb{E}_\mathbb{Q}\!\left[\frac{1}{W_n}\,\middle|\,\mathcal{F}_{n-1}\right] = \frac{1}{W_{n-1}}\mathbb{P}(W_n>0\,|\,\mathcal{F}_{n-1}) \leq \frac{1}{W_{n-1}}.$$
So $1/W_n$ is a non-negative supermartingale under $\mathbb{Q}$, so it converges almost surely to a limit $1/W_\infty$ (which may be 0). In particular, $\limsup_{n\to\infty}W_n = \liminf_{n\to\infty}W_n$, $\mathbb{Q}$-almost surely.
Now we demonstrate that $\limsup_{n\to\infty}\mathbb{E}_\mathbb{Q}[W_n\,|\,\mathcal{G}_\infty]$ is almost surely finite if $\mathbb{E}[L\log^+ L]<\infty$. From Proposition 1.7, we have that
$$\mathbb{E}_\mathbb{Q}[W_n\,|\,\mathcal{G}_\infty] \leq 1 + \sum_{i=0}^{n-1}\left(L_{\xi_i}-1\right)m^{-(i+1)} \leq 1 + \sum_{i=1}^{n}e^{\log^+\left(L_{\xi_{i-1}}-1\right)}m^{-i}$$
By Lemma 1.9 (applied with $c = m^{-1}$), this converges when $\mathbb{E}_\mathbb{Q}[\log^+(L_{\xi_i}-1)]\leq\mathbb{E}_\mathbb{Q}[\log^+(L_{\xi_i})]<\infty$. Now,
$$\mathbb{E}_\mathbb{Q}[\log^+(L_{\xi_i})] = \sum_{k=0}^{\infty}\mathbb{Q}(L_{\xi_i}=k)\log^+ k = \sum_{k=0}^{\infty}\frac{kp_k}{m}\log^+ k = \frac{1}{m}\mathbb{E}[L\log^+ L]$$
Hence, $\mathbb{E}[L\log^+ L]<\infty \implies \limsup\mathbb{E}_\mathbb{Q}[W_n\,|\,\mathcal{G}_\infty]<\infty$ almost surely. Now Fatou's lemma, combined with the fact (proven above) that $\limsup_{n\to\infty}W_n = \liminf_{n\to\infty}W_n$ $\mathbb{Q}$-almost surely, gives
$$\mathbb{E}_\mathbb{Q}[\limsup W_n\,|\,\mathcal{G}_\infty] = \mathbb{E}_\mathbb{Q}[\liminf W_n\,|\,\mathcal{G}_\infty] \leq \liminf\mathbb{E}_\mathbb{Q}[W_n\,|\,\mathcal{G}_\infty] \leq \limsup\mathbb{E}_\mathbb{Q}[W_n\,|\,\mathcal{G}_\infty] < \infty.$$
Therefore $\limsup W_n<\infty$ $\mathbb{Q}$-almost surely, so by Lemma 1.10 we have $\mathbb{E}_\mathbb{P}[W] = 1$.
2 A discrete-time branching process on the unit square
2.1 Preliminaries and heuristics
Our process begins with the unit square. It splits into two rectangles of area U and 1 − U respec-
tively, where U is a random variable uniform on [0, 1]. In each subsequent generation, each of the
rectangles splits in a manner similar to the original square. The orientation of these splits is not
relevant to the following results, so we assume that each split occurs either vertically or horizontally
with probability $\frac{1}{2}$. Figure 2 shows a simulated outcome of this process after ten generations.
A natural question to ask is: what size would we expect the smallest and largest rectangles to be
after n generations? There are many ways one could interpret the notion of size in this context;
for our purposes, we will be considering the area of each rectangle to be its size. After n ≥ 1
generations, the area of any given rectangle in the $n$th generation, here denoted $A_n$, can be written:
$$A_n = \prod_{k=1}^{n}U_k$$
where $(U_k : k\in\mathbb{N})$ is a sequence of iid $\mathrm{unif}(0,1)$ random variables. We will denote the set of rectangles in generation $n$ by $N_n$.
We will now use our simulation to plot the areas of the rectangles in each generation, and hopefully
provide some heuristic justification for the main result of this paper.
Figure 2: The process after 10 generations
Here we see what our intuition tells us: the area of the rectangles decreases exponentially quickly. This plot, however, is not particularly useful due to the exponential nature of the decay. To rectify this, we will plot the logarithm of the area of each rectangle.

Clearly, the decay of the log-area in this plot is linear, somewhat confirming our suspicion that the decay is exponential. It also appears that there are upper and lower boundary lines between which all of the rectangles fall. Over the course of this section, we shall not only demonstrate that the upper boundary line exists, but also give an explicit expression for its slope.
Before we do this, we shall construct a spine for this process in a manner very similar to that in
Section 1.3. We will henceforth refer to the rectangles as particles. As before, we shall define the
spine recursively. Define ξ0 as the root particle. For each ξk, uniformly select one of its children,
and set that particle to be ξk+1. Finally, define Ξ = {ξk|k ∈ N0}. This forms our spine.
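Before introducing the change of measure, note that the law of large numbers already pins down a single lineage: $\frac{1}{n}\log A_n = \frac{1}{n}\sum_{k=1}^n\log U_k \to \mathbb{E}[\log U] = -1$. A two-line check (our own illustration):

```python
import math
import random

rng = random.Random(3)
n = 2000
log_area = sum(math.log(rng.random()) for _ in range(n))  # log of prod U_k
print(round(log_area / n, 2))  # a typical lineage decays like E[log U] = -1
```

The maximal rectangle decays strictly more slowly than this typical rate, which is precisely the gap the rest of the section quantifies.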
2.2 Change of measure and Spine Decomposition
Definition 2.1. Important filtrations

1. $\mathcal{F}_n$ is the σ-algebra generated by the first $n$ generations of the process

2. $\mathcal{G}_n$ is the σ-algebra generated by the first $n$ generations of the spine

3. $\tilde{\mathcal{F}}_n = \sigma(\mathcal{F}_n\cup\mathcal{G}_n)$
Proposition 2.2. Let $\alpha>-1$. Then:

1. The process $W^{(\alpha)} = (W^{(\alpha)}_n)_{n\in\mathbb{N}}$ defined by $W^{(\alpha)}_n = \left(\frac{\alpha+1}{2}\right)^n\sum_{u\in N_n}A^\alpha_u$ is an $\mathcal{F}_n$-martingale

2. The process $M = (M_n)_{n\in\mathbb{N}}$ defined by $M_n = A^\alpha_{\xi_n}(\alpha+1)^n$ is an $\tilde{\mathcal{F}}_n$-martingale
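A quick simulation supports Proposition 2.2(1): averaging $W^{(\alpha)}_n$ over many runs of the splitting process returns a value near $W^{(\alpha)}_0 = 1$. (Note the pleasant special case $\alpha = 1$, where $W^{(1)}_n = \sum_u A_u = 1$ identically, since the rectangles always tile the unit square.) A sketch with the arbitrary parameters $\alpha = 0.5$, $n = 8$:

```python
import random

def w_alpha(n, alpha, rng):
    """One realisation of W_n^(alpha) = ((alpha+1)/2)^n * sum_u A_u^alpha."""
    areas = [1.0]
    for _ in range(n):
        nxt = []
        for a in areas:
            u = rng.random()                 # split each rectangle into U and 1-U
            nxt.extend((a * u, a * (1.0 - u)))
        areas = nxt
    return ((alpha + 1) / 2) ** n * sum(a ** alpha for a in areas)

rng = random.Random(11)
alpha, n, trials = 0.5, 8, 2000
est = sum(w_alpha(n, alpha, rng) for _ in range(trials)) / trials
print(round(est, 1))  # should be near E[W_n^(alpha)] = 1
```

The orientation of each split is irrelevant here, exactly as remarked in Section 2.1: only the areas enter $W^{(\alpha)}_n$.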
We now define a new measure $\mathbb{Q}$ by setting $\left.\frac{d\mathbb{Q}}{d\mathbb{P}}\right|_{\tilde{\mathcal{F}}_n} = M_n$. In the following propositions, we will show that the spine decomposition converges $\mathbb{Q}$-almost surely to a finite limit. First, we need to consider how our process changes under this new measure.
Lemma 2.3. Let $u\in N_n$. Then for all $x\in[0,1]$ and $\beta\in\mathbb{N}$:

(i) $\mathbb{Q}(U_u\leq x) = \begin{cases}x^{\alpha+1} & u\in\Xi\\ x & u\notin\Xi\end{cases}$

(ii) $\mathbb{E}_\mathbb{Q}[U^\beta_u] = \begin{cases}\frac{\alpha+1}{\alpha+\beta+1} & u\in\Xi\\ \frac{1}{\beta+1} & u\notin\Xi\end{cases}$

(iii) $\mathbb{E}_\mathbb{Q}[\log U_{\xi_n}] = -\frac{1}{\alpha+1}$
α+1
Proof. Consider the case that u ∈ Ξ. Then u = ξn, and:
Q(Uξn ≤ x) = E[Mn|Uξn ≤ x] × P(Uξn ≤ x)
= x(α + 1)n
× E[Aα
ξn−1
Uα
ξn
|Uξn ≤ x]
= x(α + 1)n
× E[Aα
ξn−1
] × E[Uα
ξn
|Uξn ≤ x]
= x(α + 1)n
×
1
(a + 1)n−1
×
x
0
1
x
xα
dx
= x(α + 1) ×
1
x
×
xα+1
α + 1
= xα+1
Define $g_u(x)$ to be the pdf of $U_u$ under $\mathbb{Q}$. Let $u = \xi_n$. We now know that
$$g_{\xi_n}(x) := \frac{d\,\mathbb{Q}(U_{\xi_n}\leq x)}{dx} = (\alpha+1)x^\alpha$$
Hence,
$$\mathbb{E}_\mathbb{Q}[U^\beta_u] = (\alpha+1)\int_0^1 x^{\alpha+\beta}\,dx = \frac{\alpha+1}{\alpha+\beta+1}$$
as required. The results for $u\notin\Xi$ are trivial. Finally, we have:
$$\mathbb{E}_\mathbb{Q}[\log U_{\xi_n}] = (\alpha+1)\int_0^1\log(x)\,x^\alpha\,dx = -\frac{\alpha+1}{(\alpha+1)^2} = -\frac{1}{\alpha+1}$$
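Lemma 2.3 also admits an easy sanity check: part (i) says the spine's $U$ has CDF $x^{\alpha+1}$, so it can be sampled by inverse transform as $V^{1/(\alpha+1)}$ with $V$ uniform, and the sample moments should match (ii) and (iii). A sketch with the arbitrary choices $\alpha = 1$, $\beta = 2$:

```python
import math
import random

rng = random.Random(5)
alpha, beta = 1.0, 2.0
n = 100000
# Inverse transform: if V ~ unif(0,1), then V**(1/(alpha+1)) has CDF x^(alpha+1).
us = [rng.random() ** (1.0 / (alpha + 1)) for _ in range(n)]

moment = sum(u ** beta for u in us) / n      # target (alpha+1)/(alpha+beta+1) = 0.5
mean_log = sum(math.log(u) for u in us) / n  # target -1/(alpha+1) = -0.5
print(round(moment, 2), round(mean_log, 2))
```

The same transform is what one would use to simulate the spine directly under $\mathbb{Q}$.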
Proposition 2.4. The Spine Decomposition
Let $\alpha\in[0,1]$. Then:
$$\mathbb{E}_\mathbb{Q}\!\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right] = A^\alpha_{\xi_n}\left(\frac{\alpha+1}{2}\right)^n + \frac{\alpha+1}{2}\sum_{k=0}^{n-1}\left(\frac{\alpha+1}{2}\right)^k\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha$$
Proof. Let us consider what particles exist at time $n$. A trivial examination of this tree structure reveals that for any $k<n$, there will be $2^{n-k-1}$ particles alive at time $n$ whose last spinal ancestor was $\xi_k$. We will denote the set of non-spine descendants of $\xi_k$ alive at time $n$ by $C_n(\xi_k)$. Each $u\in C_n(\xi_k)$ has size distribution $A_{\xi_k}\times\prod_{i=1}^{n-k}U_i$, where the $U_i$'s are uniform$(0,1)$ random variables.

Now, we can once again use the binary structure of the process to remove a degree of randomness from this expression. $\xi_k$ has two children. One of these children is in the spine, and hence contributes no descendants to $C_n(\xi_k)$. However, the other child has size $A_{\xi_k}-A_{\xi_{k+1}}$, and its descendants at time $n$ form precisely the set $C_n(\xi_k)$. As such, each $u\in C_n(\xi_k)$ has size distribution $\left(A_{\xi_k}-A_{\xi_{k+1}}\right)\times\prod_{i=1}^{n-k-1}U_i$. The keen observer will notice, however, that $\left|\bigcup_k C_n(\xi_k)\right| = 2^n-1$; we are missing $\xi_n$, which trivially has size $A_{\xi_n}$. This completes our characterisation of the particles in $N_n$. Hence:
$$\mathbb{E}_\mathbb{Q}\!\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right] = \left(\frac{\alpha+1}{2}\right)^n\mathbb{E}_\mathbb{Q}\!\left[A^\alpha_{\xi_n}\,\middle|\,\mathcal{G}_\infty\right] + \left(\frac{\alpha+1}{2}\right)^n\sum_{k=0}^{n-1}2^{n-k-1}\,\mathbb{E}_\mathbb{Q}\!\left[\left(\left(A_{\xi_k}-A_{\xi_{k+1}}\right)\prod_{i=1}^{n-k-1}U_i\right)^{\!\alpha}\,\middle|\,\mathcal{G}_\infty\right]$$
Now, the $A_{\xi_k}$'s are $\mathcal{G}_\infty$-measurable, so we can take them out of the expectations. The $U_i$'s are also independent of $\mathcal{G}_\infty$, so we simply take their $\mathbb{Q}$-expectations: these are non-spine variables, so by Lemma 2.3(ii) with $\beta=\alpha$ we have $\mathbb{E}_\mathbb{Q}[U^\alpha_i] = \frac{1}{\alpha+1}$. Hence our full expression for the spine decomposition is
$$\mathbb{E}_\mathbb{Q}\!\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right] = \left(\frac{\alpha+1}{2}\right)^n A^\alpha_{\xi_n} + \left(\frac{\alpha+1}{2}\right)^n\sum_{k=0}^{n-1}\left(\frac{2}{\alpha+1}\right)^{n-k-1}\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha$$
Some simple algebraic manipulation gives the required result.
Proposition 2.5. Set $\alpha$ such that $\left(\frac{\alpha+1}{2}\right)e^{-\frac{\alpha}{\alpha+1}}<1$. Then:
$$\limsup_{n\to\infty}\mathbb{E}_\mathbb{Q}\!\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right]<\infty$$
almost surely.
Proof. First we will consider the convergence of $\sum_{k=0}^{\infty}\left(\frac{\alpha+1}{2}\right)^k\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha$. We will first rewrite this as
$$\sum_{k=0}^{\infty}\left(\frac{\alpha+1}{2}\,e^{\frac{\alpha}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right)}\right)^{k}$$
We have that:
$$\frac{1}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right) = \frac{1}{k}\log\left(\left(1-U_{\xi_{k+1}}\right)\prod_{i=1}^{k}U_{\xi_i}\right) = \frac{\log\left(1-U_{\xi_{k+1}}\right)}{k} + \frac{1}{k}\sum_{i=1}^{k}\log(U_{\xi_i})$$
We now have two summands to consider. By the SLLN, $\frac{1}{k}\sum_{i=1}^{k}\log U_{\xi_i}\to\mathbb{E}_\mathbb{Q}[\log U_{\xi_i}]$ $\mathbb{Q}$-almost surely. By Lemma 2.3 we know that this is $-\frac{1}{\alpha+1}$. We will now consider the convergence of $\frac{1}{k}\log(1-U_{\xi_{k+1}})$. By Lemmas 1.8 and 2.3, we have that $-\frac{1}{k}\log(1-U_{\xi_{k+1}})$ converges to 0. Hence we also have $\frac{1}{k}\log(1-U_{\xi_{k+1}})\to 0$, $\mathbb{Q}$-almost surely. This gives us a precise limit:
$$\lim_{k\to\infty}\frac{1}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right) = -\frac{1}{\alpha+1}$$
Hence, we have that
$$\lim_{k\to\infty}\frac{\alpha+1}{2}\exp\left(\frac{\alpha}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right)\right) = \frac{\alpha+1}{2}\,e^{-\frac{\alpha}{\alpha+1}}$$
We now have that the sum $\sum_{k=0}^{\infty}\left(\frac{\alpha+1}{2}\right)^k\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha$ converges $\mathbb{Q}$-almost surely if $\left(\frac{\alpha+1}{2}\right)e^{-\frac{\alpha}{\alpha+1}}<1$. It remains now to find which values of $\alpha$ ensure the convergence of $A^\alpha_{\xi_n}\left(\frac{\alpha+1}{2}\right)^n$. Now:
$$\lim_{n\to\infty}\frac{1}{n}\log(A_{\xi_n}) = \lim_{n\to\infty}\frac{1}{n}\log\prod_{i=1}^{n}U_{\xi_i} = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\log(U_{\xi_i}) = \mathbb{E}_\mathbb{Q}[\log U_{\xi_n}] = -\frac{1}{\alpha+1}$$
Hence, similar to before,
$$\limsup_{n\to\infty}\left(\frac{\alpha+1}{2}\right)^n A^\alpha_{\xi_n} = \limsup_{n\to\infty}\left(\frac{\alpha+1}{2}\,e^{\frac{\alpha}{n}\log(A_{\xi_n})}\right)^{n}$$
which is finite iff $\left(\frac{\alpha+1}{2}\right)e^{-\frac{\alpha}{\alpha+1}}<1$. This fact, combined with the convergence of the aforementioned sum under the same condition, gives us our required condition for convergence.
2.3 Asymptotic growth of the maximal particle
The following three lemmas are essential in proving our main result.
Lemma 2.6. Set $\alpha$ such that $\left(\frac{\alpha+1}{2}\right)e^{-\frac{\alpha}{\alpha+1}}<1$. Then the limit $W^{(\alpha)}_\infty := \lim_{n\to\infty}W^{(\alpha)}_n$ exists $\mathbb{Q}^{(\alpha)}$-almost surely, and
$$\mathbb{E}\!\left[\lim_{n\to\infty}W^{(\alpha)}_n\right] = 1$$
Proof. We first use the proof of Proposition 1.11 to deduce that $\limsup_{n\to\infty}W^{(\alpha)}_n = \liminf_{n\to\infty}W^{(\alpha)}_n$, $\mathbb{Q}$-almost surely. We now have
$$\mathbb{E}_\mathbb{Q}[\limsup W^{(\alpha)}_n\,|\,\mathcal{G}_\infty] = \mathbb{E}_\mathbb{Q}[\liminf W^{(\alpha)}_n\,|\,\mathcal{G}_\infty] \leq \liminf\mathbb{E}_\mathbb{Q}[W^{(\alpha)}_n\,|\,\mathcal{G}_\infty] \leq \limsup\mathbb{E}_\mathbb{Q}[W^{(\alpha)}_n\,|\,\mathcal{G}_\infty] < \infty$$
from a combination of Fatou's Lemma and Proposition 2.5. Therefore, $\lim W^{(\alpha)}_n<\infty$ $\mathbb{Q}$-almost surely, and by Lemma 1.10 we have that $\mathbb{E}\!\left[\lim_{n\to\infty}W^{(\alpha)}_n\right] = 1$ as required.
Lemma 2.7. Let $A\in\mathcal{F}_n$. Then:
$$\mathbb{Q}^{(\alpha)}(A) = \mathbb{E}_\mathbb{P}\!\left[W^{(\alpha)}_\infty\mathbf{1}_A\right] + \mathbb{P}\!\left(A\cap\left\{W^{(\alpha)}_\infty = \infty\right\}\right)$$
The proof is a standard result of measure theory.
Lemma 2.8. Let $E_\delta$ be the event $\left\{\sup_{u\in N_n}\frac{1}{n}\log A_u>\delta\ \text{i.o.}\right\}$. Then $\mathbb{P}(E_\delta)$ is either 0 or 1.
Proof. We shall consider the probability of the complement of $E_\delta$ given the filtration $\mathcal{F}_k$. Writing "ev." for "eventually" (that is, for all sufficiently large $n$), and using the independence of the subtrees rooted at the particles of $N_k$, we have for any $\epsilon>0$:
$$\mathbb{P}(E^c_\delta\,|\,\mathcal{F}_k) = \mathbb{P}\!\left(\forall v\in N_k,\ \sup_{u\in N_n,\,v\leq u}\frac{\log A_u}{n}\leq\delta\ \text{ev.}\,\middle|\,\mathcal{F}_k\right) = \prod_{v\in N_k}\mathbb{P}\!\left(\sup_{u\in N_n,\,v\leq u}\frac{\log A_u}{n}\leq\delta\ \text{ev.}\,\middle|\,\mathcal{F}_k\right)$$
Each subtree rooted at $v\in N_k$ is a copy of the original process, started from area $A_v$ and delayed by $k$ generations. Since $\frac{1}{n}\log A_v\to 0$ and $\frac{n+k}{n}\to 1$, absorbing these vanishing corrections costs at most $\epsilon$ in the threshold:
$$\mathbb{P}(E^c_\delta\,|\,\mathcal{F}_k) \leq \prod_{v\in N_k}\mathbb{P}\!\left(\sup_{u\in N_n}\frac{\log A_u}{n}\leq\delta+\epsilon\ \text{ev.}\right) = \mathbb{P}\!\left(E^c_{\delta+\epsilon}\right)^{|N_k|}$$
To summarise, we now have that $\mathbb{P}(E^c_\delta\,|\,\mathcal{F}_k)\leq\mathbb{P}(E^c_{\delta+\epsilon})^{|N_k|}$. We can now take the limit as $\epsilon\to 0$ and take the expectation of both sides to obtain $\mathbb{P}(E^c_\delta)\leq\mathbb{P}(E^c_\delta)^{2^k}$. Since this holds for every $k$, $\mathbb{P}(E^c_\delta)$ must be 0 or 1, which proves the result.
The following definition and lemma are purely technical, serving only to simplify the statement of
the main theorem.
Definition 2.9. The Lambert W Function
Let $z$ be any complex number. Then $W$ is the unique function satisfying the equation
$$z = W(z)e^{W(z)}$$
Lemma 2.10.
$$\min_{x>0}\frac{1}{x}\log\left(\frac{2}{x+1}\right) = W\!\left(-\frac{1}{2e}\right)$$
Proof. Let $f(x) = \frac{1}{x}\log\frac{2}{x+1}$. Simple calculus shows that
$$f'(x) = -\frac{1}{x^2}\log\left(\frac{2}{x+1}\right) - \frac{1}{x(x+1)}$$
Setting $f'(x) = 0$, we obtain
$$\frac{1}{x}\log\left(\frac{2}{x+1}\right) = -\frac{1}{x+1}\qquad(1)$$
Let $x^*$ denote the solution to this equation, and let $f^*$ denote the minimum of $f$. Then clearly $f^*$ is given by
$$f^* = f(x^*) = -\frac{1}{x^*+1}$$
Rearranging (1), we can obtain
$$-\frac{1}{2e} = -\frac{1}{x^*+1}\,e^{-\frac{1}{x^*+1}}$$
Hence $f^*$ is the solution to the equation
$$-\frac{1}{2e} = f^*e^{f^*}$$
The definition of the Lambert W Function says that for any real number $z$, we have $z = W(z)e^{W(z)}$. Therefore, $f^* = W\!\left(-\frac{1}{2e}\right)$.
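Lemma 2.10 can be checked numerically: solve $we^w = -\frac{1}{2e}$ for the principal branch by bisection on $[-1,0]$ (where $w\mapsto we^w$ is increasing), and compare against a brute-force grid minimisation of $f$. Both come out at about $-0.232$:

```python
import math

target = -1.0 / (2.0 * math.e)

# Bisection for the principal branch of the Lambert W function on [-1, 0].
lo, hi = -1.0, 0.0
for _ in range(80):
    mid = (lo + hi) / 2.0
    if mid * math.exp(mid) < target:
        lo = mid
    else:
        hi = mid
w = (lo + hi) / 2.0

f = lambda x: math.log(2.0 / (x + 1.0)) / x
f_min = min(f(k / 10000.0) for k in range(1, 100000))  # grid search over (0, 10)
print(round(w, 4), round(f_min, 4))  # both approximately -0.232
```

This is the slope of the upper boundary line observed in the simulation of Section 2.1.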
Theorem 2.11. Let both $A_u$ and $N_n$ be defined as in Section 2.1. Then
$$\limsup_{n\to\infty}\max_{u\in N_n}\frac{\log A_u}{n} = W\!\left(-\frac{1}{2e}\right)$$
almost surely.
Proof. This proof is split into two parts, in which we will derive both upper and lower bounds for the limit. We will begin with the former. Fix some $\gamma\in\mathbb{R}$ and $\alpha>0$, and suppose that there exists some particle $u\in N_n$ such that $\frac{\log A_u}{n}>\gamma$. Then,
$$W^{(\alpha)}_n = \left(\frac{\alpha+1}{2}\right)^n\sum_{v\in N_n}A^\alpha_v \geq \left(\frac{\alpha+1}{2}\right)^n A^\alpha_u > \left(\frac{\alpha+1}{2}\right)^n e^{\alpha\gamma n}$$
Now, since $W^{(\alpha)}_n$ is a non-negative martingale, it converges almost surely to a finite limit. Hence, if $\frac{\alpha+1}{2}e^{\alpha\gamma}>1$, the event $\left\{\max_{u\in N_n}\frac{\log A_u}{n}>\gamma\right\}$ can happen only finitely often. Rewriting this condition, we see that for every $\alpha>0$, $\max_{u\in N_n}\frac{\log A_u}{n}>\gamma$ happens only finitely often whenever $\gamma>\frac{1}{\alpha}\log\frac{2}{\alpha+1}$. This bound depends on $\alpha$, so we optimise over $\alpha$ via Lemma 2.10, giving us $\min_{\alpha>0}\frac{1}{\alpha}\log\frac{2}{\alpha+1} = W\!\left(-\frac{1}{2e}\right)$. Therefore, we have
$$\limsup_{n\to\infty}\max_{u\in N_n}\frac{\log A_u}{n}\leq W\!\left(-\frac{1}{2e}\right)$$
almost surely.
Now, we will use our spine decomposition to provide the lower bound. Let $W^{(\alpha)}_\infty := \limsup W^{(\alpha)}_n$. By Lemma 2.6 we have that $\mathbb{E}[W^{(\alpha)}_\infty] = 1$. Hence $\mathbb{P}(W^{(\alpha)}_\infty = \infty) = 0$. Now, combining this with Lemma 2.7, we have
$$\mathbb{Q}^{(\alpha)}(A) = \mathbb{E}_\mathbb{P}\!\left[W^{(\alpha)}_\infty\mathbf{1}_A\right]$$
Let $\epsilon>0$ and let $E$ be the event $\left\{\sup_{u\in N_n}\frac{1}{n}\log A_u > -\frac{1}{\alpha+1}-\epsilon\ \text{i.o.}\right\}$. In Proposition 2.5, we showed that $\lim_{n\to\infty}\frac{1}{n}\log A_{\xi_n} = -\frac{1}{\alpha+1}$, $\mathbb{Q}^{(\alpha)}$-almost surely. The following sequence of set inclusions will make it obvious that $\mathbb{Q}^{(\alpha)}(E) = 1$:
$$\left\{\frac{1}{n}\log A_{\xi_n}\to-\frac{1}{\alpha+1}\right\} \subseteq \left\{\frac{1}{n}\log A_{\xi_n}>-\frac{1}{\alpha+1}-\epsilon\ \text{i.o.}\right\} \subseteq \left\{\sup_{u\in N_n}\frac{1}{n}\log A_u>-\frac{1}{\alpha+1}-\epsilon\ \text{i.o.}\right\} = E$$
We also have that
$$\mathbb{Q}^{(\alpha)}(E) = \mathbb{E}_\mathbb{P}\!\left[W^{(\alpha)}_\infty\mathbf{1}_E\right]$$
Hence $\mathbb{P}(E)>0$. Finally, by Lemma 2.8 we have that $\mathbb{P}(E) = 0$ or 1. We have just shown that $\mathbb{P}(E)>0$, therefore we necessarily have that $\mathbb{P}(E) = 1$. Choosing $\alpha = x^*$ as in Lemma 2.10, so that $-\frac{1}{\alpha+1} = W\!\left(-\frac{1}{2e}\right)$, and letting $\epsilon\to 0$, we obtain
$$\limsup_{n\to\infty}\sup_{u\in N_n}\frac{\log A_u}{n}\geq W\!\left(-\frac{1}{2e}\right)$$
proving our result.
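Finally, the theorem's constant can be compared against a brute-force simulation of the full binary tree (our own check, not part of the original analysis). Convergence in $n$ is slow and only modest $n$ is feasible, since the tree has $2^n$ leaves, so only loose agreement should be expected: the maximal rate sits well above the typical rate $-1$ and in the vicinity of $W(-\frac{1}{2e})\approx-0.232$.

```python
import math
import random

rng = random.Random(2)
n = 18
logs = [0.0]  # log-areas of the current generation of rectangles
for _ in range(n):
    nxt = []
    for la in logs:
        u = rng.random()
        nxt.append(la + math.log(u))        # child of area A * U
        nxt.append(la + math.log(1.0 - u))  # child of area A * (1 - U)
    logs = nxt

rate = max(logs) / n
print(round(rate, 2))  # near W(-1/(2e)) = -0.232...; typical particles are near -1
```

Working with log-areas rather than areas avoids underflow, since a typical rectangle at generation 18 has area around $e^{-18}$.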
References
[1] H. Kesten and B. P. Stigum. A limit theorem for multidimensional Galton-Watson processes. Ann. Math. Statist., 37(5):1211-1223, 1966.
[2] R. Lyons and Y. Peres. Probability on Trees and Networks. Cambridge University Press, 2016.
Statistical thermodynamics lecture notes.pdf
 

Bath_IMI_Summer_Project

where $Z_n$ is the number of descendants in the $n$th generation, and $L^{(n)}_i$ is the number of children produced by the $i$th descendant in generation $n$. We will use the notation $|u| = n$ to indicate that particle $u$ belongs to generation $n$. We also denote the mean of the process by $m := E[L]$.

1.2 Size-biasing

Galton-Watson trees can also be endowed with a spine, which we construct as follows. Label the root of the tree $\xi_0$. For each $i \in \mathbb{N} \cup \{0\}$, uniformly select one of the children of $\xi_i$ and label it $\xi_{i+1}$. The sequence $\Xi := (\xi_n,\ n \in \mathbb{N} \cup \{0\})$ is called the spine. We denote the number of children of $\xi_i$ by $L_{\xi_i}$. In the next few sections, we define the filtrations and martingales used in the study of GW trees with spines.

Definition 1.1. For all $n \in \mathbb{N}$, define:

1. $W_n := \dfrac{Z_n}{m^n}$

2. $M_n := \dfrac{1}{m^n} \prod_{i=0}^{n-1} L_{\xi_i}$

Definition 1.2. For all $n \in \mathbb{N}$, let:

1. $\mathcal{F}_n$ be the $\sigma$-algebra generated by the first $n$ generations of the process

2. $\mathcal{G}_n$ be the $\sigma$-algebra generated by the first $n$ spinal particles, and the children of the first $n-1$

3. $\tilde{\mathcal{F}}_n := \sigma(\mathcal{F}_n \cup \mathcal{G}_n)$

Proposition 1.3.

1. The process $W := (W_n,\ n \in \mathbb{N})$ is a non-negative $\mathcal{F}_n$-martingale.

2. The process $M := (M_n,\ n \in \mathbb{N})$ is a non-negative $\tilde{\mathcal{F}}_n$-martingale.

Proof of (1). We have:
$$E\left[\frac{Z_n}{m^n}\,\middle|\,\mathcal{F}_{n-1}\right] = E\left[\frac{1}{m^n}\sum_{i=1}^{Z_{n-1}} L^{(n-1)}_i\,\middle|\,\mathcal{F}_{n-1}\right] = \frac{1}{m^n}\sum_{i=1}^{Z_{n-1}} E\left[L^{(n-1)}_i\,\middle|\,\mathcal{F}_{n-1}\right] = \frac{1}{m^n}\sum_{i=1}^{Z_{n-1}} m = \frac{Z_{n-1}}{m^{n-1}}.$$

Hence $W_n$ is a martingale. Non-negativity is trivial.

Proof of (2). We have:

$$E[M_n\,|\,\tilde{\mathcal{F}}_{n-1}] = \frac{1}{m^n}\, E\left[\prod_{i=0}^{n-1} L_{\xi_i}\,\middle|\,\tilde{\mathcal{F}}_{n-1}\right] = \frac{1}{m^n}\times\prod_{i=0}^{n-2} L_{\xi_i}\times E[L_{\xi_{n-1}}\,|\,\tilde{\mathcal{F}}_{n-1}] = \frac{1}{m^{n-1}}\prod_{i=0}^{n-2} L_{\xi_i} = M_{n-1}.$$

Once again, non-negativity is trivial.

We will now use the martingale $M_n$ to define a new probability measure $Q$ by setting:

$$\frac{dQ}{dP}\bigg|_{\tilde{\mathcal{F}}_n} = M_n.$$

The following lemma and subsequent proposition will allow us to visualise how GW trees behave under this new measure.

Lemma 1.4. $E[M_n\,|\,\mathcal{F}_n] = W_n$.

Proof. From the definitions, we immediately have that $E[M_n|\mathcal{F}_n] = \frac{1}{m^n} E[\prod_{i=0}^{n-1} L_{\xi_i}\,|\,\mathcal{F}_n]$. We now sum over the indicator of $j \in \Xi$:

$$\frac{1}{m^n}\, E\left[\sum_{j=1}^{Z_n}\prod_{i=0}^{n-1} L^{(i)}_j\, \mathbb{1}_{\{j\in\Xi\}}\,\middle|\,\mathcal{F}_n\right],$$

where $L^{(i)}_j$ denotes the number of children of the generation-$i$ ancestor of $j$. Now, both $Z_n$ and the $L^{(i)}_j$ are $\mathcal{F}_n$-measurable, and the indicator of $j\in\Xi$ is independent of $\mathcal{F}_n$. Hence this reduces to
$$\frac{1}{m^n}\sum_{j=1}^{Z_n}\prod_{i=0}^{n-1} L^{(i)}_j\; P(j\in\Xi).$$

The probability of any given $j$ with $|j|=n$ being a spinal particle is $\prod_{i=0}^{n-1} \big(L^{(i)}_j\big)^{-1}$, since the spine chooses uniformly among each ancestor's children. The two products now cancel out, leaving us with

$$\frac{1}{m^n}\sum_{j=1}^{Z_n} 1 = \frac{Z_n}{m^n} = W_n.$$

Proposition 1.5. Let $|u| = n$. Then

$$Q\big(L^{(n)}_u = k\big) = \begin{cases} \dfrac{k p_k}{m} & u \in \Xi \\[4pt] p_k & u \notin \Xi \end{cases}$$

Proof. Consider $u \in \Xi$. That is, $u = \xi_n$. Hence $Q(L^{(n)}_u = k) = Q(L_{\xi_n} = k) = E[M_{n+1}\mathbb{1}_{\{L_{\xi_n}=k\}}]$. The law of total expectation tells us that

$$E[M_{n+1}\mathbb{1}_{\{L_{\xi_n}=k\}}] = E[M_{n+1}\,|\,L_{\xi_n}=k]\times P(L_{\xi_n}=k) = E\left[\frac{k}{m^{n+1}}\prod_{i=0}^{n-1} L_{\xi_i}\right]\times p_k = \frac{k p_k}{m^{n+1}}\prod_{i=0}^{n-1} E[L_{\xi_i}] = \frac{k p_k}{m^{n+1}}\times m^n = \frac{k p_k}{m}.$$

We now consider $u \notin \Xi$. This gives us $Q(L^{(n)}_u = k) = E[M_{n+1}\mathbb{1}_{\{L^{(n)}_u = k\}}]$. Since $u \notin \Xi$, $M_{n+1}$ and the indicator function are independent. We can use this to reduce the expression to $E[M_{n+1}]\,P(L^{(n)}_u = k) = E[M_{n+1}]\,p_k$. We have as a corollary to Lemma 1.4 that $E[M_{n+1}] = E[W_{n+1}] = 1$, completing the proof.

This proposition tells us that the offspring of particles in the spine follow a size-biased distribution, whereas the other particles behave in the usual manner. In particular, the probability of any spinal particle having no children is 0. Since the root of the tree is in the spine, this tells us that under $Q$ the event of extinction almost surely does not occur. Lyons and Peres [2] call GWTs under the law $Q$ size-biased Galton-Watson trees. An example of a size-biased GWT is given in Figure 1, where each non-spine particle forms the root of an independent GWT.
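The construction above is easy to simulate generation by generation. The sketch below is illustrative, not from the paper: the offspring law `p` and all names are arbitrary choices. Under $Q$, the one spine particle in each generation reproduces with the size-biased law $q_k = k p_k / m$ of Proposition 1.5, and every other particle reproduces with $p$; since $q_0 = 0$, the population never dies out.

```python
import random

# Offspring law p_k (an illustrative choice; any supercritical law works).
p = {0: 0.25, 1: 0.25, 2: 0.5}
m = sum(k * pk for k, pk in p.items())            # mean offspring number, here 1.25

# Size-biased offspring law for spine particles (Proposition 1.5): q_k = k p_k / m.
q = {k: k * pk / m for k, pk in p.items() if k > 0}

def sample(law, rng):
    """Draw one offspring count from a finite law given as {k: probability}."""
    return rng.choices(list(law), weights=list(law.values()))[0]

def size_biased_generation_sizes(n, rng):
    """Grow n generations under Q: the spine particle reproduces with law q,
    all other particles with law p. Returns the generation sizes Z_0..Z_n."""
    sizes, others = [1], 0                         # generation 0 is the root (= spine)
    for _ in range(n):
        spine_kids = sample(q, rng)                # spine never dies: q_0 = 0
        other_kids = sum(sample(p, rng) for _ in range(others))
        others = (spine_kids - 1) + other_kids     # one child continues the spine
        sizes.append(spine_kids + other_kids)
    return sizes

rng = random.Random(1)
sizes = size_biased_generation_sizes(10, rng)
assert all(z >= 1 for z in sizes)                  # no extinction under Q
```

For the law above, $q_1 = 0.2$ and $q_2 = 0.8$, so the spine is a fairly fertile line of descent, matching the picture of Figure 1.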
Figure 1: An example tree after 4 generations, with spine $\xi_0, \xi_1, \xi_2, \xi_3$ and independent GW subtrees attached to the non-spine children.

1.3 Spine Decomposition and the Kesten-Stigum theorem

We will now use the properties of size-biased trees to prove Kesten and Stigum's classic limit theorem, stated below.

Theorem 1.6 (The Kesten-Stigum Theorem [1]). Let $L$ be the offspring random variable of a Galton-Watson process with mean $m \in (1,\infty)$, extinction probability $q$, and martingale limit $W$. Then the following are equivalent:

a. $P(W = 0) = q$

b. $E[W] = 1$

c. $E[L \log^+ L] < \infty$

Proposition 1.7 (Spine Decomposition). For all $n \ge 1$,

$$E_Q[W_n\,|\,\mathcal{G}_\infty] = \sum_{i=0}^{n-1} (L_{\xi_i}-1)\, m^{-(i+1)} + m^{-n}.$$

Proof. We will prove this result inductively. Since the root is also the first particle in the spine, we have that $Z_1 = L_{\xi_0}$. Therefore $E_Q[W_1\,|\,\mathcal{G}_\infty] = \frac{1}{m}L_{\xi_0}$, which agrees with the claimed formula since $(L_{\xi_0}-1)m^{-1} + m^{-1} = L_{\xi_0}/m$. This settles the base case. Now, we will consider $E_Q[W_{k+1}\,|\,\mathcal{G}_\infty]$ for some $k \ge 1$, which we will denote $E_{k+1}$. From the definitions of $W_n$ and $Z_n$, we have

$$E_{k+1} = \frac{1}{m^{k+1}}\, E_Q\left[\sum_{i=1}^{Z_k} L^{(k)}_i \,\middle|\, \mathcal{G}_\infty\right]$$
Now, under $Q$ we know that there is exactly one spine particle in generation $k$, specifically $\xi_k$. Removing $\xi_k$'s children from the sum gives

$$\frac{1}{m^{k+1}}\, E_Q\left[\sum_{i=1}^{Z_k - 1} L^{(k)}_i + L_{\xi_k}\,\middle|\,\mathcal{G}_\infty\right],$$

where the sum now runs over the $Z_k - 1$ non-spine particles. We can now recall that $L_{\xi_k}$ is $\mathcal{G}_\infty$-measurable, and use the iid nature of the $L^{(k)}_i$'s to rewrite this as

$$\frac{1}{m^{k+1}}\Big[\big(E_Q[Z_k\,|\,\mathcal{G}_\infty] - 1\big)\, m + L_{\xi_k}\Big] = \frac{1}{m^{k+1}}\Big[\big(m^k E_k - 1\big)\, m + L_{\xi_k}\Big].$$

The remaining steps are simply an exercise in algebraic manipulation. This proves the result.

Lemma 1.8. Let $X_1, X_2, \ldots$ be a sequence of non-negative iid random variables. Then

$$\limsup_{n\to\infty} \frac{X_n}{n} = \begin{cases} 0 & \text{if } E[X_1] < \infty \\ \infty & \text{if } E[X_1] = \infty \end{cases}$$

almost surely.

Proof. We aim to use the Borel-Cantelli lemma to show that $X_n/n$ exceeds any fixed $\varepsilon > 0$ only finitely often. To do this, we consider the event $\{\frac{X_n}{n} \ge \varepsilon\}$. Since the $X_n$'s are iid, this event has the same probability as $\{\frac{X_1}{n} \ge \varepsilon\}$. We have:

$$\sum_{k=1}^{\infty} P\left(\frac{X_1}{k} \ge \varepsilon\right) = \sum_{k=1}^{\infty} P\left(\frac{X_1}{\varepsilon} \ge k\right) = \sum_{k=1}^{\infty} E\big[\mathbb{1}_{\{X_1/\varepsilon \ge k\}}\big] = E\left[\sum_{k=1}^{\infty} \mathbb{1}_{\{X_1/\varepsilon \ge k\}}\right] \le E[X_1/\varepsilon].$$

Now by the Borel-Cantelli lemma, if $E[X_1] < \infty$ then the event $\{X_n/n \ge \varepsilon\}$ happens only finitely often for any $\varepsilon > 0$ (no matter how small), so $X_n/n \to 0$. On the other hand, by the second Borel-Cantelli lemma, if $E[X_1] = \infty$, then the event $\{X_n/n \ge \varepsilon\}$ happens infinitely often for any $\varepsilon > 0$ (no matter how large), so $\limsup_{n\to\infty} X_n/n = \infty$.

Lemma 1.9. Let $X_1, X_2, \ldots$ be a sequence of non-negative iid random variables. Then for any $c \in (0,1)$,

$$\sum_{k=1}^{\infty} e^{X_k} c^k \begin{cases} < \infty & \text{if } E[X_1] < \infty \\ = \infty & \text{if } E[X_1] = \infty \end{cases}$$

almost surely.

Proof. Suppose first that $E[X_1] < \infty$. By Lemma 1.8, we have that $\limsup e^{X_k/k} = e^0 = 1$. Using the definition of limsup, we have that for all $\varepsilon > 0$ there exists an $M$ such that $e^{X_k/k} \le 1+\varepsilon$, i.e. $e^{X_k} \le (1+\varepsilon)^k$, for each $k \ge M$. To prove our result, fix $c \in (0,1)$ and choose $\varepsilon > 0$ such that $c(1+\varepsilon) < 1$. Then select an $M$ as above. Now:

$$\sum_{k=M}^{\infty} e^{X_k} c^k \le \sum_{k=M}^{\infty} (1+\varepsilon)^k c^k.$$

Since $(1+\varepsilon)c < 1$, we have that the right hand side is finite. Finally, we write

$$\sum_{k=1}^{\infty} e^{X_k} c^k = \sum_{k=1}^{M-1} e^{X_k} c^k + \sum_{k=M}^{\infty} e^{X_k} c^k.$$
Since both of these sums are almost surely finite, we have proven the result in the case $E[X_1] < \infty$.

Now suppose $E[X_1] = \infty$. By Lemma 1.8, $\limsup X_n/n = \infty$, so for any $K$ we can find $n_1, n_2, \ldots \to \infty$ such that $X_{n_i}/n_i \ge K$ for all $i$. Fix $c \in (0,1)$, and choose the $n_i$ as above with $K = -\log c$. Then

$$\sum_{n=1}^{\infty} e^{X_n} c^n \ge \sum_{i=1}^{\infty} e^{X_{n_i}} c^{n_i} \ge \sum_{i=1}^{\infty} e^{-n_i \log c}\, c^{n_i} = \sum_{i=1}^{\infty} 1 = \infty$$

as required.

We will now use these two lemmas to prove that (c) implies (b) in the Kesten-Stigum theorem. We need one more tool, which is proved in [2].

Lemma 1.10. Suppose that $\mu$ and $\nu$ are probability measures with $\frac{d\mu}{d\nu}\big|_{\mathcal{F}_n} = X_n$. Let $X_\infty = \limsup_{n\to\infty} X_n$. Then

$$X_\infty < \infty\ \mu\text{-almost surely} \iff E_\nu[X_\infty] = 1, \qquad X_\infty = \infty\ \mu\text{-almost surely} \iff E_\nu[X_\infty] = 0.$$

We can now prove the Kesten-Stigum theorem.

Proposition 1.11. Let $W$ and $L$ be as in Theorem 1.6. Then $E[L\log^+ L] < \infty \iff E[W] = 1$.

Proof. We first show that $1/W_n$ is a supermartingale under $Q$. Recall from the definition that $E_Q[1/W_n\,|\,\mathcal{F}_{n-1}]$ is the (almost surely unique) $\mathcal{F}_{n-1}$-measurable random variable $Y$ such that $E_Q[(1/W_n)\mathbb{1}_A] = E_Q[Y\mathbb{1}_A]$ for all $A \in \mathcal{F}_{n-1}$. Now,

$$E_Q\left[\frac{1}{W_n}\mathbb{1}_A\right] = E_P\big[\mathbb{1}_A \mathbb{1}_{\{W_n>0\}}\big] = E_P\big[\mathbb{1}_A \mathbb{1}_{\{W_{n-1}>0\}}\, P(W_n > 0\,|\,\mathcal{F}_{n-1})\big] = E_Q\left[\frac{1}{W_{n-1}}\mathbb{1}_A\, P(W_n > 0\,|\,\mathcal{F}_{n-1})\right].$$

Therefore

$$E_Q\left[\frac{1}{W_n}\,\middle|\,\mathcal{F}_{n-1}\right] = \frac{1}{W_{n-1}}\, P(W_n > 0\,|\,\mathcal{F}_{n-1}) \le \frac{1}{W_{n-1}}.$$

So $1/W_n$ is a non-negative supermartingale under $Q$, and thus converges almost surely to a limit $1/W_\infty$ (which may be 0). In particular, $\limsup_{n\to\infty} W_n = \liminf_{n\to\infty} W_n$, $Q$-almost surely.

Now we demonstrate that $\limsup_{n\to\infty} E_Q[W_n\,|\,\mathcal{G}_\infty]$ is almost surely finite if $E[L\log^+ L] < \infty$. From Proposition 1.7, we have

$$E_Q[W_n\,|\,\mathcal{G}_\infty] \le 1 + \sum_{i=0}^{n-1} (L_{\xi_i}-1)\, m^{-(i+1)} \le 1 + \sum_{i=1}^{n} e^{\log^+(L_{\xi_{i-1}}-1)}\, m^{-i}.$$

Clearly, by Lemma 1.9 (applied with $c = 1/m < 1$) this converges when $E_Q[\log^+(L_{\xi_i}-1)] \le E_Q[\log^+ L_{\xi_i}] < \infty$. Now,
$$E_Q[\log^+ L_{\xi_i}] = \sum_{k=0}^{\infty} Q(L_{\xi_i} = k)\log^+ k = \sum_{k=0}^{\infty} \frac{k p_k}{m}\log^+ k = \frac{1}{m}\, E[L\log^+ L].$$

Hence $E[L\log^+ L] < \infty \implies \limsup E_Q[W_n\,|\,\mathcal{G}_\infty] < \infty$ almost surely. Now Fatou's lemma, combined with the fact (proven above) that $\limsup_{n\to\infty} W_n = \liminf_{n\to\infty} W_n$ $Q$-almost surely, gives

$$E_Q[\limsup W_n\,|\,\mathcal{G}_\infty] = E_Q[\liminf W_n\,|\,\mathcal{G}_\infty] \le \liminf E_Q[W_n\,|\,\mathcal{G}_\infty] \le \limsup E_Q[W_n\,|\,\mathcal{G}_\infty] < \infty.$$

Therefore $\limsup W_n < \infty$ $Q$-almost surely, so by Lemma 1.10 we have $E_P[W] = 1$.

2 A discrete-time branching process on the unit square

2.1 Preliminaries and heuristics

Our process begins with the unit square. It splits into two rectangles of area $U$ and $1-U$ respectively, where $U$ is a random variable uniform on $[0,1]$. In each subsequent generation, each of the rectangles splits in a manner similar to the original square. The orientation of these splits is not relevant to the following results, so we assume that each split occurs either vertically or horizontally with probability $\frac12$. Figure 2 shows a simulated outcome of this process after ten generations.

A natural question to ask is: what size would we expect the smallest and largest rectangles to be after $n$ generations? There are many ways one could interpret the notion of size in this context; for our purposes, we will be considering the area of each rectangle to be its size. After $n \ge 1$ generations, the area of any given rectangle in the $n$th generation, here denoted $A_n$, can be written

$$A_n = \prod_{k=1}^{n} U_k,$$

where $(U_k : k \in \mathbb{N})$ is a sequence of iid uniform$(0,1)$ random variables. We will denote the set of rectangles in generation $n$ by $N_n$. We will now use our simulation to plot the areas of the rectangles in each generation, and hopefully provide some heuristic justification for the main result of this paper.
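The splitting mechanism is straightforward to simulate. The sketch below is illustrative (the function name `split_areas` and all parameters are our own choices, not the paper's): it grows the process, checks the tiling invariant that the $2^n$ areas in generation $n$ always sum to 1, and computes the empirical maximal slope $\max_{u\in N_n} (\log A_u)/n$ that the plots suggest is constant in $n$.

```python
import math
import random

def split_areas(areas, rng):
    """One generation: each rectangle of area a splits into a*U and a*(1-U).
    Orientation is irrelevant to the areas, so it is not tracked."""
    out = []
    for a in areas:
        u = rng.random()
        out.extend((a * u, a * (1 - u)))
    return out

rng = random.Random(0)
n = 12
areas = [1.0]
for _ in range(n):
    areas = split_areas(areas, rng)

# The 2^n rectangles tile the unit square, so their areas sum to 1
# (up to floating-point error) in every generation.
assert len(areas) == 2 ** n
assert abs(sum(areas) - 1.0) < 1e-9

# Heuristic from the log-area plot: the maximal slope is a negative constant.
slope = max(math.log(a) for a in areas) / n
```

Running this for several values of `n` and plotting `slope` against `n` reproduces the linear upper boundary discussed below.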
Figure 2: The process after 10 generations.
Here we see what our intuition tells us: the areas of the rectangles decrease exponentially quickly. This plot, however, is not particularly useful due to the nature of exponential decay. To rectify this, we plot the logarithm of the area of each rectangle instead.

Clearly, the log-areas in this plot decrease linearly, somewhat confirming our suspicion that the decay of the areas is exponential. It also appears that there are upper and lower boundary lines between which all of the rectangles fall. It is the upper boundary line whose existence we shall demonstrate over the course of this section, along with an explicit expression for its slope.

Before we do this, we shall construct a spine for this process in a manner very similar to that in Section 1.2. We will henceforth refer to the rectangles as particles. As before, we shall define the spine recursively. Define $\xi_0$ as the root particle. For each $\xi_k$, uniformly select one of its two children, and set that particle to be $\xi_{k+1}$. Finally, define $\Xi = \{\xi_k\,|\,k \in \mathbb{N}_0\}$. This forms our spine.

2.2 Change of measure and Spine Decomposition

Definition 2.1 (Important filtrations).

1. $\mathcal{F}_n$ is the $\sigma$-algebra generated by the first $n$ generations of the process

2. $\mathcal{G}_n$ is the $\sigma$-algebra generated by the first $n$ generations of the spine

3. $\tilde{\mathcal{F}}_n = \sigma(\mathcal{F}_n \cup \mathcal{G}_n)$
Proposition 2.2. Let $\alpha > -1$. Then:

1. The process $W^{(\alpha)} = (W^{(\alpha)}_n)_{n\in\mathbb{N}}$ defined by $W^{(\alpha)}_n = \left(\frac{\alpha+1}{2}\right)^n \sum_{u\in N_n} A_u^\alpha$ is an $\mathcal{F}_n$-martingale.

2. The process $M = (M_n)_{n\in\mathbb{N}}$ defined by $M_n = A_{\xi_n}^\alpha (\alpha+1)^n$ is an $\tilde{\mathcal{F}}_n$-martingale.

We now define a new measure $Q$ by setting $\frac{dQ}{dP}\big|_{\tilde{\mathcal{F}}_n} = M_n$. In the following propositions, we will show that the spine decomposition converges $Q$-almost surely to a finite limit. First, we need to consider how our process changes under this new measure.

Lemma 2.3. Let $u \in N_n$. Then for all $x \in [0,1]$ and $\beta \in \mathbb{N}$:

(i) $Q(U_u \le x) = \begin{cases} x^{\alpha+1} & u \in \Xi \\ x & u \notin \Xi \end{cases}$

(ii) $E_Q[U_u^\beta] = \begin{cases} \dfrac{\alpha+1}{\alpha+\beta+1} & u \in \Xi \\[4pt] \dfrac{1}{\beta+1} & u \notin \Xi \end{cases}$

(iii) $E_Q[\log U_{\xi_n}] = -\dfrac{1}{\alpha+1}$

Proof. Consider the case that $u \in \Xi$. Then $u = \xi_n$, and:

$$Q(U_{\xi_n} \le x) = E[M_n\,|\,U_{\xi_n}\le x]\times P(U_{\xi_n}\le x) = x(\alpha+1)^n\, E\big[A_{\xi_{n-1}}^\alpha U_{\xi_n}^\alpha\,\big|\,U_{\xi_n}\le x\big] = x(\alpha+1)^n\, E\big[A_{\xi_{n-1}}^\alpha\big]\, E\big[U_{\xi_n}^\alpha\,\big|\,U_{\xi_n}\le x\big]$$
$$= x(\alpha+1)^n\times\frac{1}{(\alpha+1)^{n-1}}\times\int_0^x \frac{1}{x}\, t^\alpha\, dt = x(\alpha+1)\times\frac{1}{x}\times\frac{x^{\alpha+1}}{\alpha+1} = x^{\alpha+1}.$$

Define $g_u(x)$ to be the pdf of $U_u$ under $Q$, and let $u = \xi_n$. We now know that

$$g_{\xi_n}(x) := \frac{d}{dx}\, Q(U_{\xi_n}\le x) = (\alpha+1)x^\alpha.$$

Hence

$$E_Q[U_u^\beta] = (\alpha+1)\int_0^1 x^{\alpha+\beta}\, dx = \frac{\alpha+1}{\alpha+\beta+1},$$

as required. The results for $u \notin \Xi$ are trivial. Finally, we have:

$$E_Q[\log U_{\xi_n}] = (\alpha+1)\int_0^1 x^\alpha \log x\, dx = -\frac{\alpha+1}{(\alpha+1)^2} = -\frac{1}{\alpha+1}.$$
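Lemma 2.3 can be checked by Monte Carlo. Part (i) says the spine's uniform has cdf $x^{\alpha+1}$ under $Q$, so inverse-cdf sampling gives $U = V^{1/(\alpha+1)}$ with $V \sim$ uniform$(0,1)$. The sketch below (illustrative only; the value $\alpha = 1.5$ and sample size are arbitrary) checks parts (ii) and (iii) against their closed forms.

```python
import math
import random

alpha = 1.5                      # any alpha > -1; 1.5 is an arbitrary test value
rng = random.Random(42)
N = 200_000

# Inverse-cdf sampling from Q(U <= x) = x^(alpha+1), Lemma 2.3(i).
samples = [rng.random() ** (1 / (alpha + 1)) for _ in range(N)]

mean_log = sum(math.log(u) for u in samples) / N   # should be -1/(alpha+1), part (iii)
mean_sq = sum(u * u for u in samples) / N          # beta = 2 in part (ii)

assert abs(mean_log - (-1 / (alpha + 1))) < 0.01
assert abs(mean_sq - (alpha + 1) / (alpha + 3)) < 0.01
```

With $\alpha = 1.5$ the targets are $E_Q[\log U_{\xi_n}] = -0.4$ and $E_Q[U_{\xi_n}^2] = 2.5/4.5 \approx 0.556$.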
Proposition 2.4 (The Spine Decomposition). Let $\alpha \ge 0$. Then:

$$E_Q\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right] = A_{\xi_n}^\alpha\left(\frac{\alpha+1}{2}\right)^n + \frac{\alpha+1}{2}\sum_{k=0}^{n-1}\left(\frac{\alpha+1}{2}\right)^k\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha.$$

Proof. Let us consider which particles exist at time $n$. A trivial examination of the binary tree structure reveals that for any $k < n$, there will be $2^{n-k-1}$ particles alive at time $n$ whose last spinal ancestor was $\xi_k$. We will denote the set of non-spine descendants of $\xi_k$ alive at time $n$ by $C_n(\xi_k)$. Each $u \in C_n(\xi_k)$ has area distributed as $A_{\xi_k}\times\prod_{i=1}^{n-k} U_i$, where the $U_i$'s are uniform$(0,1)$ random variables.

Now, we can once again use the binary structure of the process to remove a degree of randomness from this expression. $\xi_k$ has two children. One of these children is in the spine, and hence contributes no descendants to $C_n(\xi_k)$. However, the other child has area $A_{\xi_k} - A_{\xi_{k+1}}$, and its descendants at time $n$ form precisely the set $C_n(\xi_k)$. As such, each $u \in C_n(\xi_k)$ has area distributed as $(A_{\xi_k}-A_{\xi_{k+1}})\times\prod_{i=1}^{n-k-1} U_i$. The keen observer will notice, however, that $\big|\bigcup_k C_n(\xi_k)\big| = 2^n - 1$; we are missing $\xi_n$, which trivially has area $A_{\xi_n}$. This completes our characterisation of the particles in $N_n$. Hence:

$$E_Q\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right] = \left(\frac{\alpha+1}{2}\right)^n\left(E_Q\big[A_{\xi_n}^\alpha\,\big|\,\mathcal{G}_\infty\big] + \sum_{k=0}^{n-1} 2^{n-k-1}\, E_Q\left[\left((A_{\xi_k}-A_{\xi_{k+1}})\prod_{i=1}^{n-k-1} U_i\right)^{\!\alpha}\,\middle|\,\mathcal{G}_\infty\right]\right).$$

Now, the $A_{\xi_k}$'s are $\mathcal{G}_\infty$-measurable, so we can take them out of the expectations. The $U_i$'s are also independent of $\mathcal{G}_\infty$, so we simply take their $Q$-expectations. By Lemma 2.3(ii), we have that $E_Q[U_i^\alpha] = \frac{1}{\alpha+1}$. Hence our full expression for the spine decomposition is

$$E_Q\left[W^{(\alpha)}_n\,\middle|\,\mathcal{G}_\infty\right] = \left(\frac{\alpha+1}{2}\right)^n A_{\xi_n}^\alpha + \sum_{k=0}^{n-1}\left(\frac{\alpha+1}{2}\right)^n\left(\frac{2}{\alpha+1}\right)^{n-k-1}\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha;$$

some simple algebraic manipulation gives the required result.

Proposition 2.5. Set $\alpha > 0$ such that $\frac{\alpha+1}{2}\, e^{-\alpha/(\alpha+1)} < 1$. Then:

$$\limsup_{n\to\infty} E_Q\big[W^{(\alpha)}_n\,\big|\,\mathcal{G}_\infty\big] < \infty \quad\text{almost surely.}$$

Proof. First we will consider the convergence of $\sum_{k=0}^{\infty}\left(\frac{\alpha+1}{2}\right)^k\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha$. We will first rewrite this as

$$\sum_{k=0}^{\infty}\left(\frac{\alpha+1}{2}\, e^{\frac{\alpha}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right)}\right)^k.$$
We have that:

$$\frac{1}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right) = \frac{1}{k}\log\left((1-U_{\xi_{k+1}})\prod_{i=1}^{k} U_{\xi_i}\right) = \frac{\log(1-U_{\xi_{k+1}})}{k} + \frac{1}{k}\sum_{i=1}^{k}\log U_{\xi_i}.$$
We now have two summands to consider. By the SLLN,

$$\frac{1}{k}\sum_{i=1}^{k}\log U_{\xi_i} \to E_Q[\log U_{\xi_1}]$$

$Q$-almost surely, and by Lemma 2.3(iii) this limit is $-\frac{1}{\alpha+1}$. We will now consider $\frac{1}{k}\log(1-U_{\xi_{k+1}})$. By Lemmas 1.8 and 2.3, we have that $-\frac{1}{k}\log(1-U_{\xi_{k+1}})$ converges to 0; hence we also have $\frac{1}{k}\log(1-U_{\xi_{k+1}}) \to 0$, $Q$-almost surely. This gives us a precise limit:

$$\lim_{k\to\infty}\frac{1}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right) = -\frac{1}{\alpha+1}.$$

Hence, we have that

$$\lim_{k\to\infty}\frac{\alpha+1}{2}\exp\left(\frac{\alpha}{k}\log\left(A_{\xi_k}-A_{\xi_{k+1}}\right)\right) = \frac{\alpha+1}{2}\, e^{-\frac{\alpha}{\alpha+1}},$$

so by comparison with a geometric series, the sum $\sum_{k=0}^{\infty}\left(\frac{\alpha+1}{2}\right)^k\left(A_{\xi_k}-A_{\xi_{k+1}}\right)^\alpha$ converges $Q$-almost surely if $\frac{\alpha+1}{2}e^{-\alpha/(\alpha+1)} < 1$. It remains now to find which values of $\alpha$ ensure the convergence of $A_{\xi_n}^\alpha\left(\frac{\alpha+1}{2}\right)^n$. Now:

$$\lim_{n\to\infty}\frac{1}{n}\log A_{\xi_n} = \lim_{n\to\infty}\frac{1}{n}\log\prod_{i=1}^{n} U_{\xi_i} = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\log U_{\xi_i} = E_Q[\log U_{\xi_1}] = -\frac{1}{\alpha+1}.$$

Hence, similarly to before,

$$\limsup_{n\to\infty}\left(\frac{\alpha+1}{2}\right)^n A_{\xi_n}^\alpha = \limsup_{n\to\infty}\left(\frac{\alpha+1}{2}\, e^{\frac{\alpha}{n}\log A_{\xi_n}}\right)^n,$$

and since the base converges to $\frac{\alpha+1}{2}e^{-\alpha/(\alpha+1)}$, this is finite whenever $\frac{\alpha+1}{2}e^{-\alpha/(\alpha+1)} < 1$. This fact, combined with the convergence of the aforementioned sum on the same range of $\alpha$, gives us our required condition for convergence.

2.3 Asymptotic growth of the maximal particle

The following three lemmas are essential in proving our main result.

Lemma 2.6. Set $\alpha$ such that $\frac{\alpha+1}{2}e^{-\alpha/(\alpha+1)} < 1$. Then:

$$E\left[\lim_{n\to\infty} W^{(\alpha)}_n\right] = 1.$$

Proof. We first use the argument of Proposition 1.11 to deduce that $\limsup_{n\to\infty} W^{(\alpha)}_n = \liminf_{n\to\infty} W^{(\alpha)}_n$, $Q^{(\alpha)}$-almost surely. We now have

$$E_Q\big[\limsup W^{(\alpha)}_n\,\big|\,\mathcal{G}_\infty\big] = E_Q\big[\liminf W^{(\alpha)}_n\,\big|\,\mathcal{G}_\infty\big] \le \liminf E_Q\big[W^{(\alpha)}_n\,\big|\,\mathcal{G}_\infty\big] \le \limsup E_Q\big[W^{(\alpha)}_n\,\big|\,\mathcal{G}_\infty\big] < \infty,$$

from a combination of Fatou's Lemma and Proposition 2.5. Therefore $\lim W^{(\alpha)}_n < \infty$ $Q^{(\alpha)}$-almost surely, and by Lemma 1.10 we have that $E\big[\lim_{n\to\infty} W^{(\alpha)}_n\big] = 1$ as required.
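The critical value of $\alpha$ in Proposition 2.5 can be located numerically, and it ties directly into the quantity $W(-\frac{1}{2e})$ appearing in Lemma 2.10 below. The sketch is ours, not the paper's: it bisects for the root $\alpha^*$ of $\frac{\alpha+1}{2}e^{-\alpha/(\alpha+1)} = 1$ (the factor is increasing in $\alpha$), then verifies that $-1/(\alpha^*+1)$ is both the value of $f(x) = \frac1x\log\frac{2}{x+1}$ at $\alpha^*$ and a solution of $w e^w = -\frac{1}{2e}$, using only the standard library.

```python
import math

def g(a):
    """Geometric factor from Proposition 2.5: ((a+1)/2) * exp(-a/(a+1))."""
    return (a + 1) / 2 * math.exp(-a / (a + 1))

def f(x):
    """The rate function (1/x) * log(2/(x+1)) from Lemma 2.10."""
    return math.log(2 / (x + 1)) / x

# g is increasing with g(0.1) < 1 < g(10), so bisect for the critical alpha*.
lo, hi = 0.1, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 1 else (lo, mid)
alpha_star = lo

# At the critical point, f(alpha*) = -1/(alpha*+1) =: f*, and f* solves
# f* e^{f*} = -1/(2e), i.e. f* = W(-1/(2e)) on the principal branch.
f_star = -1 / (alpha_star + 1)
assert abs(f(alpha_star) - f_star) < 1e-9
assert abs(f_star * math.exp(f_star) - (-1 / (2 * math.e))) < 1e-9
```

Numerically $\alpha^* \approx 3.31$ and $f^* \approx -0.232$, which is the slope of the upper boundary line seen in the plots of Section 2.1.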
Lemma 2.7. Let $W^{(\alpha)}_\infty := \limsup_{n\to\infty} W^{(\alpha)}_n$ and let $A \in \sigma\big(\bigcup_n \mathcal{F}_n\big)$. Then:

$$Q^{(\alpha)}(A) = E_P\left[W^{(\alpha)}_\infty \mathbb{1}_A\right] + Q^{(\alpha)}\left(A \cap \big\{W^{(\alpha)}_\infty = \infty\big\}\right).$$

The proof is a standard result of measure theory; see [2].

Lemma 2.8. Let $E_\delta$ be the event $\big\{\sup_{u\in N_n}\frac1n\log A_u > \delta$ i.o.$\big\}$, where "i.o." stands for "infinitely often". Then $P(E_\delta)$ is either 0 or 1.

Proof. We shall consider the probability of the complement of $E_\delta$ given the filtration $\mathcal{F}_k$, writing "e.v." for "eventually" (that is, for all sufficiently large $n$), and $v \le u$ to mean that $u$ is a descendant of $v$. Using the conditional independence of the subtrees rooted at the particles of $N_k$, and the fact that each such subtree is a scaled copy of the whole process, we have for any $\varepsilon > 0$:

$$P(E^c_\delta\,|\,\mathcal{F}_k) = P\left(\forall v\in N_k,\ \sup_{u\in N_n,\, v\le u}\frac{\log A_u}{n}\le\delta \text{ e.v.}\,\middle|\,\mathcal{F}_k\right) = \prod_{v\in N_k} P\left(\sup_{u\in N_n,\, v\le u}\frac{\log A_u}{n}\le\delta \text{ e.v.}\,\middle|\,\mathcal{F}_k\right)$$
$$= \prod_{v\in N_k} P\left(\sup_{u\in N_{n-k}}\frac{\log A_u}{n}\le\delta \text{ e.v.}\right) = \prod_{v\in N_k} P\left(\sup_{u\in N_n}\frac{\log A_u}{n}\cdot\frac{n}{n+k}\le\delta \text{ e.v.}\right) \le \prod_{v\in N_k} P\left(\sup_{u\in N_n}\frac{\log A_u}{n}\le\delta+\varepsilon \text{ e.v.}\right) = P\big(E^c_{\delta+\varepsilon}\big)^{|N_k|}.$$

To summarise, we now have that $P(E^c_\delta\,|\,\mathcal{F}_k) \le P(E^c_{\delta+\varepsilon})^{|N_k|}$. We can now take the limit as $\varepsilon\to 0$ and take the expectation of both sides, recalling that $|N_k| = 2^k$, to obtain $P(E^c_\delta) \le P(E^c_\delta)^{2^k}$ for every $k$. Since $x \le x^{2^k}$ for all $k$ forces $x \in \{0,1\}$, this proves the result.

The following definition and lemma are purely technical, serving only to simplify the statement of the main theorem.

Definition 2.9 (The Lambert W function). For real $z \ge -1/e$, $W(z)$ denotes the principal branch of the Lambert W function: the unique solution $W(z) \ge -1$ of the equation

$$z = W(z)\, e^{W(z)}.$$

Lemma 2.10.

$$\min_{x>0}\frac1x\log\frac{2}{x+1} = W\left(-\frac{1}{2e}\right).$$

Proof. Let $f(x) = \frac1x\log\frac{2}{x+1}$. Simple calculus shows that

$$f'(x) = -\frac{1}{x^2}\log\frac{2}{x+1} - \frac{1}{x(x+1)}.$$
Setting $f'(x) = 0$, we obtain

$$\frac1x\log\frac{2}{x+1} = -\frac{1}{x+1}. \qquad (1)$$

Let $x^*$ denote the solution to this equation, and let $f^*$ denote the minimum of $f$. Then clearly $f^*$ is given by

$$f^* = f(x^*) = -\frac{1}{x^*+1}.$$

Rearranging (1): $\log\frac{2}{x^*+1} = -\frac{x^*}{x^*+1} = -1+\frac{1}{x^*+1}$, so $\frac{2}{x^*+1} = e^{-1}e^{1/(x^*+1)}$, which gives

$$-\frac{1}{2e} = -\frac{1}{x^*+1}\, e^{-\frac{1}{x^*+1}}.$$

Hence $f^*$ is a solution to the equation

$$-\frac{1}{2e} = f^*\, e^{f^*}.$$

Since $f^* = -\frac{1}{x^*+1} \in (-1,0)$, Definition 2.9 gives $f^* = W\big(-\frac{1}{2e}\big)$.

Theorem 2.11. Let both $A_u$ and $N_n$ be defined as in Section 2.1. Then

$$\limsup_{n\to\infty}\max_{u\in N_n}\frac{\log A_u}{n} = W\left(-\frac{1}{2e}\right)$$

almost surely.

Proof. This proof is split into two parts, in which we will derive both upper and lower bounds for the limit. We will begin with the former. Fix some $\gamma\in\mathbb{R}$, and suppose that there exists some particle $u\in N_n$ such that $\frac{\log A_u}{n} > \gamma$. Then

$$W^{(\alpha)}_n = \left(\frac{\alpha+1}{2}\right)^n\sum_{v\in N_n} A_v^\alpha \ge \left(\frac{\alpha+1}{2}\right)^n A_u^\alpha > \left(\frac{\alpha+1}{2}\right)^n e^{\alpha\gamma n}.$$

Now, since $W^{(\alpha)}_n$ is a non-negative martingale, it converges almost surely to a finite limit. Hence if $\frac{\alpha+1}{2}e^{\alpha\gamma} > 1$, the event $\big\{\max_{u\in N_n}\frac{\log A_u}{n} > \gamma\big\}$ can occur only finitely often. Rewriting this condition, we see that for every $\alpha > 0$, the event $\big\{\max_{u\in N_n}\frac{\log A_u}{n} > \gamma\big\}$ happens only finitely often whenever $\gamma > \frac1\alpha\log\frac{2}{\alpha+1}$. This bound depends on $\alpha$, so we optimise over $\alpha$ via Lemma 2.10, giving us $\min_{\alpha>0}\frac1\alpha\log\frac{2}{\alpha+1} = W\big(-\frac{1}{2e}\big)$. Therefore, we have

$$\limsup_{n\to\infty}\max_{u\in N_n}\frac{\log A_u}{n} \le W\left(-\frac{1}{2e}\right)$$

almost surely.

Now, we will use our spine decomposition to provide the lower bound. Let $W^{(\alpha)}_\infty := \limsup W^{(\alpha)}_n$. By Lemma 2.6 we have that $E\big[W^{(\alpha)}_\infty\big] = 1$, and hence by Lemma 1.10, $Q^{(\alpha)}\big(W^{(\alpha)}_\infty = \infty\big) = 0$. Now, combining this with Lemma 2.7, we have
$$Q^{(\alpha)}(A) = E_P\left[W^{(\alpha)}_\infty \mathbb{1}_A\right].$$

Fix $\varepsilon > 0$ and let $E$ be the event $\big\{\sup_{u\in N_n}\frac1n\log A_u > -\frac{1}{\alpha+1}-\varepsilon$ i.o.$\big\}$. In Proposition 2.5, we showed that $\lim_{n\to\infty}\frac1n\log A_{\xi_n} = -\frac{1}{\alpha+1}$, $Q^{(\alpha)}$-almost surely. The following sequence of set inclusions will make it obvious that $Q^{(\alpha)}(E) = 1$:

$$\left\{\lim_{n\to\infty}\frac1n\log A_{\xi_n} = -\frac{1}{\alpha+1}\right\} \subseteq \left\{\frac1n\log A_{\xi_n} > -\frac{1}{\alpha+1}-\varepsilon \text{ i.o.}\right\} \subseteq \left\{\sup_{u\in N_n}\frac1n\log A_u > -\frac{1}{\alpha+1}-\varepsilon \text{ i.o.}\right\} = E.$$

We also have that $Q^{(\alpha)}(E) = E_P\big[W^{(\alpha)}_\infty \mathbb{1}_E\big]$, hence $P(E) > 0$. Finally, by Lemma 2.8 (applied with $\delta = -\frac{1}{\alpha+1}-\varepsilon$), we have that $P(E)$ is either 0 or 1. We have just shown that $P(E) > 0$, therefore we necessarily have that $P(E) = 1$, i.e.

$$\limsup_{n\to\infty}\sup_{u\in N_n}\frac{\log A_u}{n} \ge -\frac{1}{\alpha+1}-\varepsilon$$

almost surely, for every $\varepsilon > 0$ and every $\alpha$ satisfying the condition of Proposition 2.5. Letting $\varepsilon\to 0$, and then letting $\alpha$ increase to the critical value at which $\frac{\alpha+1}{2}e^{-\alpha/(\alpha+1)} = 1$ (which by equation (1) is exactly $x^*$, so that $-\frac{1}{\alpha+1}$ increases to $f^* = W(-\frac{1}{2e})$), we obtain

$$\limsup_{n\to\infty}\max_{u\in N_n}\frac{\log A_u}{n} \ge W\left(-\frac{1}{2e}\right),$$

proving our result.

References

[1] H. Kesten and B. P. Stigum. A limit theorem for multidimensional Galton-Watson processes. Ann. Math. Statist., 37(5):1211–1223, 1966.

[2] R. Lyons and Y. Peres. Probability on Trees and Networks. Cambridge University Press, 2016.
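As a closing numerical illustration (ours, not the paper's), the sketch below approximates $W(-\frac{1}{2e})$ by Newton's method on $w e^w = -\frac{1}{2e}$ (the helper name `lambert_w` is hypothetical) and simulates the splitting process; at finite $n$ the maximal slope $\max_u(\log A_u)/n$ is negative and, in a typical run, close to but below the asymptote $W(-\frac{1}{2e}) \approx -0.232$.

```python
import math
import random

def lambert_w(z, w=0.0):
    """Principal-branch Lambert W by Newton's method on f(w) = w*e^w - z."""
    for _ in range(100):
        w -= (w * math.exp(w) - z) / (math.exp(w) * (1 + w))
    return w

target = lambert_w(-1 / (2 * math.e))        # approx -0.232

rng = random.Random(7)
n = 18
areas = [1.0]
for _ in range(n):
    nxt = []
    for a in areas:
        u = rng.random()
        nxt += [a * u, a * (1 - u)]
    areas = nxt

best = max(math.log(a) for a in areas) / n   # empirical maximal slope
assert -1.0 < best < 0.0
```

Repeating the simulation for growing $n$ shows `best` creeping up towards `target`, as Theorem 2.11 predicts.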