Stochastic oscillations and their power
spectrum
Master's thesis
by Jordi Giner-Baldó, born 23 April 1990,
submitted to the Department of Physics of the Freie Universität Berlin
for the degree of Master of Science
on 26 January 2016
Matriculation number: 4756284
Address: Mühsamstr. 36, 10249 Berlin
Email: jorgibal@zedat.fu-berlin.de
External supervisor: Prof. Dr. Benjamin Lindner, Institut für Physik, HU Berlin and BCCN Berlin
Supervisor at the department: Prof. Dr. Roland Netz, Department of Physics, FU Berlin
Second examiner: Priv.-Doz. Dr. Stefanie Russ, Department of Physics, FU Berlin
Abstract
Stochastic oscillations - also known as narrow-band fluctuations - are ubiquitous in
biological systems. Their mathematical description is challenging: it often involves
non-equilibrium and non-linear models subject to temporally correlated fluctuations.
Measures often used to characterize stochastic oscillations are the autocorrelation
function and the power spectrum. In this thesis we develop and apply analytical,
semianalytical and numerical approaches to these measures that provide insight
into spectral quantities such as the frequency of the oscillations and their
coherence, quantified by the quality factor.
A number of methods have been used in the literature to model stochastic os-
cillations. We briefly review some of these theoretical models before focusing on
two specific instances of stochastic oscillators: an integrate-and-fire neuron driven
by temporally correlated fluctuations, i.e. colored noise; and the noisy heteroclinic
oscillator introduced by Thomas and Lindner (2014), a paradigmatic example of a
system that oscillates only in the presence of noise. On the one hand, we study the ef-
fect of two different types of colored-noise driving on the power spectrum of a perfect
integrate-and-fire neuron using an analytical approach by Schwalger et al. (2015). The
two noise models considered are a low-pass filtered noise modelled as an Ornstein-
Uhlenbeck process and harmonic noise. On the other hand, we use numerical and
semianalytical matrix methods to calculate the power spectrum of the noisy hetero-
clinic oscillator. These methods are not accurate and/or efficient in the small noise
limit, where the oscillations become slower and more coherent. In this limit, we
provide an analytical approximation for the power spectrum based on the theory of
two-state processes and existing results from the theory of random dynamical sys-
tems for hyperbolic fixed points. The analytical approaches used in this thesis are
based on the Fokker-Planck formalism. All the results are compared to stochastic
simulations.
Contents
1. Introduction 1
1.1. Motivation: noisy oscillations in biology . . . . . . . . . . . . . . . . . . . . . . . 1
1.2. Measures of stochastic oscillations . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3. Models of stochastic oscillations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4. Models of noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5. Aim and outline of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2. Integrate-and-fire neuron driven by colored noise 13
2.1. Integrate-and-fire models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2. Colored noise in neural systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3. Analytical approach to colored noise . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4. Results for a PIF model driven by colored noise . . . . . . . . . . . . . . . . . . . 16
3. Noise-induced oscillations in a heteroclinic system 23
3.1. General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2. Approach to the power spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3. Solving the equations by matrix methods . . . . . . . . . . . . . . . . . . . . . . 29
3.4. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.5. Dichotomous approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4. Summary and outlook 57
Appendices 59
A. Expansion of L_FP into a basis 59
B. Tridiagonal recurrence relation 60
C. Expansion of the steady-state probability current 62
References 63
1. Introduction
1.1. Motivation: noisy oscillations in biology
Noisy or stochastic oscillations are ubiquitous in biology. Examples include phenomena from
a broad spectrum of fields. For example, the intracellular concentration of calcium oscillates,
acting as a signal that regulates cellular activity (Kummer et al., 2005); in the vestibular and
auditory system, the transducing cells (hair bundles) show spontaneous mechanical oscillations
(Martin et al., 2003); in neural systems, representative cases can be found both at the single-
neuron level, where neurons often fire rather regular sequences of action potentials (Nawrot
et al., 2007), and at the population level, in the form of α-, β- and γ- oscillations in the brain
(Xing et al., 2012).
Figure 1.1: Some examples of noisy oscillations in biology. (a) Mechano-sensory hair cells
(Martin et al., 2003). (b) Single neuron's regular firing activity (Nawrot et al., 2007). (c)
Intracellular calcium oscillations (Kummer et al., 2005). (d) Brain activity: α-, β-, γ-
oscillations (Xing et al., 2012).
The common feature of these oscillations is that their coherence is lost over long time scales,
showing random fluctuations in both phase and amplitude. In the spectral domain, they are
characterized by a preferred frequency band of spectral power (see Figure 1.2), hence they are
sometimes termed narrow-band fluctuations (Stratonovich, 1963; Bauermeister et al., 2013).
Stochastic oscillations pose important challenges from the theoretical point of view, because
most stochastic oscillations observed in living organisms: (i) operate beyond thermodynamic
equilibrium, (ii) are described by non-linear dynamical systems, (iii) are often subject to tempo-
rally correlated fluctuations (in contrast with simpler uncorrelated, i.e. white, noise). Therefore,
new approaches to the problem are required in order to better characterize the autocorrela-
tion function and power spectrum of these stochastic oscillations, quantities that can be easily
accessed experimentally.
Figure 1.2: Time-evolution (A) and power spectrum (B) of the spontaneous mechanical oscil-
lations of a hair bundle from the sacculus of the bullfrog’s inner ear. Note the peak in the power
spectrum as a signature of stochastic oscillations. From (Martin et al., 2001).
1.2. Measures of stochastic oscillations
Given a stationary stochastic process x(t), one can define its autocorrelation function (Gardiner,
2009) as

\[ C_{xx}(\tau) = \langle x(t)\,x(t+\tau) \rangle - \langle x(t) \rangle^2, \tag{1.1} \]

where ⟨·⟩ denotes an average over an ensemble of realizations of x(t). The autocorrelation
function quantifies how much two points of a trajectory which are lagged by an interval τ have
in common. One can also look at second-order statistics in the Fourier domain: the power
spectrum of the process x(t) is defined as (Gardiner, 2009)

\[ S_{xx}(f) = \lim_{T\to\infty} \frac{\langle \tilde{x}(f)\,\tilde{x}^*(f) \rangle}{T}, \]

where \tilde{x}(f) is the Fourier transform of a finite realization of x(t), i.e.

\[ \tilde{x}(f) = \int_0^T dt\, e^{2\pi i f t}\, x(t), \]
and T is the time window of the realization. The power spectrum essentially quantifies how the
variance ⟨(Δx)²⟩ is distributed over frequencies. In simulations, T must be sufficiently long in
order to provide a good approximation to the power spectrum. The autocorrelation function and
the power spectrum are related via the Wiener-Khinchin theorem (Gardiner, 2009)

\[ S_{xx}(f) = \int_{-\infty}^{+\infty} d\tau\, e^{2\pi i f \tau}\, C_{xx}(\tau). \tag{1.2} \]
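The definition above translates directly into a numerical estimator: average |x̃(f)|²/T over an ensemble of finite realizations, approximating the Fourier integral by a discrete Fourier transform. A minimal Python sketch (the function name is ours, for illustration):

```python
import numpy as np

def estimate_power_spectrum(trajectories, dt):
    """Estimate S_xx(f) = <|x~(f)|^2>/T from an ensemble of realizations
    (one realization per row), approximating the Fourier integral x~(f)
    by a discrete Fourier transform times the time step dt."""
    n = trajectories.shape[1]
    T = n * dt                                    # time window of each realization
    xf = np.fft.rfft(trajectories, axis=1) * dt   # discrete approximation of x~(f)
    S = np.mean(np.abs(xf) ** 2, axis=0) / T      # average over the ensemble
    f = np.fft.rfftfreq(n, d=dt)
    return f, S
```

For discretized white noise (variance 1/dt per time step, mimicking ⟨ξ(t)ξ(t′)⟩ = δ(t − t′)), this estimator returns the flat spectrum S ≈ 1 discussed in Section 1.4.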
Stochastic oscillations are characterized by a more or less narrow peak at a nonzero frequency
in the power spectrum, which indicates a preferred frequency band. The stochasticity of the
oscillation translates into a loss of coherence over long time scales (Stratonovich, 1963; Bauer-
meister et al., 2013) and therefore into the broadening of the peak. One must also address
the question of how to quantify the coherence of such oscillations. Here we use a well-known
measure from the theory of oscillators and resonators, the quality factor
\[ Q = \frac{f_p}{\Delta f}, \tag{1.3} \]
where fp is the center frequency of the characteristic peak in the power spectrum and ∆f its
bandwidth (full-width-at-half-maximum). This definition agrees with the intuition that narrower
peaks lead to more coherent oscillations and therefore to higher quality factors, as seen in
Figure 1.3.
Figure 1.3: Quality factors Q of a narrow and a broad peak in the power spectrum. Narrower
peaks lead to higher Q.
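In practice Q can be read off a sampled spectrum by locating the peak and walking outwards to the half-maximum points. A minimal sketch (the helper name is ours; on a coarse frequency grid, interpolation would sharpen the FWHM estimate):

```python
import numpy as np

def quality_factor(f, S):
    """Estimate Q = f_p / Delta_f from a sampled spectrum: f_p is the
    location of the highest peak at f > 0 and Delta_f the full width
    at half maximum around it, found by a simple bin walk."""
    ip = int(np.argmax(S[1:])) + 1      # skip the f = 0 bin
    half = S[ip] / 2.0
    il = ip
    while il > 0 and S[il] > half:      # walk left to the half-maximum point
        il -= 1
    ir = ip
    while ir < len(S) - 1 and S[ir] > half:  # walk right likewise
        ir += 1
    return f[ip] / (f[ir] - f[il])
```

For a Lorentzian peak of center f_p and FWHM Γ, the routine recovers Q ≈ f_p/Γ.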
1.3. Models of stochastic oscillations
1.3.1. Harmonic oscillator
A simple model that generates stochastic oscillations is a Brownian particle attached to a spring
which oscillates in a fluid and is subject to thermal fluctuations. This system can be modelled
as a damped harmonic oscillator driven by white Gaussian noise, i.e. the displacement x(t) of
the particle with respect to its equilibrium satisfies the stochastic differential equation
\[ m\ddot{x} + \gamma\dot{x} + m\omega_0^2 x = \sqrt{2\gamma k_B T}\,\xi(t), \tag{1.4} \]

where m is the mass of the particle, γ is the damping coefficient, ω0 is the frequency of the
undamped oscillation, kB is Boltzmann's constant, T is the temperature and ξ(t) is Gaussian
white noise with the properties

\[ \langle \xi(t) \rangle = 0 \quad \text{and} \quad \langle \xi(t)\,\xi(t') \rangle = \delta(t - t'). \]
The power spectrum Eq. (1.2) can be calculated analytically (see Section 1.4.2) for this simple
linear system. It turns out that in the underdamped regime (ω0 > γ/2), the power spectrum
displays a peak at a nonzero frequency, as illustrated in Figure 1.4.
For a weakly-damped (ω0 ≫ γ/2) harmonic oscillator driven by thermal fluctuations the
quality factor takes the simple form
Figure 1.4: Power spectrum Sxx(f) of the system described by Eq. (1.4). The analytics from
Eq. (1.16) (solid lines) are compared to numerical simulations of Eq. (1.15) (dots). Note the
distinctive peak in the power spectrum. Parameters: ω0 = 10, D = 0.01, γ = 1, Q ≈ 10.
\[ Q = \frac{\Omega}{\gamma}, \tag{1.5} \]

where Ω = √(ω0² − (γ/2)²) is the frequency of the damped oscillation.
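The stationary variance of Eq. (1.4) can be checked against equipartition, ⟨x²⟩ = kBT/(mω0²). A minimal Euler-Maruyama sketch with m = 1 (parameter values here are illustrative, not those of Figure 1.4):

```python
import numpy as np

def simulate_thermal_oscillator(omega0=5.0, gamma=1.0, kBT=1.0,
                                dt=1e-3, n_steps=1_000_000, seed=1):
    """Euler-Maruyama integration of Eq. (1.4) with m = 1:
    x'' + gamma x' + omega0^2 x = sqrt(2 gamma kBT) xi(t)."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * gamma * kBT * dt) * rng.standard_normal(n_steps)
    x = np.empty(n_steps)
    xv, yv = 0.0, 0.0                  # position and velocity
    for i in range(n_steps):
        # both updates use the pre-step values (explicit Euler)
        xv, yv = xv + yv * dt, yv + (-gamma * yv - omega0**2 * xv) * dt + noise[i]
        x[i] = xv
    return x
```

For these parameters equipartition predicts ⟨x²⟩ = 1/25 = 0.04; a long run reproduces this up to statistical and discretization error.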
1.3.2. Noise-perturbed self-sustained oscillators
The simple linear model introduced in the previous subsection cannot capture all the features
of the stochastic oscillations shown in Section 1.1, despite showing a narrow-band peak in the
power spectrum. For example, the probability distribution of the position of the hair bundle
is bimodal, i.e. it is far from being the characteristic Gaussian distribution expected from
a linear system driven by Gaussian noise. This obstacle is relatively easy to circumvent: for
instance, a damped Brownian particle in a bistable potential, whose equations of motion are non-
linear (see e.g. (Anishchenko et al., 2006)), would be able to reproduce a bimodal probability
distribution. However, other features observed in real biophysical systems, such as stability
against perturbations of the amplitude, are more subtle and cannot be accounted for by simple
versions of these damped oscillators driven by noise.
It turns out that an appropriate class of dynamical systems that encompasses many oscillations
observed in biophysics is that of self-sustained oscillators (brief but illuminating introductions
to this topic can be found in (Pikovsky et al., 2003) and (Anishchenko et al., 2006)). Self-
sustained oscillators are active systems that are capable of producing their own long-lasting
rhythms without any external driving. This is possible due to an internal source of energy that
compensates for dissipation in the system. A fundamental property of self-sustained oscillations
is that their characteristics (e.g. amplitude, waveform, period, etc.) are completely determined
by the internal parameters of the system and do not depend on the initial conditions.
Self-sustained oscillations have a precise mathematical description in terms of non-linear au-
tonomous dynamical systems with stable limit-cycle solutions. Limit cycles are closed curves
in the phase space which are isolated, i.e. neighbouring trajectories are not closed but spiral
away (unstable limit cycle) or towards the limit cycle (stable limit cycle). They lead to periodic
trajectories and can occur only in non-linear dynamical systems of at least two dimensions
(Izhikevich, 2007; Strogatz, 2001).
To account for the stochasticity of the oscillations, the simplest approach is to add some white
noise to deterministic dynamical equations containing limit-cycle solutions, which leads to noisy
trajectories around the deterministic limit cycle, as seen in Figure 1.5 for a stochastic version
of a prototypical self-sustained oscillator, the Van der Pol oscillator. The dynamics of such a
system is governed by the second-order differential equation
\[ \ddot{x} - \mu(1 - x^2)\dot{x} + x = \sqrt{2D}\,\xi(t), \]

which can be rewritten as the system of first-order differential equations

\[ \dot{x} = y, \qquad \dot{y} = \mu(1 - x^2)y - x + \sqrt{2D}\,\xi(t), \tag{1.6} \]

where √(2D) ξ(t) is Gaussian white noise with intensity D. If D = 0 this system contains a stable
limit cycle for μ > 0 (Anishchenko et al., 2006).
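A minimal Euler-Maruyama sketch of Eq. (1.6) (our own illustrative code; for D = 0 and μ = 1 the trajectory settles onto the deterministic limit cycle, whose amplitude is close to 2):

```python
import numpy as np

def van_der_pol(mu=1.0, D=0.0, dt=1e-3, n_steps=100_000,
                x0=0.1, y0=0.0, seed=2):
    """Euler-Maruyama integration of the (noisy) Van der Pol system Eq. (1.6).
    Returns the trajectory as an (n_steps, 2) array of (x, y) values."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    x, y = x0, y0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        x, y = x + y * dt, y + (mu * (1.0 - x * x) * y - x) * dt + noise[i]
        traj[i] = x, y
    return traj
```

Setting D > 0 (e.g. D = 0.1) yields the noisy trajectories wiggling around the deterministic cycle, as in Figure 1.5.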
Figure 1.5: Comparison between the trajectories of a deterministic and a noise-perturbed self-
sustained oscillator. The time evolution of the position x and the velocity ˙x = y of a Van der
Pol oscillator Eq. (1.6) with µ = 1 are shown in (a). The color code is the same as in (b),
where the same trajectories are displayed in the phase plane (x, y). After an initial transient the
system converges to the limit cycle in the deterministic case (black line). Adding some noise to
the system leads to noisy trajectories (red line) following closely the deterministic limit cycle.
1.3.3. Noise-driven excitable systems
Excitable systems are a broad class of systems characterized by possessing a stable “rest” state
and unstable “excited” (“firing”) and “refractory” states (Lindner et al., 2004). A strong enough
external perturbation can force the system to leave the resting state and undergo a stereotypical
excursion in phase space (see Figure 1.6b) through the firing and the refractory states before
coming back to rest. The underlying mathematical description is a dynamical system close
to a bifurcation to a limit-cycle. In the following we introduce the main ingredients of clas-
sical excitability in a stochastic neuron model, the FitzHugh-Nagumo system. We follow the
presentation of (Lindner et al., 2004).
A common form of the stochastic FitzHugh-Nagumo system is

\[ \epsilon\,\dot{x} = x - x^3 - y, \qquad \dot{y} = \gamma x - y + b + \sqrt{2D}\,\xi(t), \tag{1.7} \]
where x and y are a voltage-like and a recovery-like variable, respectively. In the neural context
ε ≪ 1, hence x can be regarded as a fast variable and y as a slow variable. The system is
driven by white Gaussian noise √(2D) ξ(t) of intensity D. The parameters b and γ determine the
intersection between the x and y nullclines, i.e. the cubic curve and the straight line that can be
observed in Figure 1.6b, respectively. In the excitable regime the intersection point is a stable
fixed point on the left branch of the cubic nullcline, which corresponds to the resting state of
the system. If unperturbed, the system stays at this stable fixed point. However, the central
branch of the cubic nullcline acts here as an effective threshold: a sufficiently strong external
perturbation can kick the system over this central branch leading to a large excursion of the
state variables on the phase plane (“firing”, i.e. travel of the phase point through the regions
labelled as “self-excitatory” and “active” in Figure 1.6b). After a refractory state, the system
comes back to the resting state, where, if noise is present, it may be perturbed again to the
firing state. In this way, a random sequence of action potentials or pulses is generated. Traces
of these stochastic oscillations are shown in Figure 1.6a.
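A minimal Euler-Maruyama sketch of Eq. (1.7) (our own illustrative code). With D = 0 and the parameters of Figure 1.6 the system relaxes to the stable resting state, the real root of x³ + 0.5x + 0.6 = 0 (x* ≈ −0.65); with D > 0 the same code produces the noise-driven spikes:

```python
import numpy as np

def fitzhugh_nagumo(eps=0.01, gamma=1.5, b=0.6, D=0.0, dt=1e-4,
                    n_steps=100_000, x0=-1.0, y0=0.0, seed=3):
    """Euler-Maruyama integration of the FitzHugh-Nagumo system Eq. (1.7).
    Returns the time series of the fast (voltage-like) variable x."""
    rng = np.random.default_rng(seed)
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    x, y = x0, y0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        # fast variable: eps * dx/dt = x - x^3 - y  (note dt must be << eps)
        x, y = x + (x - x**3 - y) * dt / eps, \
               y + (gamma * x - y + b) * dt + noise[i]
        xs[i] = x
    return xs
```

Because of the time-scale separation ε ≪ 1, the explicit scheme requires dt ≪ ε for stability.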
Figure 1.6: Sample trajectories of a noise-driven excitable system in the excitable regime for
different values of the noise. (a) Sample time evolution of the voltage-like variable x of the
FitzHugh-Nagumo system Eq. (1.7) for D = 0.1 (lower panel) and D = 0.01 (upper panel).
(b) Trajectory for D = 0.01 in the phase plane (x, y) and x- (black dashed line) and y- (black
solid line) nullclines; the stable resting state at the intersection of the two nullclines is also
indicated (thick black dot). Far-reaching excursions in the phase plane correspond to spikes in
the trace of x. The occurrence of spikes is more likely for larger values of the noise D as seen
in (a). Relevant states of the system are indicated according to (Lindner et al., 2004). Other
parameters: γ = 1.5, ε = 0.01 and b = 0.6.
1.3.4. Integrate-and-fire models
It is possible to reduce the dimensions in the description of a limit cycle by choosing an appro-
priately defined phase variable (Pikovsky et al., 2003). The state of oscillation is then given by
a single equation for the phase dynamics.
Let us consider the so-called integrate-and-fire model (Burkitt, 2006b) describing the firing of
spiking neurons
\[ \dot{v} = f(v) + \eta(t), \qquad \text{if } v(t_i) = v_{\text{th}}: v \to v_{\text{res}}, \tag{1.8} \]
where v is the membrane voltage of the neuron, η(t) is a stochastic process accounting for sources
of noise in the system and f(v) is a function describing the voltage dynamics. The model is
equipped with a fire-and-reset rule: when the voltage v hits a threshold vth, v is reset to vres
and simultaneously the time ti at which this event occurred is registered. The system is then
said to have fired an action potential or "spike". This is illustrated in Figure 1.7. It must be
emphasized that the relevant output of the system is the sequence of spiking times {ti}.
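As an illustration, a sketch of Eq. (1.8) for the perfect integrate-and-fire model, f(v) = μ, driven here by white noise for simplicity (the colored-noise case is the subject of Section 2); the mean interspike interval is (vth − vres)/μ:

```python
import numpy as np

def pif_spike_times(mu=1.0, D=0.1, vth=1.0, vres=0.0,
                    dt=1e-3, T=1000.0, seed=4):
    """Perfect integrate-and-fire neuron, Eq. (1.8) with f(v) = mu, driven
    by white noise of intensity D; returns the spike times {t_i}."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n_steps)
    v = vres
    spikes = []
    for i in range(n_steps):
        v += mu * dt + noise[i]
        if v >= vth:               # fire-and-reset rule
            spikes.append(i * dt)
            v = vres
    return np.array(spikes)
```

The relevant output is the spike train; second-order statistics of the interspike intervals then determine the power spectrum of the model.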
Figure 1.7: Stochastic integrate-and-fire neuron model. The voltage time-course of a leaky
integrate-and-fire neuron driven by white noise is displayed in the lower panel. Whenever the
voltage v(t) hits the threshold vth (red dashed line), a spike (black vertical arrow) is formally
added to the output spike train x(t) at time ti (upper panel) and v is reset to the reset voltage
vres (black dashed line). The noise leads to variability in the spiking times {ti}.
In this setting the variable v can be regarded as a phase-like variable taking values from vres
to vth in a circle, with the fire-and-reset rule connecting both ends of the line. The fire-and-reset
rule is reminiscent of the firing and recovery states of an excitable system. Indeed, the stochastic
integrate-and-fire model is a 1D caricature of a noise-driven excitable system (see Section 1.3.3)
if there is a stable fixed point v < vth for which f(v) = 0. If, on the contrary, f(v) > 0 for all v < vth,
the model can be regarded as a 1D caricature of a noise-perturbed limit cycle (see Section 1.3.2).
1.3.5. Noise-induced fluctuations in a heteroclinic system
Underlying deterministic limit-cycle dynamics is not a necessary condition to obtain noisy limit-
cycle behaviour. As an example, we discuss in the following a heteroclinic attractor (Krupa,
1997) perturbed by weak noise (Stone and Holmes, 1990; Bakhtin, 2010a).
Let us consider the deterministic system
\[ \dot{y}_1 = \cos(y_1)\sin(y_2) + \alpha\sin(2y_1), \qquad
   \dot{y}_2 = -\sin(y_1)\cos(y_2) + \alpha\sin(2y_2), \tag{1.9} \]
with α being a stability parameter. The system is 2π-periodic in y1 and y2, so let us focus on
the central region [−π, π] × [−π, π]. The corresponding phase portrait is shown in Figure 1.8a.
It contains a chain of four saddle points which are connected to each other by heteroclinic
trajectories, forming what is known as a heteroclinic cycle (Shaw et al., 2012). If this heteroclinic
cycle is attracting (α ∈ (0, 0.5)), trajectories that start in its interior get closer and closer
to the cycle, but with increasingly long return times (Figure 1.8c). Hence the trajectory along the
cycle has "infinite" period and no well-defined oscillation emerges.
However, let us have a look now at what happens when white noise of intensity D is added
to the system:
\[ \dot{Y}_1 = \cos(Y_1)\sin(Y_2) + \alpha\sin(2Y_1) + \sqrt{2D}\,\xi_1(t), \qquad
   \dot{Y}_2 = -\sin(Y_1)\cos(Y_2) + \alpha\sin(2Y_2) + \sqrt{2D}\,\xi_2(t), \tag{1.10} \]

where ξ1,2 are independent Gaussian white noise sources satisfying ⟨ξi(t)ξj(t′)⟩ = δ(t − t′)δij.
Sample trajectories are shown in Figures 1.8b and 1.8d. We add reflecting boundary conditions
on the domain −π/2 ≤ {y1, y2} ≤ π/2 in order to “trap” the trajectory within the heteroclinic
cycle.
What we observe in Figure 1.8b is that the trajectory resembles what can be regarded as
noisy limit-cycle behaviour. As in the deterministic case, the phase point tends to approach the
heteroclinic cycle, but now the noise keeps “kicking” it away from it so that the oscillations are
sustained (see Figure 1.8d). Thus, noise induces finite-period limit-cycle behaviour. Further-
more, the noise intensity D determines the mean period of oscillation along the cycle. This
mean period of oscillation does not emerge because of an underlying deterministic limit-cycle
dynamics but because of the sensitivity of the system to perturbations in the vicinity of the sad-
dle points. A similar phenomenon of selection of time scales is found in homoclinic attractors
perturbed by weak noise (Stone and Holmes, 1990) and in 2D systems close to a saddle node
bifurcation (Sigeti and Horsthemke, 1989).
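A direct Euler-Maruyama sketch of Eq. (1.10) with reflecting boundaries (our own illustrative implementation; the reflection rule y → ±π − y folds excursions back into the square):

```python
import math
import numpy as np

def heteroclinic_trajectory(alpha=0.1, D=0.01, dt=1e-3, n_steps=300_000, seed=5):
    """Euler-Maruyama integration of Eq. (1.10) with reflecting boundary
    conditions on the square [-pi/2, pi/2] x [-pi/2, pi/2]."""
    rng = np.random.default_rng(seed)
    half = math.pi / 2
    noise = math.sqrt(2.0 * D * dt) * rng.standard_normal((n_steps, 2))
    y1, y2 = 0.5, 0.0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        f1 = math.cos(y1) * math.sin(y2) + alpha * math.sin(2 * y1)
        f2 = -math.sin(y1) * math.cos(y2) + alpha * math.sin(2 * y2)
        y1 += f1 * dt + noise[i, 0]
        y2 += f2 * dt + noise[i, 1]
        # reflect excursions beyond the boundaries back into the square
        if y1 > half:
            y1 = math.pi - y1
        elif y1 < -half:
            y1 = -math.pi - y1
        if y2 > half:
            y2 = math.pi - y2
        elif y2 < -half:
            y2 = -math.pi - y2
        traj[i] = y1, y2
    return traj
```

With α = 0.1 and D = 0.01 the trajectory repeatedly sweeps close to the saddles at the corners (±π/2, ±π/2), producing the noise-induced oscillation of Figure 1.8.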
1.4. Models of noise
Theoretical studies often assume that the noise ξ(t) present in the system is temporally uncor-
related (white noise) (van Kampen, 2007), i.e.
\[ \langle \xi(t)\,\xi(t') \rangle = \delta(t - t'). \tag{1.11} \]
This translates in the frequency domain into a flat power spectrum Sξξ(f) = 1, i.e. the noise
contains the same power at all frequencies. The use of white noise simplifies the analytical
description of the system (Gardiner, 2009) and it is a good approximation in certain cases.
However, noise in nature always has some finite correlation time (such that the au-
tocorrelation function acquires some non-trivial temporal structure) and one needs to consider
instead colored (temporally correlated) noise, whose power spectrum is not flat. Two examples
of (Gaussian) colored noise are explored in this thesis: low-pass filtered noise generated by an
Ornstein-Uhlenbeck process (OUP) and harmonic noise (HN). Examples of their power spectra
Figure 1.8: Noise induces oscillations in a system (b,d) which does not show limit-cycle be-
haviour in the deterministic case (a,c). (a) Phase portrait of the deterministic system governed
by Eq. (1.9). A trajectory of a phase point is shown in blue: it passes near the four distinct
saddle points (thick black dots), slowing down progressively as it gets closer to the stable hetero-
clinic cycle. From (Shaw et al., 2012). (b) Sample trajectory over the phase plane (y1, y2) for the
system described by Eq. (1.10) with reflecting boundary conditions; the initial condition y0 and
the sense of rotation are indicated. The trajectory wiggles close to the deterministic heteroclinic
cycle showing noise-induced limit-cycle behaviour. (c) Slowing transient of the deterministic
system. (d) Time evolution of the noise-induced oscillation. Parameters: α = 0.1 (all panels),
D = 0.01 (b) and (d).
are shown in Figure 1.9 together with that of white noise. The study of colored-noise driven
systems is theoretically challenging (Hänggi and Jung, 1994).
Figure 1.9: Power spectrum of white noise (black) vs two instances of colored noise: narrow-
band fluctuations (blue) and low-pass noise (red).
1.4.1. Ornstein-Uhlenbeck process
The Ornstein-Uhlenbeck process (OUP) (Uhlenbeck and Ornstein, 1930) is the Gaussian, zero-
mean stochastic process v(t) defined by the equation
\[ \tau_c \dot{v} = -v + \sqrt{2\sigma^2\tau_c}\,\xi(t), \]
where τc is the correlation time of the process, σ2 is the variance and ξ(t) is Gaussian white
noise as defined in Section 1.3.1. The intensity of the noise is related both to the correlation
time and the variance, and reads D_OU = σ²τc.
The OUP was originally introduced to describe the velocity v of a 1D Brownian particle and
it is one of the milestones of statistical mechanics. The essential mathematical feature of the
OUP is that it is described by a Langevin equation which is linear in v (this property can easily
be generalised to extend the OUP to higher dimensions, see e.g. (Risken, 1984)) and that the
coefficient D characterizing the strength of the noise does not depend on v, i.e. the noise is
additive.
As a consequence of such mathematical properties, the OUP is essentially the only process
which is stationary, Gaussian and Markovian, as stated by Doob’s theorem (van Kampen, 2007).
Moreover, its autocorrelation function C(τ) and power spectrum S(f) are well-known and read
\[ C_{\eta\eta}(\tau) = \sigma^2 e^{-|\tau|/\tau_c}, \tag{1.12} \]

and

\[ S_{\eta\eta}(f) = \frac{2\sigma^2\tau_c}{1 + (2\pi f \tau_c)^2}. \tag{1.13} \]
In other words, the OUP displays temporal exponential correlations with timescale τc. The
power spectrum is therefore a Lorentzian function centered at f = 0 by the Wiener-Khinchin
theorem Eq. (1.2). It is also convenient to define a cut-off frequency fcut-off as the frequency at
which the power spectrum decays to half of its value at f = 0, i.e. S(fcut-off) = S(0)/2.
This leads to the simple expression
\[ f_{\text{cut-off}} = \frac{1}{2\pi\tau_c}, \]

which quantifies a characteristic range of frequencies involved in the process.
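The OUP can be simulated without discretization bias using its exact conditional distribution over a step dt, v(t+dt) | v(t) ~ N(v(t) e^{−dt/τc}, σ²(1 − e^{−2dt/τc})). A sketch (our own illustrative code) that also checks Eq. (1.12):

```python
import numpy as np

def ornstein_uhlenbeck(tau_c=1.0, sigma2=1.0, dt=0.01, n_steps=500_000, seed=6):
    """Sample an OUP trajectory using the exact update rule over a step dt,
    which introduces no time-discretization bias."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau_c)
    b = np.sqrt(sigma2 * (1.0 - a * a))
    noise = b * rng.standard_normal(n_steps)
    v = np.empty(n_steps)
    v[0] = np.sqrt(sigma2) * rng.standard_normal()   # start in the steady state
    for i in range(1, n_steps):
        v[i] = a * v[i - 1] + noise[i]
    return v
```

The sample variance approaches σ² and the autocorrelation at lag τc approaches σ²/e, as dictated by Eq. (1.12).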
1.4.2. Harmonic noise
In the following we call harmonic noise (Schimansky-Geier and Z¨ulicke, 1990) a Gaussian, zero-
mean stochastic process x(t) that is governed by the linear dynamical equation

\[ \ddot{x} + \gamma\dot{x} + \omega_0^2 x = \sqrt{2D}\,\xi(t), \tag{1.14} \]

where γ is the friction coefficient, D is the intensity of the driving noise, ω0 is the frequency of
the undamped oscillation and ξ(t) is the usual Gaussian white noise. Eq. (1.14) can
also be rewritten as a system of Langevin equations
also be rewritten as a system of Langevin equations
\[ \dot{x} = y, \qquad \dot{y} = -\gamma y - \omega_0^2 x + \sqrt{2D}\,\xi(t), \tag{1.15} \]
so that the joint process {x(t), y(t)} is Markovian.
The study of white-noise driven damped harmonic oscillators goes back to the early works
of Chandrasekhar (1943) and Wang and Uhlenbeck (1945), in which analytical expressions for
the underdamped regime, i.e. ω0 > γ/2, were obtained for both the power spectrum (using
Rice’s method),
\[ S_{xx}(f) = \frac{2D}{(2\pi f \gamma)^2 + \left[ (2\pi f)^2 - \omega_0^2 \right]^2}, \tag{1.16} \]
and the autocorrelation function,
\[ C_{xx}(\tau) = \frac{D}{\omega_0^2 \gamma}\, \exp\!\left(-\frac{\gamma}{2}|\tau|\right)
   \left[ \cos(\Omega\tau) + \frac{\gamma}{2\Omega} \sin(\Omega|\tau|) \right], \tag{1.17} \]

where Ω = √(ω0² − (γ/2)²) ≥ 0. This last expression is obtained by calculating the Fourier
transform of S(f) according to the Wiener-Khinchin theorem Eq. (1.2), which requires contour
integration methods. From Eq. (1.17) we can deduce the variance of the process, σ²_HN = Cxx(0) =
D/(ω0² γ).
In the underdamped regime, harmonic noise is a model of stochastic oscillations (with quality
factor given by Eq. (1.5) in the weakly-damped limit), with the power spectrum exhibiting two
peaks at ωp = 2πfp = ±√(ω0² − γ²/2) and a local minimum at f = 0. This contrasts with the
Ornstein-Uhlenbeck case, where the power spectrum decays monotonically to 0 with a cut-off
frequency determined by the inverse of the correlation time of the noise.
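The stated properties of Eq. (1.16) are easy to verify numerically: its maximum lies at 2πfp = √(ω0² − γ²/2), and by the Wiener-Khinchin theorem its integral over all frequencies equals the variance D/(ω0²γ). A sketch (parameter values are illustrative):

```python
import numpy as np

def harmonic_noise_spectrum(f, D=0.1, gamma=1.0, omega0=2.0):
    """Analytical power spectrum of harmonic noise, Eq. (1.16)."""
    w = 2.0 * np.pi * f
    return 2.0 * D / ((w * gamma) ** 2 + (w**2 - omega0**2) ** 2)
```

Evaluating the formula on a dense grid recovers the peak position and, since the spectrum is even in f, twice its integral over f > 0 gives the variance 0.1/(4·1) = 0.025.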
1.5. Aim and outline of the thesis
The theoretical treatment of stochastic oscillations is challenging. The aim of this thesis is
to develop and apply analytical, semianalytical and numerical methods to describe the power
spectrum and the autocorrelation function of such oscillations. Within the enormous landscape
of models available, we focus on two specific types of non-linear stochastic oscillators. Each
model allows us to explore the effect of a different aspect of the noise on the output of the
system.
In Section 2 an integrate-and-fire model driven by temporally correlated noise is studied. Two
different types of noise are discussed separately there: low-pass filtered noise, modelled as an
Ornstein-Uhlenbeck process; and harmonic noise, which corresponds to the interesting case of a
non-linear stochastic oscillator driven by a stochastic oscillation. The effect of the parameters
characterizing those driving fluctuations on certain features of the output power spectrum of
the spike train will be investigated by numerical simulations and analytical formulae.
In Section 3 we study a paradigmatic example of a heteroclinic system that only displays
oscillations in the presence of noise, as introduced in Section 1.3.5. The system is investigated
through a numerical and a semianalytical technique (the method of Matrix Continued Fractions),
from which we obtain results in a relevant noise regime for the steady-state distribution, the
steady-state probability current and the power spectrum. Useful measures are extracted from
the power spectrum that characterize the spectral features of the system. In the range of noise
values where the previous techniques are not efficient, namely the small noise limit, an analytical
approximation for the power spectrum is developed.
2. Integrate-and-fire neuron driven by colored noise
Neurons, the fundamental components of the nervous system, are electrically excitable systems
connected to each other forming complex networks. Information is encoded and transmitted
across these networks in the form of short electrical pulses (approximately 100 mV in amplitude
and a few ms in duration), also called "spikes" or action potentials. These pulses are
generated when the electrical potential across the membrane of a neuron (referred to as mem-
brane potential) exceeds a certain threshold, which can occur even in the absence of any sensory
stimulus due to various sources of noise influencing the system (spontaneous firing). While the
subthreshold dynamics of the membrane potential is shaped by the inputs that the neuron re-
ceives from other neurons, the shape of an action potential is very stereotypical and does not
change as it propagates along the neuron. This suggests that the information is not encoded in
the form of the pulse itself, but rather in the number and timing of spikes (Rieke et al., 1999).
A sequence of such stereotyped events is called a spike train (Gerstner and Kistler, 2002).
The biophysical mechanism of generation of an action potential in a single neuron is well
captured by conductance-based models such as the Hodgkin and Huxley model and its two-
dimensional simplifications, e.g. the FitzHugh-Nagumo model (Izhikevich, 2007; Gerstner and
Kistler, 2002). However, these non-linear, higher-dimensional models are very difficult to treat
analytically and therefore often not convenient to make predictions about the behaviour of the
system. Hence, the problem calls for simpler models that can still capture realistic features
of neural behaviour. One of the most successful models in that respect is the integrate-and-
fire model (Burkitt, 2006a; Gerstner and Kistler, 2002; Brette, 2015). This phenomenological
model has been useful to understand certain aspects of single-neuron coding (Vilela and Lindner,
2009), and its simplicity also makes it a very popular choice in numerical and analytical network
studies, see e.g. (Brunel and Hakim, 1999; Brunel, 2000; Wieland et al., 2015). A very detailed
account of the physiology of neurons can be found, for example, in the book by Kandel et al.
(2000).
In this section we investigate the effect of temporally-correlated noise on the autocorrelation
function and the power spectrum of a perfect integrate-and-fire (PIF) neuron. After a brief
introduction to the PIF model, we discuss the role of colored noise in neural systems and
present an analytical approach introduced by Schwalger et al. (2015) for a PIF neuron driven
by weak Gaussian colored noise. We apply this approach to two different types of colored-noise
driving: an Ornstein-Uhlenbeck process and harmonic noise. The results are then compared to
stochastic simulations.
2.1. Integrate-and-fire models
In integrate-and-fire models the state of the neuron is solely characterized by its membrane
potential (denoted here as v), i.e. they are one-dimensional models. In this respect, the spatial
extension of the neuron is neglected and effectively one deals with a point neuron.
The second fundamental simplifying assumption is based on the fact that the shapes of action
potentials are stereotypical and their timing is what matters, which allows us to focus on the
subthreshold dynamics of the membrane potential. Indeed, in integrate-and-fire models the
biophysical mechanism of spike generation (related to the activation/inactivation of voltage-
gated ion channels) is neglected so that the action potentials are not dynamically generated
from the model but rather added ad hoc as formal events at a certain firing time ti to an output
spike train x(t) according to a fire-and-reset rule: whenever v crosses a threshold voltage vT , a
spike is generated and the voltage is reset to a value vR. The subthreshold dynamics continues
after a certain refractory period τref (τref = 0 throughout this thesis). Such a fire-and-reset
rule introduces a very strong non-linearity in the system, but allows for a reduction of the
dimensionality of the dynamics.
The equation describing a general (noisy) integrate-and-fire model is
v̇ = f(v) + η(t),   if v = vT : v → vR,
where η(t) is a stochastic process accounting for sources of noise in the system (see Section 2.2)
and f(v) is a function describing the subthreshold dynamics that can be extracted from experi-
mental data by using the method of dynamic I −V curves (Badel et al., 2008). The function f(v)
determines the particular type of integrate-and-fire model: common choices are f(v) = µ − v
(leaky integrate-and-fire neuron; a voltage trace of such a model can be seen in Figure 1.7),
f(v) = µ + v² (quadratic integrate-and-fire neuron) and the perfect integrate-and-fire (PIF)
model, with f(v) = µ = const,

v̇ = µ + η(t),   if v = vT : v → vR,   (2.1)
where µ is the mean input current.
Each of these models has a certain range of applicability (a comparison of the performance
of some stochastic IF models is presented in (Vilela and Lindner, 2009)), and in particular
the PIF model is the canonical choice to describe a so-called tonically firing neuron (such as
some sensory cells with a high firing rate), in which the mean input current µ is so strong that
the voltage-dependence of the subthreshold dynamics can be neglected (Schwalger et al., 2013;
Bauermeister et al., 2013). In that situation, the firing of the neuron is pacemaker-like and very
regular.
A remarkable feature of the PIF model is that since the evolution of the membrane voltage
between spikes does not depend on the voltage itself, the output firing rate can be shown to be
always r0 = µ/(vT − vR), no matter what the temporal correlations of the noise η(t) are (Bauermeister
et al., 2013). For the remainder of the chapter we will focus on the PIF model Eq. (2.1).
A detailed review on integrate-and-fire models can be found in (Burkitt, 2006b).
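This rate result is easy to check numerically. The sketch below (our own illustrative implementation, with weak white noise as a special case of the statement; σ and the time step are arbitrary choices) exploits the fact that the PIF reset v → vR merely subtracts vT − vR, so the spike count equals the number of multiples of vT − vR reached by the freely integrated input:

```python
import numpy as np

# Sketch: PIF neuron, Eq. (2.1), with weak white-noise input (illustrative
# sigma). Because the reset only subtracts (vT - vR), the number of spikes
# equals the number of multiples of (vT - vR) reached by the free integral.
rng = np.random.default_rng(0)
mu, vT, vR, sigma = 1.0, 1.0, 0.0, 0.1
dt, T = 1e-3, 1000.0
n = int(T / dt)

x = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
n_spikes = int(np.max(x) // (vT - vR))
r_est = n_spikes / T    # should stay close to r0 = mu/(vT - vR) = 1
```

The measured rate fluctuates only weakly around µ/(vT − vR), independently of the noise realization.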
2.2. Colored noise in neural systems
The term η(t) in Eq. (2.1) models the influence of the noise on the dynamics of the neuron.
Such a term is necessary since in vitro and in vivo recordings of neural spiking display high
variability. This variability is not due to measurement noise, but it is inherent to the neural
system.
In general, there are three main sources of noise influencing the membrane potential (Gerstner
and Kistler, 2002): (i) Channel noise arising from a finite population of ion channels due to the
random nature of their opening/closing events; (ii) Synaptic unreliability due to stochastic
release of neurotransmitter; (iii) (Quasi-)random arrival times of synaptic input. (i) is intrinsic
to the neuron, whereas (ii) and (iii) are associated with external synaptic input. However, noisy
integrate-and-fire neuron models do not clearly distinguish between intrinsic and external sources
of noise, due to their representation of the neuron as a point neuron (Burkitt, 2006b), and all
contributions are subsumed under the term η(t).
Focusing on η(t) as a model of the noisy synaptic input, theoretical studies have frequently
assumed that it is Poissonian, i.e. uncorrelated in time. By using a diffusion approximation one
can then model the synaptic input as Gaussian white noise, which simplifies considerably the
analytical treatment of the problem. However, realistic synaptic inputs have temporal structure,
due to a plethora of phenomena: bursting, refractoriness, etc. (Schwalger et al., 2015). Because
the white noise approximation cannot account for such temporal correlations, the study of
colored noise (i.e. temporally-correlated noise, see Section 1.3.4) in neural systems is currently
an active topic of research for which analytical results are still required.
2.3. Analytical approach to colored noise
The analytical treatment of dynamical systems driven by colored noise presents various compli-
cations (Hänggi and Jung, 1994). In particular, the colored-noise driving renders these systems
non-Markovian, so that standard techniques such as the Fokker-Planck approach cannot be di-
rectly applied. Nevertheless, for certain cases there are still several methods one can resort to,
such as Markovian embedding (a comprehensive presentation can be found in (Hänggi and Jung,
1994; Luczka, 2005)).
A sophisticated approach using a Markovian embedding has been recently put forward by Schwalger
et al. (2015) to calculate the interspike-interval and higher-order interval statistics of a PIF
neuron driven by weak Gaussian noise, i.e.

v̇ = µ + σ η(t),   if v(ti) = vT : v → vR,

where σ² is the variance of the input noise and η(t) is a zero-mean, unit-variance Gaussian process
with autocorrelation function Cin(τ) and power spectrum Sin(f) = ∫_{−∞}^{+∞} dτ e^{i2πfτ} Cin(τ). The
approach introduced by Schwalger et al. (2015) can account for many different types of colored-noise
driving η(t), as long as its correlation function can be approximated by a sum of damped
oscillations (or, in the frequency domain, if the power spectrum can be represented by a sum of
Lorentzian functions). This condition is satisfied by a large class of processes, and in particular
by the two that are studied in this thesis: an Ornstein-Uhlenbeck process and harmonic noise.
Finally, the assumption that the noise must be weak is expressed in terms of a small parameter
ε = σ/µ ≪ 1.
The main result of (Schwalger et al., 2015) is an explicit formula for the n-th order interval¹
densities Pn(t),

Pn(t) = [r0 / (2√(4π ε² h³(t)))] exp[−(r0 t − n)² / (4 ε² h(t))]
        × { [(n − r0 t) g(t) + 2h(t)]² / (2h(t)) − 2ε² [g²(t) − 2h(t) Cin(t)] },   (2.2)

where r0 = µ/(vT − vR) is the mean firing rate of a PIF neuron and g(t) and h(t) are given by

g(t) = r0 ∫₀ᵗ dt′ Cin(t′)   (2.3)

and

¹ n-th order intervals are sums of n subsequent interspike intervals. Details on these and other statistics of
neural output can be found in (Gabbiani and Koch, 1998).
h(t) = r0 ∫₀ᵗ dt′ g(t′).   (2.4)
In other words, Eq. (2.2) gives the n-th order interval densities of the output spike train from
knowledge of only the normalized autocorrelation function of the weak input noise, Cin(τ), and two
integrals over it, g(t) and h(t), which can be calculated analytically if Cin(τ) is simple enough².
The density Pn(t) can be related to the autocorrelation function of the output point process
by using (Cox and Lewis, 1966; Gabbiani and Koch, 1998)

Cout(τ) = r0 [ δ(τ) + Σ_{n=1}^∞ Pn(|τ|) − r0 ].   (2.5)

After substituting Eq. (2.2) into the above equation, an exact expression for the autocorrelation
function of a PIF neuron driven by a weak Gaussian process with arbitrary temporal structure
is obtained.
Ornstein-Uhlenbeck noise   Substituting Eq. (1.12) into Eq. (2.3) and Eq. (2.4) we find

gOU(t) = r0 τc [1 − exp(−t/τc)],   hOU(t) = r0² τc² [ t/τc + exp(−t/τc) − 1 ],   (2.6)

which are required to compute the autocorrelation function of the output spike train of a PIF
neuron driven by colored noise, as described above.
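Eqs. (2.2) and (2.6) can be combined into a short numerical sketch. The snippet below is our own illustrative implementation (it assumes the unit-variance OU autocorrelation Cin(τ) = exp(−|τ|/τc) and the form of Eq. (2.2) as reproduced above); it evaluates the first-order interval density P1(t) and checks that, for weak noise, it is normalized and peaked near the mean interspike interval 1/r0:

```python
import numpy as np

r0, tau_c, eps = 1.0, 10.0, 0.1       # weak noise: eps = sigma/mu

def g_ou(t):
    return r0 * tau_c * (1.0 - np.exp(-t / tau_c))                       # Eq. (2.6)

def h_ou(t):
    return r0**2 * tau_c**2 * (t / tau_c + np.exp(-t / tau_c) - 1.0)     # Eq. (2.6)

def P_n(t, n):
    """n-th order interval density of Eq. (2.2) for OU input noise."""
    g, h, Cin = g_ou(t), h_ou(t), np.exp(-t / tau_c)
    gauss = r0 / (2.0 * np.sqrt(4.0 * np.pi * eps**2 * h**3)) \
        * np.exp(-(r0 * t - n)**2 / (4.0 * eps**2 * h))
    bracket = ((n - r0 * t) * g + 2.0 * h)**2 / (2.0 * h) \
        - 2.0 * eps**2 * (g**2 - 2.0 * h * Cin)
    return gauss * bracket

t = np.linspace(1e-4, 5.0, 100_000)
p1 = P_n(t, 1)
norm = np.sum(0.5 * (p1[1:] + p1[:-1]) * np.diff(t))   # ~ 1 for weak noise
t_peak = t[np.argmax(p1)]                              # ~ 1/r0
```

For ε = 0.1 the density integrates to one up to corrections of order ε², as expected from the weak-noise assumption.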
Harmonic noise   Using Eq. (1.17), we can proceed as in the Ornstein-Uhlenbeck case and
calculate the functions

gHN(t) = [r0 / (Ω² + (γ/2)²)] { γ + exp(−(γ/2)t) [ (sin(Ωt)/Ω) (Ω² − (γ/2)²) − γ cos(Ωt) ] },   (2.7)

and

hHN(t) = [r0² / (Ω² + (γ/2)²)] { γt + (Ω² − (3/4)γ²)/(Ω² + (γ/2)²)
         − [exp(−(γ/2)t)/(Ω² + (γ/2)²)] [ cos(Ωt) (Ω² − (3/4)γ²) + (sin(Ωt)/Ω) ((3/2)γΩ² − (γ/2)³) ] }.   (2.8)
2.4. Results for a PIF model driven by colored noise
For convenience we have chosen µ = 1, vT = 1 and vR = 0, which leads to a firing rate of
r0 = 1. The accuracy of the analytical approximations is controlled by the small parameter
ε = σ/µ, where σ is the standard deviation of the input noise; its square, the variance σ², is
a parameter in our simulations.

² Note that Cin(τ) is the autocorrelation function of a unit-variance stochastic process, i.e. Cin(0) = 1. One
must take this normalization into account when calculating Eq. (2.3) and Eq. (2.4).
For each type of noise driving the system, an approximation for the analytical autocorrelation
function has been calculated by truncating the infinite sum in Equation (2.5) after N terms.
The result has then been (numerically) Fourier-transformed in order to obtain a semianalytical
approximation for the power spectrum. A truncation parameter of N = 100 is sufficient to
reproduce accurately the relevant part of the autocorrelation function and the first peak of the
power spectrum for all the cases explored here. The low frequencies, by contrast, are related
to the long-time behaviour of the autocorrelation function, which is affected by the truncation.
We therefore choose a high truncation parameter, N = 10000, so that we can compare the
power spectrum over the whole frequency range available from the stochastic simulations.
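The numerical Fourier-transform step can be sketched as follows (a minimal implementation of our own, not the exact code used for the figures): the delta peak in Eq. (2.5) contributes a constant r0 to the spectrum, and the continuous part is transformed with a trapezoid-corrected FFT. As a sanity check we transform a known pair, C(τ) = exp(−|τ|/τc), whose spectrum is a Lorentzian:

```python
import numpy as np

def spectrum_from_autocorr(C, dt, r0):
    """Spectrum from samples C[k] ~ C_cont(k*dt), k >= 0, of the continuous
    part of the autocorrelation. The delta peak r0*delta(tau) in Eq. (2.5)
    transforms to a constant r0; the rest uses
    S(f) = r0 + 2 Re int_0^inf dtau C(tau) exp(i 2 pi f tau)."""
    F = np.fft.rfft(C)
    S = r0 + 2.0 * dt * (F.real - 0.5 * C[0])   # trapezoid end correction
    f = np.fft.rfftfreq(len(C), d=dt)
    return f, S

# sanity check: C(tau) = exp(-tau/tau_c) has the Lorentzian spectrum
# 2*tau_c / (1 + (2*pi*f*tau_c)^2); the delta term is absent here (r0 = 0).
tau_c, dt, K = 1.0, 0.01, 1 << 17
tau = dt * np.arange(K)
f, S = spectrum_from_autocorr(np.exp(-tau / tau_c), dt, r0=0.0)
```

The recovered spectrum starts at S(0) ≈ 2τc and decays monotonically, as the Lorentzian should.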
2.4.1. PIF neuron driven by Ornstein-Uhlenbeck noise
Most of the analytical results that are found in the literature concerning colored-noise driving in
neurons involve exponentially-correlated noise (Schwalger et al., 2015), which can be generated
by an OUP at the cost of adding one degree of freedom to the dynamics (Markovian embedding).
However, analytical tractability is not the only reason to choose exponentially-correlated noise
to drive the system: it turns out that in many cases filtered synaptic dynamics and slow intrinsic
channel noise can be well approximated by an OUP with an adequate correlation time (Fisch
et al., 2012).
The dynamics of the PIF neuron driven by an OUP is governed by the following two-
dimensional set of stochastic differential equations:

v̇(t) = µ + η(t),   if v(ti) = vT : v → vR,
η̇(t) = −η(t)/τc + √(2σ²OU/τc) ξ(t).   (2.9)
A sample trajectory is shown in Figure 2.1a. Large values of the input noise lead to an increased
probability of firing, i.e. spikes are more closely spaced in time.
[Figure: panels (a) "PIF neuron driven by an OUP" and (b) "PIF neuron driven by harmonic noise"; upper axes v(t) with levels vR, vT, lower axes η(t) resp. x(t) versus t.]
Figure 2.1: Illustration of the PIF driven by (a) an OUP and (b) harmonic noise: the upper
panel represents a membrane voltage trace of a PIF model that yields a spike whenever it hits
the threshold vT. The lower panel shows a sample trajectory of the input noise. Note the
increased firing probability at higher values of the driving noise processes η(t) and x(t). Here
the size of the spikes is arbitrarily set for illustration purposes.
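A minimal simulation of Eq. (2.9) might look as follows (illustrative parameters and integration scheme of our own choosing): the OU input is advanced with its exact one-step propagator, and spike times are read off the freely integrated input, since the PIF reset merely subtracts vT − vR (for this weak noise the free integral is strictly increasing):

```python
import numpy as np

def simulate_pif_ou(mu=1.0, vT=1.0, vR=0.0, tau_c=10.0, var_ou=0.01,
                    dt=5e-3, T=500.0, seed=1):
    """Sketch of a stochastic simulation of Eq. (2.9); returns spike times."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    a = np.exp(-dt / tau_c)                  # exact OU decay per step
    s = np.sqrt(var_ou * (1.0 - a * a))      # matching update amplitude
    eta = np.empty(n)
    eta[0] = np.sqrt(var_ou) * rng.standard_normal()   # stationary start
    xi = rng.standard_normal(n - 1)
    for i in range(n - 1):
        eta[i + 1] = a * eta[i] + s * xi[i]
    x = np.cumsum((mu + eta) * dt)           # voltage without resets
    k = np.arange(1, int(np.max(x) // (vT - vR)) + 1)
    return dt * np.searchsorted(x, k * (vT - vR))

spike_times = simulate_pif_ou()
rate = len(spike_times) / 500.0   # should stay close to r0 = 1
```

As stated above, the mean rate remains r0 = µ/(vT − vR) regardless of the temporal correlations of the input.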
17
2 INTEGRATE-AND-FIRE NEURON DRIVEN BY COLORED NOISE
Here the important parameters of the system are the correlation time, τc, which tells us how
slow the fluctuations of the input noise are; the variance of the noise, σ2; and the firing rate of
the PIF neuron, r0, which has been set to 1, as explained above.
What follows is a comparison between numerical simulations of Eq. (2.9) and (semi)analytical
results obtained by implementing the formulas presented in Section 2.3. The effect of τc and
σ² on the autocorrelation function and the power spectrum is explored. The coherence of the
oscillations is quantified by the quality factor Q, which is measured on the first peak of the
output power spectrum from the semianalytical results.
Effect of the variance σ²OU   Here the correlation time (and therefore also the cut-off frequency)
is fixed to an intermediate value τc = 10 and several simulations are performed with
increasing σ²OU. The aim is to observe at which point the analytical formulas cease to provide
a good approximation for the autocorrelation function and the power spectrum obtained from
the numerical simulations. Results for the autocorrelation function and the power spectrum are
summarized in Figure 2.2. For relatively small σ²OU the system shows rather periodic firing, and
peaks in the power spectrum at the firing rate r0 and its higher harmonics are observed.
[Figure: panels (a) C(τ) versus τ and (b) S(f) versus f, curves for σ² = 0.001, 0.01, 0.1, 1.]
Figure 2.2: Autocorrelation function (a) and power spectrum (b) of a PIF neuron driven by
an OUP for different values of the variance of the OUP, σ²OU. The correlation time of the OUP
has been set to τc = 10. Results from numerical simulations of Eq. (2.9) (dots) are compared to
(semi)analytical results (solid lines) from the approach outlined in Section 2.3. The infinite sum
is cut off after N = 10000 terms. Other parameters: µ = 1.0 and vT − vR = 1.0, leading to
r0 = 1.0.
Figure 2.2b shows that the analytical approximation breaks down for σ²OU = 1, which is in
any case expected, since in principle it is only valid for weak noise. As the variance (and thus
the noise intensity DOU = σ²OU τc) increases, more and more power is added in the low-frequency
range, where the OUP contains most of its power. It is remarkable that the Lorentzian structure
of the input noise is preserved in that range, an observation also made in (Middleton et al.,
2003). At higher frequencies, increasing the variance broadens the peaks in the power spectrum
until it completely destroys those at higher harmonics of the firing rate r0 = 1. The quality
factor Q of the stochastic oscillations decreases as σ²OU is increased, as seen in Figure 2.4a.
Notably, the decrease seems to follow a power law over the range of noise variances explored.
Effect of the correlation time τc   Here the variance of the noise is fixed to σ²OU = 0.01, a value
at which the analytical approximations are still expected to reproduce well the results of the
numerical simulations (see above). Several simulations are performed for increasing τc, and the
results are summarized in Figure 2.3.
[Figure: panels (a) C(τ) versus τ and (b) S(f) versus f, curves for τc = 0.1, 1, 10, 100.]
Figure 2.3: Autocorrelation function (a) and power spectrum (b) of a PIF neuron driven by
an OUP for different values of the correlation time of the OUP, τc. The variance of the OUP
has been set to σ²OU = 0.01. Results from numerical simulations of Eq. (2.9) (dots) are compared
to (semi)analytical results (solid lines) from the approach outlined in Section 2.3. The infinite
sum is cut off after N = 10000 terms. Other parameters: µ = 1.0 and vT − vR = 1.0, leading
to r0 = 1.0.
Different τc lead to different cut-off frequencies, which can be observed in the low-frequency
range of the power spectrum, where the system preserves the noise spectral structure. Notably,
increasing τc leads to less regular spike trains only for correlation times smaller than the mean
interspike interval of the system (here I = 1/r0 = 1). This becomes evident when we look at
the dependence of the quality factor Q on the correlation time in Figure 2.4b. This “saturation”
in the quality of the oscillations can also be noticed in the number of peaks present in the
power spectrum: whereas large noise variance left only one peak in the power spectrum, for long
correlation times the power spectrum seems to “saturate” and several peaks are still observed.
[Figure: panels (a) Q versus σ²OU and (b) Q versus τc, both on logarithmic axes.]
Figure 2.4: Dependence of the quality factor Q of the PIF neuron's output on (a) the variance
σ²OU and (b) the correlation time τc of a driving Ornstein-Uhlenbeck noise. The quality factor has
been extracted from the (semi)analytical results from Section 2.3. While Q seems to decrease as
a power law as the noise variance is increased, for the correlation time a saturation is observed at
timescales comparable to the mean interspike interval. Parameters: (a) τc = 10; (b) σ²OU = 0.01;
remaining: µ = 1.0 and vT − vR = 1.0, leading to r0 = 1.0.

2.4.2. PIF neuron driven by harmonic noise
The second type of noise used to drive the PIF neuron is the so-called harmonic noise, which
has already been discussed above. This setup corresponds to the interesting case of a non-linear
stochastic oscillator which generates narrow-band noise (in the sense that the power spectrum
of the output contains peaks) but which is also driven by narrow-band noise.
The combined system is described by the following set of stochastic differential equations:

v̇(t) = µ + x(t),   if v(ti) = vT : v → vR,
ẋ(t) = y(t),
ẏ(t) = −γ y(t) − ω0² x(t) + √(2D) ξ(t),   (2.10)

where ξ(t) is the usual zero-mean white Gaussian noise and the output of the system is a
collection of spike times {ti}. A sample realization of the process is shown in Figure 2.1b.
The interplay between the two intrinsic frequencies present in the system, namely the mean
firing rate of the PIF neuron, r0 = µ/(vT − vR), and the frequency of the damped oscillation of
the harmonic noise, Ω = √(ω0² − (γ/2)²), leads to interesting non-linear effects in the power
spectrum. In the simulations the relation between r0 and Ω has been parametrized by means of
their frequency ratio

w = Ω / (2π r0).   (2.11)

Apart from w, other parameters relevant to the simulations are the variance of the harmonic
noise, σ²HN = D/(ω0² γ), the quality factor of the noise, Q = Ω/γ, and r0 = 1. We also provide
here for completeness the relations that allow us to determine ω0, γ and D (intensity of the
white Gaussian driving) given those parameters:

γ = 2π r0 w / Q,   ω0² = (2π r0 w)² [1 + 1/(4Q²)].

From here,

D = σ²HN ω0² γ.
Effect of the input quality factor Q   Because the PIF driven by harmonic noise is an instance
of a non-linear oscillator driven in turn by stochastic oscillations, we are particularly interested
in studying the effect of the coherence of the input noise, characterised by Q, on the power
spectrum and the autocorrelation function of the output spike train. We therefore fix w = 0.4
and σ²HN = 0.01, a reasonably small value for which the analytical expressions are still expected
to reproduce accurately the autocorrelation function, as they do in the case of the PIF neuron
driven by an OUP. Furthermore, in order to observe complex non-linear effects in the power
spectrum, w should not be a simple ratio such as w = 1/2 (Bauermeister et al., 2013).
The results are summarised in Figure 2.5. The striking feature of the power spectrum as
compared to the OUP driving is that peaks do not occur solely at the firing rate r0 and its higher
harmonics: the frequency of the harmonic noise, Ω/(2π), is also present, together with sidebands at
r0 ± Ω/(2π) and some of their harmonics. Increasing the quality factor of the input reveals more
peaks in the power spectrum and reduces the width of the existing ones, indicating an enhanced
coherence of the output.
Here the analytical expressions also reproduce quite accurately the results from numerical
simulations.
[Figure: panels (a) C(τ) versus τ and (b) S(f) versus f, curves for Q = 1, 20, 50.]
Figure 2.5: Autocorrelation function (a) and power spectrum (b) of a PIF neuron driven by
harmonic noise for different values of the input quality factor Q. The variance of the harmonic
noise and the frequency ratio have been set to σ²HN = 0.01 and w = 0.4, respectively. Results
from numerical simulations of Eq. (2.10) (dots) are compared to (semi)analytical results (solid
lines) from the approach outlined in Section 2.3. The infinite sum is cut off after N = 10000
terms. Other parameters: µ = 1.0 and vT − vR = 1.0, leading to r0 = 1.0.
Comparison with an experimental model   Remarkably, the PIF neuron driven by harmonic
noise seems to suit very well a specific experimental model: the peripheral electroreceptors
of paddlefish (Wilkens et al., 1997). In these electroreceptors, a population of epithelial cells
collectively generates spontaneous stochastic oscillations at around fe = 25 Hz, which in turn
drive a pacemaker-like oscillator in the peripheral terminals of afferent sensory neurons at
approximately twice that frequency (denoted by fa).
The power spectra we presented in Figure 2.5b (an annotated version is shown in Figure 2.6b)
already seem to capture the main features of the experimental power spectrum of the afferent
oscillations shown in Figure 2.6a, including the sidepeaks due to the non-linear interaction of
the two fundamental frequencies, fa and fe. Indeed, a similar theoretical model was used in
(Bauermeister et al., 2013), where the power spectrum was obtained by numerical simulations
of a PIF neuron driven by harmonic noise together with some Ornstein-Uhlenbeck noise and
successfully compared to experimental data (Figure 2.6a). This is an example of how a
relatively simple model can account for realistic features of neural sensory systems.
[Figure: panel (a) experimental spectrum; panel (b) S(f) versus f, curves for Q = 1, 20, 50.]
Figure 2.6: Comparison of the power spectrum of the oscillations in the afferent terminals
of peripheral electroreceptors of paddlefish (a) with the power spectra for a PIF driven by
harmonic noise (b). (a) A power spectrum of a representative paddlefish electroreceptor afferent
measured experimentally (grey dots) is compared with a power spectrum obtained by numerical
simulations (magenta line) of a PIF neuron driven by a combination of Ornstein-Uhlenbeck and
harmonic noise. Adapted from (Bauermeister et al., 2013). (b) Power spectrum of a PIF neuron
driven by harmonic noise (see Section 2.4.2) for different Q. Note the peaks at the firing rate
of the PIF neuron, r0, and the driving frequency of the noise, Ω/(2π), as well as the sidebands
generated by their non-linear interaction.
3. Noise-induced oscillations in a heteroclinic system
The second system discussed in this thesis is a paradigmatic example of a system that only
displays oscillations in the presence of noise. We use the term noisy heteroclinic oscillator
to refer to such a system from now on. Its dynamics is governed by the pair of Langevin
equations (Thomas and Lindner, 2014)
ẏ1 = cos(y1) sin(y2) + α sin(2y1) + √(2D) ξ1(t),
ẏ2 = −sin(y1) cos(y2) + α sin(2y2) + √(2D) ξ2(t),   (3.1)
together with reflecting boundary conditions on the domain −π/2 ≤ y1, y2 ≤ π/2. The
processes √(2D) ξ1,2(t) are independent white Gaussian noise sources of intensity D satisfying
⟨ξi(t) ξj(t′)⟩ = δ(t − t′) δij. The parameter α determines the stability of the heteroclinic cycle
of the deterministic dynamics. As an illustration of the dynamics of the system, Figure 1.8
shows a sample trajectory for weak-noise driving, in which pronounced oscillations appear
in the form of irregular clockwise rotations in the (y1, y2) plane.
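A minimal stochastic simulation of Eq. (3.1) can be sketched as follows (Euler-Maruyama with the reflecting boundaries implemented by folding any overshoot back into the domain; step size, horizon and initial condition are illustrative choices of our own):

```python
import numpy as np

def simulate_heteroclinic(alpha=0.1, D=0.01, dt=1e-3, T=200.0, seed=2):
    """Euler-Maruyama sketch of Eq. (3.1) with reflecting boundaries at +/- pi/2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    s = np.sqrt(2.0 * D * dt)
    y1, y2 = 0.5, 0.0                       # arbitrary initial condition
    traj = np.empty((n, 2))
    for i in range(n):
        f1 = np.cos(y1) * np.sin(y2) + alpha * np.sin(2.0 * y1)
        f2 = -np.sin(y1) * np.cos(y2) + alpha * np.sin(2.0 * y2)
        y1 += f1 * dt + s * rng.standard_normal()
        y2 += f2 * dt + s * rng.standard_normal()
        if y1 > np.pi / 2:  y1 = np.pi - y1       # reflect at boundaries
        if y1 < -np.pi / 2: y1 = -np.pi - y1
        if y2 > np.pi / 2:  y2 = np.pi - y2
        if y2 < -np.pi / 2: y2 = -np.pi - y2
        traj[i] = (y1, y2)
    return traj

traj = simulate_heteroclinic()
```

For weak noise the trajectory stays inside the square and rotates irregularly, so y1 changes sign repeatedly over a long run.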
Details on the dynamics of the system have already been discussed above, in particular how
the stochastic case differs from the deterministic one (D = 0) due to the presence of a noisy
finite-period limit cycle when α ∈ (0, 1/2), for which the underlying heteroclinic cycle is stable.
The period of oscillation is related to the intensity of the driving white Gaussian noise, so that
the smaller the noise, the longer the period of the limit cycle (Shaw et al., 2012). For larger
values of the noise, however, it is expected that the oscillatory (though noisy) nature of the
individual realizations is destroyed, leading to a loss of the coherence of the oscillations and the
broadening of the peak in the power spectrum.
This noisy heteroclinic oscillator was motivated in Section 1.3.5 as one possible mechanism
to obtain stochastic oscillations. On top of that, it has some other appealing features, related
to the fact that the deterministic dynamics contains (for a certain parameter range) a stable
heteroclinic cycle. It turns out that systems containing such cycles are appropriate playgrounds
to study the role of saddle points in controlling the timing of rhythmic behaviour (Shaw et al.,
2012). This might be very useful to model rhythms in biological systems, which often show
the robustness to perturbations characteristic of limit-cycle dynamics together with mechanisms
of behavioural or metabolic control. Such mechanisms might lead to extended dwell times in
localized regions of phase space, which are typical of trajectories passing close to heteroclinic
trajectories connecting different saddle points. Due to these features, stable heteroclinic cycles
have been used to model several phenomena: e.g. olfactory processing in insects (Rabinovich
et al., 2008) and motor control in a marine mollusk (Varona et al., 2002) (a more comprehensive
list can be found in (Shaw et al., 2012)). Although these aspects are not studied in this thesis, we
hope they provide some insight into the versatility of this class of models in describing rhythmic
behaviour.
At the end of this section we derive an approximation for the small noise limit of the noisy
heteroclinic oscillator. The study of the effect of small additive noise on systems possessing
structurally stable heteroclinic cycles was originally motivated by turbulent-layer models (Busse
and Heikes, 1980; Stone and Holmes, 1989), where the addition of noise is responsible for physical
phenomena such as intermittency and bursting. These works identified the fundamental role of
small random perturbations in the neighbourhood of the hyperbolic fixed points of the system,
which was further studied in e.g. (Kifer, 1981; Stone and Holmes, 1990; Stone and Armbruster,
1999). More general and rigorous results on the small-noise limit of noisy heteroclinic networks
are also available (Armbruster et al., 2003; Bakhtin, 2010a; Bakhtin, 2010b).
In this section we derive a method to obtain the power spectrum of sin(y1) for the noisy
heteroclinic oscillator using the Fokker-Planck formalism. The resulting equations are then
solved by two different numerical and semianalytical matrix methods, which are thoroughly
discussed and whose performance is compared in terms of accuracy and efficiency. Results for
the steady-state probability density, the steady-state probability current and the power spectrum
are obtained, and the dependence of some spectral measures on the noise intensity is studied.
Finally, we develop an approximation for the power spectrum in the small noise limit.
3.1. General considerations
In this section we briefly summarize the Fokker-Planck formalism and discuss some particularities
we encounter when applying it to our model. Moreover, we recall a useful result on the small noise
limit of systems containing heteroclinic cycles as presented in (Stone and Holmes, 1990). These
are the building blocks for the results derived in the rest of the chapter, namely the calculation
of the power spectrum for sin(y1) using matrix methods for the Fokker-Planck equation and the
dichotomous approximation of the power spectrum in the small noise limit.
Fokker-Planck formalism   Our analysis relies heavily on the Fokker-Planck formalism (Risken,
1984), which is justified because the driving noise in the Langevin equations Eq. (3.1) is white
and Gaussian. The transition probability density P(y, t + τ | y′, t) for the stochastic process y(t)
then satisfies the time-dependent Fokker-Planck equation

∂τ P(y, t + τ | y′, t) = LFP(y) P(y, t + τ | y′, t),   (3.2)

with initial condition P(y, t | y′, t) = δ(y − y′). The Fokker-Planck operator LFP is determined
from the Langevin equations Eq. (3.1) and in this particular case reads

LFP = ∂y1 (−cos(y1) sin(y2) − α sin(2y1) + D ∂y1)
    + ∂y2 (sin(y1) cos(y2) − α sin(2y2) + D ∂y2).   (3.3)
Because the Fokker-Planck operator LFP(y) does not depend on time, we can express P(y, t +
τ | y′, t) as (Risken, 1984)

P(y, t + τ | y′, t) = P0(y) + Σ_{n≥1} ϕn(y) φ*n(y′) e^{−λn τ},   (3.4)

where ϕn(y) and φ*n(y) are, respectively, the eigenfunctions with eigenvalue λn ∈ C of LFP and
of its adjoint L†FP, i.e.

LFP ϕn(y) = −λn ϕn(y),   L†FP φ*n(y) = −λn φ*n(y).   (3.5)

The eigenfunctions ϕn(y) and φ*n(y) must satisfy appropriate boundary conditions. The eigen-
function associated with λ0 = 0 is the unique stationary distribution P0(y), such that limτ→∞ P(y, t +
τ | y′, t) = P0(y).
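To make the operator concrete, the sketch below discretizes LFP of Eq. (3.3) with central finite differences on a periodic grid over [−π, π]² (a crude stand-in, of our own making, for the matrix methods discussed later; grid size, α and D are illustrative). Probability conservation makes every column of the resulting matrix sum to zero, so the stationary eigenvalue λ0 = 0 appears in its spectrum:

```python
import numpy as np

# central-difference discretization of L_FP (Eq. (3.3)) on a periodic grid
N, alpha, D = 32, 0.1, 0.15
h = 2.0 * np.pi / N
y = -np.pi + h * np.arange(N)
Y1, Y2 = np.meshgrid(y, y, indexing="ij")
F1 = (np.cos(Y1) * np.sin(Y2) + alpha * np.sin(2.0 * Y1)).ravel()
F2 = (-np.sin(Y1) * np.cos(Y2) + alpha * np.sin(2.0 * Y2)).ravel()

idx = np.arange(N * N).reshape(N, N)
L = np.zeros((N * N, N * N))
for k in range(N * N):
    i, j = divmod(k, N)
    ip, im = idx[(i + 1) % N, j], idx[(i - 1) % N, j]
    jp, jm = idx[i, (j + 1) % N], idx[i, (j - 1) % N]
    # drift part: d/dy1(-F1 P) + d/dy2(-F2 P), central differences
    L[k, ip] -= F1[ip] / (2.0 * h); L[k, im] += F1[im] / (2.0 * h)
    L[k, jp] -= F2[jp] / (2.0 * h); L[k, jm] += F2[jm] / (2.0 * h)
    # diffusion part: D times the 5-point Laplacian
    L[k, ip] += D / h**2; L[k, im] += D / h**2
    L[k, jp] += D / h**2; L[k, jm] += D / h**2
    L[k, k] -= 4.0 * D / h**2

lam = np.linalg.eigvals(L)
lam0 = lam[np.argmin(np.abs(lam))]   # stationary eigenvalue, ~ 0
```

The eigenvector belonging to λ0 approximates the stationary density P0(y) on the grid.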
In Section 3.2 we present a method to determine the power spectrum of the observable
f(y1(t)) = sin(y1(t)) for the noisy heteroclinic oscillator. This method requires solving Fokker-
Planck-like equations. However, the Langevin equations Eq. (3.1) describe a non-potential
system, i.e. we cannot find a scalar potential U(y) such that they can be written as

ẏ = −∇U(y) + √(2D) ξ(t),

where ξ is a vector whose components are independent white Gaussian noise processes. Solving
non-potential systems analytically is not possible in general, and in the majority of cases
one must resort to semianalytical and numerical methods. That is indeed what we do in Sec-
tion 3.3, where the methods used take advantage of the fact that the system Eq. (3.1) with
reflecting boundary conditions on the domain Ω = [−π/2, π/2] × [−π/2, π/2] is equivalent to
a system governed by the same dynamics Eq. (3.1) but with periodic boundary conditions on
Ω′ = [−π, π] × [−π, π], provided an appropriate observable f(y1(t)) is chosen. Under these conditions
the eigenfunctions {ϕn(y), φ*n(y)} are 2π-periodic in y1 and y2 due to the invariance of LFP(y)
under the transformations (y1, y2) → (y1 + 2πk, y2) and (y1, y2) → (y1, y2 + 2πk), k ∈ Z, i.e.
independent 2π-translations in y1 and y2.
In particular, a suitable observable f(y1) for which the previous statement holds is f(y1) =
sin(y1): on the one hand, this quantity takes the same values on Ω as on Ω′, i.e. the whole
image sin(y1) ∈ [−1, 1]; on the other hand, sin(y1) preserves the reflection symmetry of the drift
field about the lines y1 = ±π/2. The combination of these two facts guarantees that trajectories
of y1 reflected at the boundaries of Ω and trajectories of y1 periodic on Ω′ lead to identical
trajectories of sin(y1) and, hence, to the same statistics. It is in this sense that the two systems
are equivalent. Of course, all the previous arguments would also apply to observables which
are functions of y2(t). Sample trajectories of y1(t), y2(t) and sin(y1(t)) are shown in Figure 3.1.
There sin(y1(t)) appears as a “smoothed out” version of y1(t), because its non-linearity reduces
the amplitude of the fluctuations occurring close to y1 = ±π/2, i.e. in the neighbourhood of the
saddle points, while leaving the transit regions practically unaffected. We also observe that the
frequency of oscillation of sin(y1(t)) is the same as y1(t), which allows us to study the dependence
on noise of the spectral measures of the underlying process y1(t) through those of sin(y1(t)).
Last but not least, the methods we introduce later lead to a particularly simple form for the
power spectrum of sin(y1).
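The equivalence between the reflecting and the periodic formulations is easy to probe numerically. The following is a minimal Euler–Maruyama sketch of the dynamics on the periodic domain Ω′; since Eq. (3.1) is not repeated in this section, the drift field below is an assumption written in the form introduced by Thomas and Lindner (2014), chosen so that the linearization at the saddles reproduces the eigenvalues λu = 1 − 2α and λs = 1 + 2α used later.

```python
import numpy as np

def simulate_heteroclinic(D=0.01, alpha=0.1, dt=1e-3, T=60.0, seed=1):
    """Euler-Maruyama integration on the torus [-pi, pi]^2, i.e. the
    periodic-boundary version of the dynamics (domain Omega')."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    y = np.empty((n, 2))
    y[0] = (0.0, 1.0)                    # arbitrary initial condition
    amp = np.sqrt(2.0 * D * dt)          # white-noise increment scale
    for k in range(n - 1):
        y1, y2 = y[k]
        # assumed drift field (Thomas and Lindner, 2014)
        f1 = np.cos(y1) * np.sin(y2) + alpha * np.sin(2.0 * y1)
        f2 = -np.sin(y1) * np.cos(y2) + alpha * np.sin(2.0 * y2)
        step = y[k] + dt * np.array([f1, f2]) + amp * rng.standard_normal(2)
        y[k + 1] = (step + np.pi) % (2.0 * np.pi) - np.pi   # periodic wrap
    return y

y = simulate_heteroclinic()
x = np.sin(y[:, 0])    # the observable f(y1) = sin(y1)
```

The trace x(t) produced this way shows the plateaus near ±1 (dwell phases at the saddles) separated by fast transits, as in Figure 3.1.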
Small noise limit The key feature of the system is that the presence of even small amounts of
additive temporally uncorrelated noise introduces a well-defined timescale, which does not exist
in the deterministic case. This timescale emerges because the magnitude of the random and
the deterministic components is comparable in the neighbourhood of the saddle points, hence
the diffusive action of the noise prevents the trajectories from getting “stuck” there for increasingly
long periods of time. Nevertheless, the small perturbation does not significantly modify the
structure of the vector field during the “jump” events, i.e. drifts closely following the heteroclinic
connection. How the time spent in the neighbourhood of the saddle points (dwell time) depends
on the noise intensity is therefore fundamental to characterise the spectral properties of the
system. A theoretical analysis along these lines is performed in (Stone and Holmes, 1990),
where the authors study these dwell times as first passage times in a finite neighbourhood of a
saddle point embedded in a heteroclinic cycle. Linearizing the system at the saddle point and
applying the Fokker-Planck formalism they are able to derive a relation between the mean first
3 NOISE-INDUCED OSCILLATIONS IN A HETEROCLINIC SYSTEM
[Figure: time traces over t ∈ [0, 140]; vertical axis y1(t), sin(y1)(t), y2(t) ∈ [−2, 2].]
Figure 3.1: Sample traces of the system’s variables y1(t) (black lines) and y2(t) (red lines) and
of the observable sin(y1) (blue dots). Parameters: D = 0.01, α = 0.1.
passage time τ_sp and the intensity of the noise in the small-noise limit D ≪ δ²,

    τ_sp ≈ (1/λ_u) ln(δ/√(2D)) + O(1),    (3.6)
where λ_u is the unstable eigenvalue of the linearization and δ quantifies the size of the neighbourhood studied. Such a logarithmic dependence has been reported several times in the
literature, e.g. (Bakhtin, 2010b; Kifer, 1981). The unstable and stable eigenvalues λu, λs are
obtained by linearizing the deterministic velocity vector field at one of the saddle points, e.g.
y0 = (−π/2, +π/2), which leads to
    u̇1 = (1 − 2α)u1 ≡ λ_u u1,
    u̇2 = −(1 + 2α)u2 ≡ −λ_s u2,

where u = (u1, u2) = y − y0. The unstable eigenvalue is λ_u = (1 − 2α), whereas the absolute
value of the stable eigenvalue is λ_s = (1 + 2α), and it turns out that this is valid also for the
other saddle points. The fact that λ_s > λ_u for all α, leading to a stable heteroclinic cycle, implies
that the distribution of trajectories leaving the neighbourhood of a saddle point (and therefore
entering the next saddle) is centered on the heteroclinic connection (Bakhtin, 2010b).
If we divide the phase plane of our system into four square regions, each one associated to
a saddle point, we can use Eq. (3.6) to estimate the corresponding mean dwell times τc,i, i ∈
{1, . . . , 4}. The mean period of oscillation T will then be the sum T = Σ_i τ_c,i = 4τ_sp, where
the last equality follows from symmetry arguments. In Section 3.5 we combine the results from
(Stone and Holmes, 1990) with the theory of two-state processes to obtain an approximation of
the power spectrum in the small noise limit.
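As a quick numerical illustration (not part of the derivation above), the Stone–Holmes estimate Eq. (3.6) combined with T = 4τ_sp can be evaluated directly. The choice of the neighbourhood size δ below is an arbitrary assumption of the sketch, here taken as the half-width π/2 of one of the four square regions:

```python
import numpy as np

def mean_period(D, alpha=0.1, delta=np.pi / 2):
    """Small-noise estimate of the mean oscillation period T = 4*tau_sp
    from the Stone-Holmes mean first-passage time, Eq. (3.6).
    delta is a choice of the sketch, not fixed by the theory."""
    lam_u = 1.0 - 2.0 * alpha
    tau_sp = np.log(delta / np.sqrt(2.0 * D)) / lam_u
    return 4.0 * tau_sp

# the period grows only logarithmically as the noise gets weaker
for D in (1e-2, 1e-3, 1e-4):
    print(D, mean_period(D))
```

This logarithmic growth of T with decreasing D is the fingerprint of the heteroclinic mechanism and reappears in the spectral peak position in Section 3.5.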
3.2. Approach to the power spectrum
Here an approach to calculate the power spectrum for the stochastic process sin(y1,2(t)) is
presented. It shares some features with that used in (Risken and Vollmer, 1982) to derive the
susceptibility for the Brownian motion in a cosine potential.
Let us start with some generic manipulations concerning a (non-linear) transformation x = f(yi),
where yi is one of the components of the two-dimensional stationary stochastic process y(t) with
stationary probability distribution P0(y1, y2) ≡ P0(y). In our setting the evolution of y(t) is
described by Eq. (3.1), whereas f(yi) = sin(yi). To lighten the notation we focus on one of the
two variables, i.e. yi ≡ y1; the calculations for yi ≡ y2 would be carried out in exactly the same
fashion.
The autocorrelation function C_{f(y1)f(y1)}(τ) ≡ C_xx(τ) = ⟨x(t)x(t + τ)⟩ − ⟨x(t)⟩⟨x(t + τ)⟩ can
be expressed as

    C_xx(τ) = ∫dy1 ∫dy1′ f(y1′)f(y1) P(y1′, t; y1, t + τ) − [ ∫dy1 f(y1) P(y1, t) ]²    (3.7)
            = ∫dy1 ∫dy1′ f(y1′)f(y1) [ P(y1′, t; y1, t + τ) − P(y1′, t)P(y1, t + τ) ].    (3.8)
Here P(y1, t) and P(y1′, t; y1, t + τ) are, respectively, one- and two-time marginal distributions
of y1(t) from the joint process y ≡ {y1(t), y2(t)}, i.e.

    P(y1, t) = P(y1, t + τ) = P0(y1) = ∫dy2 P0(y)

and

    P(y1′, t; y1, t + τ) = ∫dy2 ∫dy2′ P(y′, t; y, t + τ).    (3.9)

Moreover, P(y′, t; y, t + τ) can be related to the so-called transition probability density P(y, t +
τ|y′, t) by

    P(y′, t; y, t + τ) = P(y, t + τ|y′, t) P0(y′).    (3.10)
Note that due to stationarity the dependence on the absolute time t is redundant in all the
previous expressions, hence from now on we set t = 0 and indicate only the time delay τ when
required. Substituting Eq. (3.9) and Eq. (3.10) into Eq. (3.7) we obtain

    C_xx(τ) = ∫dy1 dy1′ dy2 dy2′ f(y1)f(y1′) P0(y′) [ P(y|y′; τ) − P0(y) ].    (3.11)
According to the Wiener-Khinchin theorem Eq. (1.2), in order to calculate the power spectrum
we have to Fourier transform Eq. (3.11), i.e.

    S_xx(ω) = ∫_{−∞}^{+∞} dτ e^{iωτ} C_xx(τ) = 2 Re[ ∫_0^{+∞} dτ e^{iωτ} C_xx(τ) ],    (3.12)

where Re[·] denotes the real part of the argument and the last equality follows from the fact
that C_xx(τ) is a real and even function. The power spectrum has thus been rewritten in terms
of a one-sided Fourier transform of the autocorrelation function.
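The one-sided-transform identity in Eq. (3.12) can be sanity-checked (independently of the heteroclinic model) on a correlation function with a known spectrum, e.g. the exponential C(τ) = e^{−|τ|/τc}, whose power spectrum is the Lorentzian 2τc/(1 + ω²τc²):

```python
import numpy as np

tau_c = 2.0
tau = np.linspace(0.0, 60.0, 200001)   # one-sided grid; C(60) ~ e^-30 is negligible
C = np.exp(-tau / tau_c)
dtau = tau[1] - tau[0]

def S_numeric(omega):
    """S(omega) = 2 Re of the one-sided Fourier integral of C, trapezoidal rule."""
    g = np.exp(1j * omega * tau) * C
    return 2.0 * np.real(dtau * (np.sum(g) - 0.5 * (g[0] + g[-1])))

def S_exact(omega):
    # Lorentzian spectrum of an exponential correlation function
    return 2.0 * tau_c / (1.0 + (omega * tau_c) ** 2)

for w in (0.0, 0.5, 1.0, 3.0):
    print(w, S_numeric(w), S_exact(w))
```

The same recipe, applied to the sharply peaked correlation functions of the heteroclinic oscillator, is what the semianalytical machinery below computes without ever simulating trajectories.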
Now we substitute Eq. (3.11) into Eq. (3.12), which leads to

    S_xx(ω) = 2 Re{ ∫d²y d²y′ f(y1)f(y1′) P0(y′) ∫_0^{+∞} dτ e^{iωτ} [ P(y|y′; τ) − P0(y) ] }
            = 2 Re{ ∫d²y d²y′ f(y1)f(y1′) P0(y′) B̃(y|y′; ω) },    (3.13)

where d²y ≡ dy1dy2 and B̃(y|y′; ω) ≡ ∫_0^{+∞} dτ e^{iωτ} [P(y|y′; τ) − P0(y)]. The second term, pro-
portional to P0(y), removes the δ-singularity in P̃(y|y′; ω) ≡ ∫_0^{∞} dτ e^{iωτ} P(y|y′; τ) that occurs
at ω = 0 when ⟨f(y1)⟩ ≠ 0. What Eq. (3.13) tells us is that if we can determine B̃(y|y′; ω)
and P0(y) by any means, the power spectrum will be just a couple of integrations away. The
immediate step is therefore to find out which differential equations these functions satisfy.
On the one hand, the transition probability density P(y|y′; τ) is known to satisfy the time-
dependent Fokker-Planck equation Eq. (3.2) with initial condition P(y|y′; τ = 0) = δ(y − y′). On
the other hand, the stationary probability density P0(y) satisfies the time-independent Fokker-
Planck equation,

    ∂τ P0(y) = 0 = LFP(y) P0(y).    (3.14)
To get an equation for B̃(y|y′; ω) we subtract Eq. (3.14) from Eq. (3.2) and perform a one-sided
Fourier transform with respect to τ on the result, so that the LHS reads

    ∫_0^{∞} dτ e^{iωτ} ∂τ [ P(y|y′; τ) − P0(y) ]
        = [ e^{iωτ} ( P(y|y′; τ) − P0(y) ) ]_0^{∞} − iω ∫_0^{+∞} dτ e^{iωτ} [ P(y|y′; τ) − P0(y) ]
        = 0 − [ P(y|y′; τ = 0) − P0(y) ] − iω B̃(y|y′; ω)
        = −[ δ(y − y′) − P0(y) ] − iω B̃(y|y′; ω),

where we have used the corresponding initial condition P(y|y′; τ = 0) = δ(y − y′) and neglected3
the undetermined term e^{iω·∞} [P(y|y′; τ → ∞) − P0(y)].
On the other hand, on the RHS we use that LFP(y) does not depend on τ, so that

    ∫_0^{∞} dτ e^{iωτ} LFP(y) [ P(y|y′; τ) − P0(y) ]
        = LFP(y) ∫_0^{∞} dτ e^{iωτ} [ P(y|y′; τ) − P0(y) ] = LFP(y) B̃(y|y′; ω).

This leads to the differential equation for B̃(y|y′; ω),

    (LFP(y) + iωI) B̃(y|y′; ω) = −[ δ(y − y′) − P0(y) ],    (3.15)
where I is the identity operator. Thus, if P0(y) is known (e.g. from solving Eq. (3.14)),
B̃(y|y′; ω) can in turn be obtained as the solution to Eq. (3.15) and we have all the ingredients
to get the power spectrum S_xx(ω) through Eq. (3.13).
3
This step can be put on more formal grounds by considering a one-sided Laplace transform as a function
of a complex parameter s = σ + iω, σ, ω ∈ R, and taking the appropriate limit to recover the Fourier transform.
While the previously outlined approach is completely valid, it turns out to be more con-
venient to seek an equation for

    H̃(y; ω) = ∫d²y′ f(y1′) P0(y′) ∫_0^{∞} dτ e^{iωτ} [ P(y|y′; τ) − P0(y) ] = ∫d²y′ f(y1′) P0(y′) B̃(y|y′; ω),
    (3.16)
in terms of which the power spectrum Eq. (3.13) can be expressed as

    S_xx(ω) = 2 Re ∫d²y f(y1) H̃(y; ω),    (3.17)

where x = f(y1) as defined above. It is now easy to obtain an equation for H̃(y; ω) by multiplying
Eq. (3.15) by f(y1′)P0(y′) and integrating over y′, i.e.

    ∫d²y′ f(y1′) P0(y′) (LFP(y) + iωI) B̃(y|y′; ω) = −∫d²y′ f(y1′) P0(y′) [ δ(y − y′) − P0(y) ],

which leads to

    (LFP(y) + iωI) H̃(y; ω) = −P0(y) [ f(y1) − ⟨f(y1)⟩ ],    (3.18)

where we have used Eq. (3.16) and the sifting property of the two-dimensional Dirac delta
distribution δ(y − y′) ≡ δ(y1 − y1′)δ(y2 − y2′). Note that if P0(y) is already normalized, no
additional normalization condition on ˜H(y; ω) is necessary.
Let us briefly recap what we have achieved with the previous manipulations: the power
spectrum Sxx(ω) of a (non-linear) transformation y1 → x = f(y1) of one component of the
two-dimensional stationary process y(t) has been expressed in Eq. (3.17) in terms of a function
˜H(y; ω) which satisfies the inhomogeneous partial differential equation (PDE) Eq. (3.18). The
inhomogeneity is essentially the steady-state probability density P0(y), which can be obtained
from the stationary Fokker-Planck equation Eq. (3.14).
Let us also note that this formulation of the problem in terms of a PDE for an auxiliary
function H̃(y; ω) instead of the full P̃(y|y′; ω) has been successfully used in (Gleeson and O’Doherty,
2006) to derive multiple numerical and asymptotic approximations of correlation functions and
spectra, even though the system tackled there is completely different.
Hereinafter we focus on the model we are interested in: the noisy heteroclinic oscillator
described by Eq. (3.1). In the next subsection two different matrix methods are used to determine
H̃(y; ω) from Eq. (3.18) for this particular model.
3.3. Solving the equations by matrix methods
3.3.1. Expansion into a complete set of functions
The solution to the two-dimensional, second-order partial differential equations (PDEs)

    LFP(y) P0(y) = 0,    (3.14 revisited)
    (LFP(y) + iωI) H̃(y; ω) = −P0(y) [ f(y1) − ⟨f(y1)⟩ ],    (3.18 revisited)
for the noisy heteroclinic oscillator may be found by expanding P0(y) and ˜H(y; ω) into a com-
plete set of functions chosen appropriately according to the boundary conditions.
As explained in Section 3.1, the system whose dynamics is described by Eq. (3.1) with reflect-
ing boundary conditions on the domain Ω = [−π/2, π/2] × [−π/2, π/2] is equivalent, for certain
observables (here we use f(y1) = sin(y1)), to a system with dynamics described by the same
equations but periodic boundary conditions on Ω′ = [−π, π] × [−π, π] instead. This suggests an
expansion into a basis of complex exponentials with periods Li = 2π, i = 1, 2, in both variables
y1 and y2, i.e. the set {e^{i(k1my1+k2ly2)}}, m, l ∈ Z, with fundamental modes ki = 2π/Li = 1. The
expansion therefore reads

    P0(y) = Σ_{m=−∞}^{∞} Σ_{l=−∞}^{∞} c_{m,l} e^{i(k1my1+k2ly2)} = Σ_{m=−∞}^{∞} Σ_{l=−∞}^{∞} c_{m,l} e^{i(my1+ly2)}.    (3.19)
Analogously, H̃(y; ω) can be written as

    H̃(y; ω) = Σ_{m=−∞}^{∞} Σ_{l=−∞}^{∞} H̃_{m,l}(ω) e^{i(my1+ly2)}.    (3.20)
By inserting the previous expansions into the corresponding differential equations, one obtains
systems of linear equations to which standard numerical matrix methods can be applied, as
described below (see Sections 3.3.2 and 3.3.3). We start by plugging the stationary density
P0(y) into Eq. (3.14) and then proceed similarly for H̃(y; ω) in Eq. (3.18).
Determination of the coefficients for the stationary solution P0(y)  If we insert Eq. (3.19)
into Eq. (3.14), we obtain

    0 = −(1/4) Σ_{m,l} [ (m − l)c_{m−1,l−1} + (m + l)c_{m+1,l−1} − (m − l)c_{m+1,l+1}
        − (m + l)c_{m−1,l+1} + 2αm c_{m−2,l} + 2αl c_{m,l−2}
        − 2αm c_{m+2,l} − 2αl c_{m,l+2} + 4D(m² + l²) c_{m,l} ] e^{imy1} e^{ily2},    (3.21)
where we have used the definition of LFP(y) given by Eq. (3.3), sin(x) = (e^{ix} − e^{−ix})/(2i) and
cos(x) = (e^{ix} + e^{−ix})/2. Redefining the summation indices is also necessary for some terms.
Since the previous relation is valid ∀y1, y2, the expression within square brackets must vanish
and we have
    0 = −(1/4) [ (m − l)c_{m−1,l−1} + (m + l)c_{m+1,l−1} − (m − l)c_{m+1,l+1}
        − (m + l)c_{m−1,l+1} + 2αm c_{m−2,l} + 2αl c_{m,l−2}
        − 2αm c_{m+2,l} − 2αl c_{m,l+2} + 4D(m² + l²) c_{m,l} ],  ∀m, l ∈ Z.    (3.22)
This equation defines an infinite, homogeneous system of linear equations.
The system contains additional symmetries which impose restrictions on the set of coefficients
{c_{m,l}}. In particular, reflections through the origin (y1, y2) → (−y1, −y2) leave LFP invariant,
i.e. LFP(−y) = LFP(y), as can be readily checked. The eigenfunctions ϕn(y) of LFP therefore
have a definite parity, i.e. they must be even, ϕn(−y) = ϕn(y), or odd, ϕn(−y) = −ϕn(y).
From the positivity of the steady-state distribution P0(y) ≡ ϕ0(y) ≥ 0, we conclude
P0(−y) = P0(y). (3.23)
The consequences of this relation on the coefficients of the expansion are determined by plugging
Eq. (3.19) into Eq. (3.23),
    0 = P0(y) − P0(−y) = Σ_{m,l} c_{m,l} e^{imy1} e^{ily2} − Σ_{m,l} c_{m,l} e^{−imy1} e^{−ily2}
      = Σ_{m,l} [ c_{m,l} − c_{−m,−l} ] e^{imy1} e^{ily2},    (3.24)
where to obtain the second line we have transformed the indices (m, l) → (−m, −l) in the second
term. Since Eq. (3.24) must be valid ∀y1, y2, it follows that c_{−m,−l} = c_{m,l}. Moreover, P0(y)
takes values in R because it is a probability density function, so that the coefficients also satisfy
c_{−m,−l} = c*_{m,l}. Putting the two conditions together, we have

    c_{m,l} = c_{−m,−l} ∈ R.
Furthermore, the translations (y1, y2) → (y1 + 2πk, y2), (y1, y2) → (y1, y2 + 2πk) and (y1, y2) →
(y1 + kπ, y2 ± kπ), k ∈ Z, also leave LFP(y) invariant. Because we are only interested in peri-
odic solutions, Bloch’s theorem guarantees that the eigenfunctions ϕn(y) of LFP must also be
symmetric under these translations. For P0(y) ≡ ϕ0(y) this means (apart from the trivial
2π-periodicity in y1 and y2 already imposed by the boundary conditions)

    P0(y1 + kπ, y2 ± kπ) = P0(y1, y2),  k ∈ Z,

which leads to the relation

    0 = c_{m,l} [ 1 − e^{imπ} e^{ilπ} ] = c_{m,l} [ 1 − (−1)^{m+l} ].
Hence, c_{m,l} = 0 if m + l = 2k + 1, k ∈ Z, and only “even” coefficients (in the sense that m + l
adds up to an even number) survive.
The relevance of these symmetry relations satisfied by the coefficients cannot be overstated:
they provide a number of sanity checks on the output of our numerical routines, while in other
cases they are embedded in the implementation of the numerical method itself (see Section 3.3.3).
A summary of the symmetry properties of P0(y) and its expansion coefficients is provided in
Table 3.1.
Finally, we must discuss how to normalize P0(y) so that it has the properties of a probability
density function. First, let us note that the linear system Eq. (3.22) as it stands contains an
equation that depends linearly on all the others, namely that for m = l = 0, which reads 0 · c_{0,0} = 0.
Since the linear system is homogeneous, the set of solutions {c_{m,l}} is infinite. A particular
solution is specified by adding a normalization condition, which we restrict to the central region
Ω = [−π/2, π/2] × [−π/2, π/2] limited by the deterministic heteroclinic cycle, i.e.

    ∫_Ω dy P0(y) = 1,  Ω = [−π/2, π/2] × [−π/2, π/2].    (3.25)
Symmetry transformation                  | Condition on P0(y)                        | Condition on c_{m,l}
y → −y                                   | P0(−y) = P0(y)                            | c_{−m,−l} = c_{m,l}
(y1, y2) → (y1 + kπ, y2 ± kπ), k ∈ Z     | P0(y1 + kπ, y2 ± kπ) = P0(y1, y2)         | c_{m,l} = 0 if m + l = 2k + 1
P0(y) ∈ R                                | P0(y)* = P0(y)                            | c_{−m,−l} = c*_{m,l}

Table 3.1: Summary of the conditions imposed by symmetries (other than 2π-periodicity in y1
and y2) on the stationary probability density function P0(y) and its expansion coefficients c_{m,l}.
We need to relate such a condition to the coefficients of the expansion on the larger region
Ω′ = [−π, π] × [−π, π], which we can accomplish by substituting Eq. (3.19) into Eq. (3.25) and
using the orthogonality relations for the Fourier basis

    ∫_{−π}^{+π} dy e^{−iny} e^{imy} = 2π δ_{m,n},    (3.26)
so that we obtain

    ∫_{Ω′} dy P0(y) = ∫_{−π}^{+π} dy1 ∫_{−π}^{+π} dy2 P0(y) = (2π)² c_{0,0}.    (3.27)
Moreover, using P0(y1 + kπ, y2 + kπ) = P0(y1, y2), k ∈ Z, and the reflection symmetries with
respect to the axes determined by y1 = ±π/2 and y2 = ±π/2 (not discussed here), it is possible
to show

    1 = ∫_Ω dy P0(y) = (1/4) ∫_{Ω′} dy P0(y).    (3.28)
From Eq. (3.27) and Eq. (3.28) it follows that

    c_{0,0} = 1/π².    (3.29)
Determination of the coefficients for the auxiliary function H̃(y; ω)  Here we insert the ex-
pansions of H̃(y; ω) and P0(y), Eq. (3.20) and Eq. (3.19) respectively, into Eq. (3.18) with
f(y1) = sin(y1). The main difference with the derivation for P0(y) is that the RHS no longer
vanishes but is instead a function of P0(y) and the observable sin(y1). Using the same arguments
as in the derivation for P0(y) above, we arrive at

    −(1/4) [ (m − l)H̃_{m−1,l−1} + (m + l)H̃_{m+1,l−1} − (m − l)H̃_{m+1,l+1} − (m + l)H̃_{m−1,l+1}
        + 2αm H̃_{m−2,l} + 2αl H̃_{m,l−2} − 2αm H̃_{m+2,l} − 2αl H̃_{m,l+2}
        + (4D(m² + l²) − 4iω) H̃_{m,l} ]
    = (1/(2i)) [ (2π)² c_{m,l}(c_{1,0} − c_{−1,0}) − (c_{m−1,l} − c_{m+1,l}) ],  ∀m, l ∈ Z,    (3.30)
where H̃_{m,l}(ω) (whose frequency dependence is omitted in the following for notational conve-
nience) and c_{m,l} are the coefficients of the expansions of H̃(y; ω) and P0(y), respectively. Note
that the LHS here is the same as the LHS of Eq. (3.21), except for the term +iω accompanying
H̃_{m,l}(ω), which is expected given that the operators on the LHS of Eq. (3.14) and Eq. (3.18)
differ only by a term of the form +iωI.
Equation (3.30) defines an infinite, inhomogeneous system of linear equations. As opposed
to the case of P0(y), no additional normalization condition4 is required here to fully specify
˜H(y; ω), as P0(y) is already normalized.
The symmetries of LFP(y) also impose certain conditions on H̃(y; ω). We briefly state here
what these conditions are and what consequences they have on the coefficients H̃_{m,l} of its
expansion. First of all, one can show that H̃(y; ω) = −H̃(−y; ω), which leads to H̃_{m,l} = −H̃_{−m,−l};
moreover, H̃(y1 + kπ, y2 + kπ; ω) = −H̃(y1, y2; ω), k ∈ Z, resulting in H̃_{m,l} = 0 if m + l = 2k, k ∈ Z.
On the other hand, H̃_{m,l} ∈ C in general because H̃(y; ω) ∈ C. These results follow from the
definition of H̃(y; ω), Eq. (3.16), and the symmetry properties of the transition density P(y|y′; τ),
which can be traced back to those of the eigenfunctions of LFP and L†_FP.
A summary of the symmetry properties of ˜H(y; ω) and its expansion coefficients is provided
in Table 3.2.
Symmetry transformation                  | Condition on H̃(y; ω)                        | Condition on H̃_{m,l}(ω)
y → −y                                   | H̃(y; ω) = −H̃(−y; ω)                        | H̃_{−m,−l} = −H̃_{m,l}
(y1, y2) → (y1 + kπ, y2 ± kπ), k ∈ Z     | H̃(y1 + kπ, y2 ± kπ; ω) = −H̃(y1, y2; ω)    | H̃_{m,l} = 0 if m + l = 2k

Table 3.2: Summary of the conditions imposed by symmetries (other than 2π-periodicity in
y1 and y2) on the auxiliary function H̃(y; ω) and its expansion coefficients H̃_{m,l}(ω).
Power spectrum of sin(y1) in terms of expansion coefficients  Let us finally look at the power
spectrum S_xx(ω) in terms of the coefficients of the expansion of H̃(y; ω). By plugging Eq. (3.20)
into Eq. (3.17) and using the orthogonality relations for the Fourier basis Eq. (3.26) we obtain
the relation

    S_xx(ω) = 2(2π) Re[ Σ_m H̃_{m,0} ∫dy1 f(y1) e^{imy1} ].

Choosing our observable to be f(y1) = sin(y1) = (e^{iy1} − e^{−iy1})/(2i) leads to

    S_xx(ω) = (2π)² Re[ −i (H̃_{−1,0} − H̃_{1,0}) ].

The previous equation can be further simplified using H̃_{m,l} = −H̃_{−m,−l}. Hence, our final
expression for the power spectrum reads

    S_xx(ω) = 2(2π)² Re[ i H̃_{1,0}(ω) ] = −8π² Im[ H̃_{1,0}(ω) ],    (3.31)
4
The case ω = 0 requires special treatment. In order to simplify the presentation, we consider that H̃(y; ω)
(and, consequently, S(ω)) is only evaluated at ω > 0.
where Im[·] denotes the imaginary part of the argument. Remarkably, after this involved pro-
cedure the power spectrum of sin(y1) has been expressed in terms of a single coefficient of the
expansion of ˜H(y; ω).
In order to evaluate Eq. (3.31), the linear system Eq. (3.30) must be solved at each ω. Two
different numerical methods to do so are described in the remainder of this subsection: the
first one we call the full matrix approach, whereas the second is the method of Matrix
Continued Fractions (MCF). While the method of MCF requires a fair amount of extra
analytical work, it is theoretically more efficient than the full matrix approach. Efficiency is
of crucial importance given that the linear system has to be solved multiple times, once at each ω.
Moreover, finer and finer discretizations of the frequency interval of interest are required
to resolve the sharp peaks in the power spectrum in the low-noise regime, where the oscillations
become more coherent. Thus, a major question of interest is how the two methods actually
compare in terms of accuracy and efficiency for our system and whether the extra analytical
work required by the method of MCF is worth the effort.
3.3.2. Solving the full linear system
The first numerical method used to solve the systems of linear equations Eq. (3.22) (for the
coefficients c_{m,l} of the expansion of P0(y)) and Eq. (3.30) (for the coefficients H̃_{m,l} of the
expansion of H̃(y; ω)) involves no extra work: first, a specific ordering of the elements of
the basis of complex exponentials {e^{imy1} e^{ily2}} is chosen; next, the coefficients c_{m,l} and H̃_{m,l}(ω)
are arranged into single column vectors c and h_ω, respectively, according to the chosen ordering;
finally, the corresponding matrices of coefficients, M and A_ω, are set up for each system. In
practice this also requires using a truncated set of complex exponentials, {e^{imy1} e^{ily2}}, −L ≤
m, l ≤ L, so that we end up with a finite system of linear equations.
Let us make the previous statements more precise. The ordering of the basis functions is
chosen such that c has the form

    c^(L) = ( c_{−L,−L}, · · · , c_{−L,L}, · · · , c_{0,−L}, · · · , c_{0,L}, · · · , c_{L,−L}, · · · , c_{L,L} )^T,

where T denotes transposition and we have made explicit the dependence of the size of c on L
by writing c^(L). An analogous form is valid for h_ω (from now on we use only c in the discussion,
with the understanding that everything related to the arrangement of the coefficients applies
as well to h_ω). The key point is to realize that each coefficient c_{m,l}, associated to a basis
function e^{imy1} e^{ily2}, now corresponds to an entry of the column vector c^(L), i.e.

    c^(L)_{m(2L+1)+l} = c_{m,l},  −L ≤ m, l ≤ L.
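In a 0-based array implementation this mapping needs the indices m and l shifted by +L. A small helper (our naming, not from the text) makes the bookkeeping explicit:

```python
def idx(m, l, L):
    """Flattened position of the coefficient c_{m,l}, for -L <= m, l <= L,
    following the ordering c_{-L,-L}, ..., c_{-L,L}, ..., c_{L,L}."""
    assert -L <= m <= L and -L <= l <= L
    return (m + L) * (2 * L + 1) + (l + L)

L = 3
assert idx(-L, -L, L) == 0                     # first entry of c^(L)
assert idx(L, L, L) == (2 * L + 1) ** 2 - 1    # last entry of c^(L)
```

The same mapping is reused for the entries of h_ω and for the rows and columns of M and A_ω.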
Once the ordering of the basis has been established, one can set the elements of the matrices
of coefficients M and A_ω according to Eq. (3.22) and Eq. (3.30), as well as the inhomogeneity f
on the RHS of Eq. (3.30). Explicit expressions for these matrices are not included here but left
to Appendix A. Instead, we just acknowledge that we have to solve the following linear systems
in matrix form:

    M^(L)(D, α) c^(L) = 0,    (3.32)
    A^(L)_ω(D, α; ω) h^(L)_ω = f^(L),    (3.33)
where we have emphasized the dependence of the entries of M and A_ω on the noise intensity D,
the stability parameter α and the frequency ω at which we evaluate the power spectrum. Note
that we only need to solve Eq. (3.32) once, so that we can use c to construct f. Eq. (3.33) is
then solved for h_ω at a set of discrete frequencies {ω_i}, i ∈ {1, . . . , N}, to obtain an approximation
to the power spectrum S(ω) in a given frequency window.
The truncation parameter L determines the size of the vectors and matrices introduced above.
In particular, the dimension of c^(L) and h^(L)_ω is (2L + 1)², while that of M^(L) and A^(L)_ω is
(2L + 1)² × (2L + 1)². Thus, even relatively small values of L can lead to very large matrices,
which poses serious numerical problems: on the one hand, if the matrix is large, storing all its
elements can consume a significant amount of memory; on the other hand, as the size of the
coefficient matrix grows, the number of operations required to solve a system of equations
numerically increases dramatically. Using efficient sparse matrix methods, which is justified since
our matrices contain very few non-zero entries, can help to partially overcome these problems.
In particular, the issue of the high dimensionality of the problem becomes fundamental in the
low-noise regime of our system, where the probability distributions display very abrupt changes
on small spatial scales. To accurately describe such peaked distributions, one needs to include
modes e^{imy1} e^{ily2} with higher spatial frequencies |k| = √(m² + l²) in the expansions of P0(y) and
H̃(y; ω), which is equivalent to increasing L (see Figure 3.2). Thus, we will experience serious
limitations when applying this method in the weak-noise regime, even if using sparse methods.
The numerical implementation has been carried out in the Python programming language,
for which extensive linear algebra libraries are available, such as those in the numpy and scipy
packages. The use of sparse matrices and sparse diagonalization and linear-solver methods5,
implemented in the scipy.sparse.linalg library, has significantly reduced the computational
time and the memory demands of this method.
From the discussion above it is not clear why we had to use (sparse) diagonalization methods
at some point. In fact, they are not strictly necessary, since the present method requires only
solving two linear systems. However, let us recall that Eq. (3.32) is simply a truncated matrix
version of the stationary Fokker-Planck equation LFP P0(y) = 0, so that M^(L) is nothing but
the expression of LFP(y) in the basis of complex exponentials. Hence, solving for P0(y) (respec-
tively, c^(L)) is equivalent to determining the eigenvector associated to the eigenvalue λ = 0 of
LFP (respectively, M^(L)), provided we renormalize the eigenvector such that the normalization
condition Eq. (3.29) is satisfied. Calculating c^(L) in this way turns out to be convenient since it
allows us to simultaneously determine the first non-zero eigenvalues (i.e. the least negative ones)
of LFP, which provide insight into the oscillatory properties of the system. The price to pay for
obtaining extra eigenvalues is longer computation times.
Finally, it is clear that the resulting power spectrum S(ω) should be independent of the
truncation parameter L. In our implementation, we repeat the above procedure for different
values of L until S(ω) no longer changes (over the set of discrete frequencies {ω_i}) upon
increasing L, to within 1% precision. Our stopping criterion reads

    ε = max_i |S^(L)(ω_i) − S^(L′)(ω_i)| / S^(L′)(ω_i) ≤ 0.01,    (3.34)

where S^(L)(ω_i) is the power spectrum at ω_i obtained with truncation parameter L.
5
For reasons of efficiency it is recommended to avoid inverting matrices numerically (Heath, 2002) when
possible and to use linear solvers instead.
[Figure: log-log plot of the truncation parameter L versus D−1.]
Figure 3.2: Truncation parameter L of the full matrix approach as a function of the inverse of
the noise intensity D−1. The truncation parameter is such that the accuracy satisfies ε ≤ 0.01
according to Eq. (3.34).
Let us summarise for clarity the rather involved protocol described above:

1. Set up the sparse matrices M^(L)(D, α) and A^(L)_ω(D, α; ω) (see entries in Appendix A).
2. Solve Eq. (3.32) for c^(L) using a sparse diagonalization method.
3. Use c^(L) to construct f^(L) on the RHS of Eq. (3.33).
4. Solve Eq. (3.33) for h^(L)_{ω_i} at {ω_i}, i ∈ {1, . . . , N}.
5. Obtain S^(L)(ω_i) from h^(L)_{ω_i} using Eq. (3.31).
6. Check the precision using Eq. (3.34); if the test is not passed, increase L and
   repeat from step 1.
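The protocol can be condensed into a short scipy.sparse sketch. This is a reconstruction under stated assumptions, not the thesis code: the matrix entries follow Eq. (3.22) and Eq. (3.30) as reproduced above, and instead of a sparse diagonalization (step 2) the redundant m = l = 0 row is replaced by the normalization c_{0,0} = 1/π², which fixes the same solution; the spectrum is then read off from the (m, l) = (1, 0) coefficient of h_ω.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def idx(m, l, L):
    # flattened 0-based position of the (m, l) coefficient
    return (m + L) * (2 * L + 1) + (l + L)

def build_matrix(L, D, alpha, omega=0.0):
    """Truncated operator of Eq. (3.22); omega > 0 adds the -4i*omega shift of Eq. (3.30)."""
    N = (2 * L + 1) ** 2
    A = sp.lil_matrix((N, N), dtype=complex)
    for m in range(-L, L + 1):
        for l in range(-L, L + 1):
            r = idx(m, l, L)
            couplings = [
                (m - 1, l - 1, m - l), (m + 1, l - 1, m + l),
                (m + 1, l + 1, -(m - l)), (m - 1, l + 1, -(m + l)),
                (m - 2, l, 2 * alpha * m), (m, l - 2, 2 * alpha * l),
                (m + 2, l, -2 * alpha * m), (m, l + 2, -2 * alpha * l),
                (m, l, 4 * D * (m**2 + l**2) - 4j * omega),
            ]
            for mm, ll, v in couplings:
                if -L <= mm <= L and -L <= ll <= L:   # truncation: drop the rest
                    A[r, idx(mm, ll, L)] += -0.25 * v
    return A.tocsr()

def spectrum(omegas, L=6, D=0.5, alpha=0.1):
    N = (2 * L + 1) ** 2
    # stationary coefficients: replace the redundant (0,0)-row by c_{0,0} = 1/pi^2
    M = build_matrix(L, D, alpha).tolil()
    M[idx(0, 0, L), :] = 0.0
    M[idx(0, 0, L), idx(0, 0, L)] = 1.0
    b = np.zeros(N, dtype=complex)
    b[idx(0, 0, L)] = 1.0 / np.pi**2
    c = spla.spsolve(M.tocsr(), b)
    # inhomogeneity of Eq. (3.30)
    f = np.zeros(N, dtype=complex)
    for m in range(-L, L + 1):
        for l in range(-L, L + 1):
            cm = c[idx(m - 1, l, L)] if m > -L else 0.0
            cp = c[idx(m + 1, l, L)] if m < L else 0.0
            f[idx(m, l, L)] = ((2 * np.pi) ** 2 * c[idx(m, l, L)]
                               * (c[idx(1, 0, L)] - c[idx(-1, 0, L)])
                               - (cm - cp)) / 2j
    # solve A_omega h = f at each frequency; read off S from the (1,0) coefficient
    return np.array([-8.0 * np.pi**2
                     * np.imag(spla.spsolve(build_matrix(L, D, alpha, w), f)[idx(1, 0, L)])
                     for w in omegas])
```

At the moderately large noise intensity used here a small truncation already suffices; in the weak-noise regime L would have to grow as in Figure 3.2.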
Despite our having significantly optimized the implementation by using sparse matrix methods,
this method is still computationally demanding in the weak-noise regime, both because of the
large size of the matrices and because of the finer discretization required to resolve the sharp
peaks. A more efficient (though not necessarily more accurate) method to perform the same
task, the method of Matrix Continued Fractions, is presented next.
3.3.3. Solving by the matrix continued-fraction method
The method of Matrix Continued Fractions (MCF) is discussed at length in Chapters 9 and 11
of (Risken, 1984), whose presentation we partially follow. The MCF method takes advantage
of the structure of certain systems of linear equations in order to solve them more efficiently,
effectively reducing the size of the matrices involved in the computations. In particular, it is a
method to solve so-called tridiagonal vector recurrence relations

    Q⁻_n v_{n−1} + Q⁰_n v_n + Q⁺_n v_{n+1} = f_n,    (3.35)
which involve the matrices Q±_n, Q⁰_n and the vectors v_n; the relations can be one-sided (n ≥ 0)
or two-sided (n ∈ Z). Tridiagonal vector recurrence relations appear frequently when expanding
the solutions of partial differential equations (such as the Fokker-Planck equation) into complete
sets of functions.
The question is thus how we can cast the relations Eq. (3.22) and Eq. (3.30) satisfied by
the coefficients of P0(y) and ˜H(y; ω) into tridiagonal vector recurrence relations. This usually
requires some amount of analytical work, which is why the method of matrix-continued fractions
is often said to be semianalytical. We outline in the following how to obtain such relations (a
more systematic approach is presented in Appendix B) and describe the different methods of
solution, which differ for homogeneous (f_n = 0) and inhomogeneous recurrence relations.
Homogeneous recurrence relation for {c_{m,l}}  Let us recall Eq. (3.22), which can be rewritten
as

    0 = −(1/4) [ D⁻₁(m, l) (c_{m−2,l}, c_{m−1,l−1}, c_{m,l−2})^T
               + D⁰₁(m, l) (c_{m−1,l+1}, c_{m,l}, c_{m+1,l−1})^T
               + D⁺₁(m, l) (c_{m,l+2}, c_{m+1,l+1}, c_{m+2,l})^T ]    (3.36)

      ≡ −(1/4) [ D⁻₁(m, l) c_{m+l−2} + D⁰₁(m, l) c_{m+l} + D⁺₁(m, l) c_{m+l+2} ],  m, l ∈ Z,    (3.37)

where D±₁(m, l) and D⁰₁(m, l) are the row vectors

    D⁻₁(m, l) = (2αm, m − l, 2αl),
    D⁰₁(m, l) = (−(m + l), 4D(m² + l²), m + l),
    D⁺₁(m, l) = (−2αl, −(m − l), −2αm).
This form is very illustrative since it suggests grouping the coefficients c_{m,l} with m + l = constant
into vectors: note how c_{m+l−2}, c_{m+l} and c_{m+l+2} in Eq. (3.36) contain only coefficients whose
indices add up to m + l − 2, m + l and m + l + 2, respectively. Indeed, it turns out that by
extending c_{m+l} to

    c_{2n} = ( . . . , c_{n−l,n+l}, . . . , c_{n−1,n+1}, c_{n,n}, c_{n+1,n−1}, . . . , c_{n+l,n−l}, . . . )^T,    (3.38)

it is possible to cast our starting relation Eq. (3.36) into a homogeneous, tridiagonal vector
recurrence relation between c_{2(n−1)}, c_{2n} and c_{2(n+1)},

    Q⁻_{2n} c_{2(n−1)} + Q⁰_{2n} c_{2n} + Q⁺_{2n} c_{2(n+1)} = 0,  n ∈ Z.    (3.39)

Explicit expressions for the entries of Q±_{2n} and Q⁰_{2n}, which depend on the parameters D and α,
are given in Appendix B.
The k-th component (c_{2n})_k of the vector c_{2n} displayed in Eq. (3.38) can also be written as

    (c_{2n})_k = c_{n+k,n−k},  k ∈ Z.

Let us emphasize two key features of this relation: on the one hand, the vectors c_{2n} are
labelled by the sum m + l of the indices of the coefficients c_{m,l} they contain; on the other hand,
all “odd” vectors {c_{2k+1}} are identically 0, since they contain only “odd” coefficients c_{m,l} =
0, m + l = 2k + 1, k ∈ Z. What allows for such a useful rearrangement is the underlying
symmetry LFP(y1 + kπ, y2 ± kπ) = LFP(y1, y2), k ∈ Z, which has been discussed above.
The homogeneous, tridiagonal vector recurrence relation Eq. (3.39) can now be solved by the
MCF method. The trick is the following: we first introduce a “ladder” matrix S⁺_{2n} defined by

    c_{2(n+1)} = S⁺_{2n} c_{2n}.    (3.40)
This matrix connects “even” coefficients. The normalization condition and the symmetries of the Fokker-
Planck equation determine the entries of c₀, hence knowledge of the sequence of
matrices {S⁺_{2n}}, n ≥ 0, is sufficient to determine {c_{2n}}, n ≥ 0. In fact, even though Eq. (3.39) is
in principle two-sided, we can use the reflection symmetry leading to c_{−m,−l} = c_{m,l} to relate c_{2n}
and c_{−2n} as follows:
c_{−2n} ≡ (. . . , c_{−n−l,−n+l}, . . . , c_{−n,−n}, . . . , c_{−n+l,−n−l}, . . .)^T = (. . . , c_{n+l,n−l}, . . . , c_{n,n}, . . . , c_{n−l,n+l}, . . .)^T = U c_{2n},
where the transformation matrix U can be identified by inspection as
U = \begin{pmatrix}
0 & \cdots & 0 & 1 \\
\vdots & & 1 & 0 \\
0 & & & \vdots \\
1 & 0 & \cdots & 0
\end{pmatrix}, (3.41)
i.e. a rotated identity matrix that flips upside-down the components of the vectors upon which it acts.
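Numerically, U is just the identity matrix with its rows reversed; a two-line sketch (the dimension M is an arbitrary truncation chosen for illustration):

```python
import numpy as np

M = 7                        # arbitrary truncated vector dimension
U = np.eye(M)[::-1]          # anti-diagonal ("rotated") identity matrix

v = np.arange(M, dtype=float)
print(U @ v)                 # components flipped upside-down: [6. 5. 4. 3. 2. 1. 0.]

assert np.allclose(U @ v, v[::-1])
assert np.allclose(U @ U, np.eye(M))   # U is its own inverse
```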
Thus, given c_0 and {S^+_{2n}}, n ≥ 0, the whole sequence {c_{2n}}, n ∈ Z, can be determined.
The matrices S^+_{2n} can be obtained by inserting Eq. (3.40) into the recurrence relation, which yields

Q^−_{2n} c_{2(n−1)} + Q^0_{2n} S^+_{2(n−1)} c_{2(n−1)} + Q^+_{2n} S^+_{2n} S^+_{2(n−1)} c_{2(n−1)} = 0.
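Since this relation must hold for arbitrary c_{2(n−1)}, the ladder matrices obey (assuming the bracketed matrix is invertible) S^+_{2(n−1)} = −[Q^0_{2n} + Q^+_{2n} S^+_{2n}]^{−1} Q^−_{2n}, which can be iterated downward from a truncation S^+_{2N} = 0 at large N; this is the essence of the MCF scheme. A minimal numerical sketch, in which the Q matrices are random, well-conditioned placeholders (their actual entries, depending on D and α, are given in Appendix B):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 5, 40   # truncated vector dimension and continued-fraction depth

# Placeholder matrices Q^-_{2n}, Q^0_{2n}, Q^+_{2n}; a diagonally
# dominant Q^0 keeps the inversions well conditioned in this toy example.
Qm = [rng.normal(size=(K, K)) for _ in range(N + 1)]
Q0 = [rng.normal(size=(K, K)) + 10.0 * np.eye(K) for _ in range(N + 1)]
Qp = [rng.normal(size=(K, K)) for _ in range(N + 1)]

# Downward recursion for the ladder matrices, truncated with S^+_{2N} = 0:
# S^+_{2(n-1)} = -[Q^0_{2n} + Q^+_{2n} S^+_{2n}]^{-1} Q^-_{2n}
S = [np.zeros((K, K)) for _ in range(N + 1)]
for n in range(N, 0, -1):
    S[n - 1] = -np.linalg.solve(Q0[n] + Qp[n] @ S[n], Qm[n])

# Verify that the ansatz satisfies the tridiagonal recurrence Eq. (3.39)
# as a matrix identity, for some interior n:
n = 3
residual = Qm[n] + Q0[n] @ S[n - 1] + Qp[n] @ S[n] @ S[n - 1]
print(np.max(np.abs(residual)))   # zero up to floating-point round-off
```

In the actual calculation one would of course use the Q matrices of Appendix B and increase N and K until the low-order coefficients converge.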
  • 1. Stochastic oscillations and their power spectrum Masterarbeit von Jordi Giner-Bald´o, geb. am 23. 04. 1990, eingereicht beim Fachbereich Physik der Freien Universit¨at Berlin zur Erlangung des akademischen Grades Master of Science am 26. 01. 2016 Matrikelnummer: 4756284 Adresse: M¨uhsamstr. 36, 10249 Berlin Email: jorgibal@zedat.fu-berlin.de Externer Betreuer Prof. Dr. Benjamin Lindner, Institut f¨ur Physik, HU Berlin und BCCN Berlin Betreuer am Fachbereich: Prof. Dr. Roland Netz, Fachbereich Physik, FU Berlin Zweitgutachterin: Priv.-Doz. Dr. Stefanie Russ, Fachbereich Physik, FU Berlin
  • 2.
  • 3. Abstract Stochastic oscillations - also known as narrow-band fluctuations - are ubiquitous in biological systems. Their mathematical description is challenging: it often involves non-equilibrium and non-linear models subject to temporally correlated fluctuations. Measures that are often used to characterize stochastic oscillations are the autocor- relation function and the power spectrum. In this thesis we develop and apply analytical, semianalytical and numerical approaches to these measures that can pro- vide some insight on spectral quantities such as the frequency and the coherence of the oscillations, given by the quality factor. A number of methods have been used in the literature to model stochastic os- cillations. We briefly review some of these theoretical models before focusing on two specific instances of stochastic oscillators: an integrate-and-fire neuron driven by temporally correlated fluctuations, i.e. colored noise; and the noisy heteroclinic oscillator introduced by Thomas and Lindner (2014), a paradigmatic example of a system that oscillates only in the presence of noise. On the one hand, we study the ef- fect of two different types of colored-noise driving on the power spectrum of a perfect integrate-and-neuron using an analytical approach by Schwalger et al. (2015). The two noise models considered are a low-pass filtered noise modelled as an Ornstein- Uhlenbeck process and harmonic noise. On the other hand, we use numerical and semianalytical matrix methods to calculate the power spectrum of the noisy hetero- clinic oscillator. These methods are not accurate and/or efficient in the small noise limit, where the oscillations become slower and more coherent. In this limit, we provide an analytical approximation for the power spectrum based on the theory of two-state processes and existing results from the theory of random dynamical sys- tems for hyperbolic fixed points. 
The analytical approaches used in this thesis are based on the Fokker-Planck formalism. All the results are compared to stochastic simulations.
  • 4.
  • 5. Contents 1. Introduction 1 1.1. Motivation: noisy oscillations in biology . . . . . . . . . . . . . . . . . . . . . . . 1 1.2. Measures of stochastic oscillations . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3. Models of stochastic oscillations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.4. Models of noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.5. Aim and outline of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 2. Integrate-and-fire neuron driven by colored noise 13 2.1. Integrate-and-fire models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.2. Colored noise in neural systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.3. Analytical approach to colored noise . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.4. Results for a PIF model driven by colored noise . . . . . . . . . . . . . . . . . . . 16 3. Noise-induced oscillations in a heteroclinic system 23 3.1. General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 3.2. Approach to the power spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . 27 3.3. Solving the equations by matrix methods . . . . . . . . . . . . . . . . . . . . . . 29 3.4. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.5. Dichotomous approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 4. Summary and outlook 57 Appendices 59 A. Expansion of LFP into a basis 59 B. Tridiagonal recurrence relation 60 C. Expansion of the steady-state probability current 62 References 63
  • 6.
  • 7. 1. Introduction 1.1. Motivation: noisy oscillations in biology Noisy or stochastic oscillations are ubiquitous in biology. Examples include phenomena from a broad spectrum of fields. For example, the intracellular concentration of calcium oscillates, acting as as a signal that regulates cellular activity (Kummer et al., 2005); in the vestibular and auditory system, the transducing cells (hair bundles) show spontaneous mechanical oscillations (Martin et al., 2003); in neural systems, representative cases can be found both at the single- neuron level, where neurons often fire rather regular sequences of action potentials (Nawrot et al., 2007), and at the population level, in the form of α-, β- and γ- oscillations in the brain (Xing et al., 2012). (a) Mechano-sensory hair cells (b) Single neuron’s regular firing activity (c) Intracellular calcium oscillations (d) Brain activity: α-,β-,γ- oscillations. Figure 1.1: Some examples of noisy oscillations in biology. (a) Martin et al. (2003). (b) Nawrot et al. (2007). (c) Kummer et al. (2005). (d) Xing et al. (2012). The common feature to these oscillations is that their coherence is lost over long time scales, showing random fluctuations in both phase and amplitude. In the spectral domain, they are characterized by a preferred frequency band of spectral power (see Figure 1.2), hence they are sometimes termed narrow-band fluctuations (Stratonovich, 1963; Bauermeister et al., 2013). Stochastic oscillations pose important challenges from the theoretical point of view, because most stochastic oscillations observed in living organisms: (i) operate beyond thermodynamic equilibrium, (ii) are described by non-linear dynamical systems, (iii) are often subject to tempo- rally correlated fluctuations (in contrast with simpler uncorrelated, i.e. white noise). 
Therefore, new approaches to the problem are required in order to better characterize the autocorrela- tion function and power spectrum of these stochastic oscillations, quantities that can be easily accessed experimentally. 1
  • 8. 1 INTRODUCTION Figure 1.2: Time-evolution (A) and power spectrum (B) of the spontaneous mechanical oscil- lations of a hair bundle from the sacculus of the bullfrog’s inner ear. Note the peak in the power spectrum as a signature of stochastic oscillations. From (Martin et al., 2001). 1.2. Measures of stochastic oscillations Given a stationary stochastic process x(t), one can define its autocorrelation function (Gardiner, 2009) as Cxx(τ) = x(t)x(t + τ) − x(t) 2 , (1.1) where · denotes an average over an ensemble of realizations of x(t). The autocorrelation function quantifies how much two points of a trajectory which are lagged by an interval τ have in common. One can also look at second-order statistics in the Fourier domain: the power spectrum of the process x(t) is defined as (Gardiner, 2009) Sxx(f) = lim T→∞ ˜x(f)˜x∗(f) T , where ˜x(f) is the Fourier transform of a finite realization of x(t), i.e. ˜x(f) = T 0 dtei2πft x(t), and T is the time window of the realization. The power spectrum essentially quantifies how the variance (∆x)2 is distributed over frequencies. In simulations, T must be sufficiently long in order to provide a good approximation to the power spectrum. The autocorrelation funtion and the power spectrum are related via the Wiener-Khinchin theorem (Gardiner, 2009) Sxx(f) = +∞ −∞ dτei2πfτ Cxx(τ). (1.2) Stochastic oscillations are characterized by a more or less narrow peak at a nonzero frequency in the power spectrum, which indicates a preferred frequency band. The stochasticity of the oscillation translates into a loss of coherence over long time scales (Stratonovich, 1963; Bauer- meister et al., 2013) and therefore into the broadening of the peak. One must also address the question of how to quantify the coherence of such oscillations. Here we use a well-known measure from the theory of oscillators and resonators, the quality factor 2
  • 9. 1.3 Models of stochastic oscillations Q = fp ∆f , (1.3) where fp is the center frequency of the characteristic peak in the power spectrum and ∆f its bandwidth (full-width-at-half-maximum). This definition agrees with the intuition that narrower peaks lead to more coherent oscillations and therefore to higher quality factors, as seen in Figure 1.3. Figure 1.3: Quality factors Q of a narrow and a broad peak in the power spectrum. Narrower peaks lead to higher Q. 1.3. Models of stochastic oscillations 1.3.1. Harmonic oscillator A simple model that generates stochastic oscillations is a Brownian particle attached to a spring which oscillates in a fluid and is subject to thermal fluctuations. This system can be modelled as a damped harmonic oscillator driven by white Gaussian noise, i.e. the displacement x(t) of the particle with respect to its equilibrium satisfies the stochastic differential equation m¨x + γ ˙x + mω2 0x = 2γkBTξ(t), (1.4) where m is the mass of the particle, γ is the damping coefficient, ω0 is the frequency of the undamped oscillation, kB is Boltzmann’s constant, T is the temperature and ξ(t) is Gaussian white noise with the properties ξ(t) = 0 and ξ(t)ξ(t ) = δ(t − t ). The power spectrum Eq. (1.2) can be calculated analytically (see Section 1.4.2) for this simple linear system. It turns out that in the underdamped regime (ω0 > γ/2), the power spectrum displays a peak at a nonzero frequency, as illustrated in Figure 1.4. For a weakly-damped (ω0 γ/2) harmonic oscillator driven by thermal fluctuations the quality factor takes the simple form 3
  • 10. 1 INTRODUCTION 0 0.0001 0.0002 0 0.5 1 1.5 2 2.5 3 S(f) f Simulation Analytics Figure 1.4: Power spectrum Sxx(f) of the system described by Eq. (1.4). The analytics from Eq. (1.16) (solid lines) are compared to numerical simulations of Eq. (1.15) (dots). Note the distinctive peak in the power spectrum. Parameters: ω0 = 10, D = 0.01, γ = 1, Q ≈ 10. Q = Ω γ , (1.5) where Ω = ω2 0 − (γ/2)2 is the frequency of the damped oscillation. 1.3.2. Noise-perturbed self-sustained oscillators The simple linear model introduced in the previous subsection cannot capture all the features of the stochastic oscillations shown in Section 1.1, despite showing a narrow-band peak in the power spectrum. For example, the probability distribution of the position of the hair bundle is bimodal, i.e. it is far from being the characteristic Gaussian distribution expected from a linear system driven by Gaussian noise. This obstacle is relatively easy to circumvent: for instance, a damped Brownian particle in a bistable potential, whose equations of motion are non- linear (see e.g. (Anishchenko et al., 2006)), would be able to reproduce a bimodal probability distribution. However, other features observed in real biophysical systems, such as stability against perturbations of the amplitude, are more subtle and cannot be accounted for by simple versions of these damped oscillators driven by noise. It turns out that an appropriate class of dynamical systems that encompasses many oscillations observed in biophysics is that of self-sustained oscillators (brief but illuminating introductions to this topic can be found in (Pikovsky et al., 2003) and (Anishchenko et al., 2006)). Self- sustained oscillators are active systems that are capable of producing their own long-lasting rhythms without any external driving. This is possible due to an internal source of energy that compensates for dissipation in the system. 
A fundamental property of self-sustained oscillations is that their characteristics (e.g. amplitude, waveform, period, etc.) are completely determined by the internal parameters of the system and do not depend on the initial conditions. Self-sustained oscillations have a precise mathematical description in terms of non-linear au- tonomous dynamical systems with stable limit-cycle solutions. Limit cycles are closed curves in the phase space which are isolated, i.e. neighbouring trajectories are not closed but spiral away (unstable limit cycle) or towards the limit cycle (stable limit cycle). They lead to peri- odic trajectories and can occur only in at least two-dimensional non-linear dynamical systems 4
  • 11. 1.3 Models of stochastic oscillations (Izhikevich, 2007; Strogatz, 2001). To account for the stochasticity of the oscillations, the simplest approach is to add some white noise to deterministic dynamical equations containing limit-cycle solutions, which leads to noisy trajectories around the deterministic limit cycle, as seen in Figure 1.5 for a stochastic version of a prototypical self-sustained oscillator, the Van der Pol oscillator. The dynamics of such a system is governed by the second-order differential equation ¨x − µ(1 − x2 ) ˙x + x = √ 2Dξ(t), which can be rewritten as the system of first-order differential equations ˙x = y, ˙y = µ(1 − x2 )y − x + √ 2Dξ(t), (1.6) where √ 2Dξ(t) is Gaussian white noise with intensity D. If D = 0 this system contains a stable limit cycle for µ > 0 (Anishchenko et al., 2006). −5 0 5 x 0 20 40 60 80 100 t −5 0 5 y (a) (b) Figure 1.5: Comparison between the trajectories of a deterministic and a noise-perturbed self- sustained oscillator. The time evolution of the position x and the velocity ˙x = y of a Van der Pol oscillator Eq. (1.6) with µ = 1 are shown in (a). The color code is the same as in (b), where the same trajectories are displayed in the phase plane (x, y). After an initial transient the system converges to the limit cycle in the deterministic case (black line). Adding some noise to the system leads to noisy trajectories (red line) following closely the deterministic limit cycle. 1.3.3. Noise-driven excitable systems Excitable systems are a broad class of systems characterized by possessing a stable “rest” state and unstable “excited” (“firing”) and “refractory” states (Lindner et al., 2004). A strong enough external perturbation can force the system to leave the resting state and undergo a stereotypical excursion in phase space (see Figure 1.6b) through the firing and the refractory states before coming back to rest. 
The underlying mathematical description is a dynamical system close to a bifurcation to a limit-cycle. In the following we introduce the main ingredients of clas- sical excitability in a stochastic neuron model, the FitzHugh-Nagumo system. We follow the presentation of (Lindner et al., 2004). 5
  • 12. 1 INTRODUCTION A common form of the stochastic FitzHugh-Nagumo system is t ˙x = x − x3 − y, ˙y = γx − y + b + √ 2Dξ(t), (1.7) where x and y are a voltage-like and a recovery-like variable, respectively. In the neural context t 1, hence x can be regarded as a fast variable and y as slow variable. The system is driven by white Gaussian noise √ 2Dξ(t) of intensity D. The parameters b and γ determine the intersection between the x and y nullclines, i.e. the cubic curve and the straight line that can be observed in Figure 1.6b, respectively. In the excitable regime the intersection point is a stable fixed point on the left branch of the cubic nullcline, which corresponds to the resting state of the system. If unperturbed, the system stays at this stable fixed point. However, the central branch of the cubic nullcline acts here as an effective threshold: a sufficiently strong external perturbation can kick the system over this central branch leading to a large excursion of the state variables on the phase plane (“firing”, i.e. travel of the phase point through the regions labelled as “self-excitatory” and “active” in Figure 1.6b). After a refractory state, the system comes back to the resting state, where, if noise is present, it may be perturbed again to the firing state. In this way, a random sequence of action potentials or pulses is generated. Traces of these stochastic oscillations are shown in Figure 1.6a. −2 0 2 x D = 0.01 0 5 10 15 20 25 30 35 40 t −2 0 2 x D = 0.1 (a) (b) Figure 1.6: Sample trajectories of a noise-driven excitable system in the excitable regime for different values of the noise. (a) Sample time evolution of the voltage-like variable x of the FitzHugh-Nagumo system Eq. (1.7) for D = 0.1 (lower panel) and D = 0.01 (upper panel). 
(b) Trajectory for D = 0.01 in the phase plane (x, y) and x- (black dashed line) and y- (black solid line) nullclines; the stable resting state at the intersection of the two nullclines is also indicated (thick black dot). Far-reaching excursions in the phase plane correspond to spikes in the trace of x. The occurrence of spikes is more likely for larger values of the noise D as seen in (a). Relevant states of the system are indicated according to (Lindner et al., 2004). Other parameters: γ = 1.5, t = 0.01 and b = 0.6. 1.3.4. Integrate-and-fire models It is possible to reduce the dimensions in the description of a limit cycle by choosing an appro- priately defined phase variable (Pikovsky et al., 2003). The state of oscillation is then given by 6
  • 13. 1.3 Models of stochastic oscillations a single equation for the phase dynamics. Let us consider the so-called integrate-and-fire model (Burkitt, 2006b) describing the firing of spiking neurons ˙v = f(v) + η(t), if v(ti) = vth : v → vres, (1.8) where v is the membrane voltage of the neuron, η(t) is a stochastic process accounting for sources of noise in the system and f(v) is a function describing the voltage dynamics. The model is equipped with a fire-and-reset rule: when the voltage v hits a threshold vth, v is reset to vres and simultaneously the time ti at which this event occurred is registered. The system is then said to have fired an action potential or ”spike”. This is illustrated in Figure 1.7. It must be emphasized that the relevant output of the system is the sequence of spiking times {ti}. Figure 1.7: Stochastic integrate-and-fire neuron model. The voltage time-course of a leaky integrate-and-fire neuron driven by white noise is displayed in the lower panel. Whenever the voltage v(t) hits the threshold vth (red dashed line), a spike (black vertical arrow) is formally added to the output spike train x(t) at time ti (upper panel) and v is reset to the reset voltage vres (black dashed line). The noise leads to variability in the spiking times {ti}. In this setting the variable v can be regarded as a phase-like variable taking values from vres to vth in a circle, with the fire-and-reset rule connecting both ends of the line. The fire-and-reset rule is reminiscent of the firing and recovery states of an excitable system. Indeed, the stochastic integrate-and-fire model is a 1D caricature of a noise-driven excitable system (see Section 1.3.3) if there is a stable fixed point v < vth for which f(v) = 0. If, on the contrary, f(v) > 0, v < vth, the model can be regarded as a 1D caricature of a noise-perturbed limit cycle (see Section 1.3.2). 1.3.5. 
Noise-induced fluctuations in a heteroclinic system Underlying deterministic limit-cycle dynamics is not a necessary condition to obtain noisy limit- cycle behaviour. As an example, we discuss in the following a heteroclinic attractor (Krupa, 1997) perturbed by weak noise (Stone and Holmes, 1990; Bakhtin, 2010a). Let us consider the deterministic system 7
ẏ1 = cos(y1) sin(y2) + α sin(2y1),
ẏ2 = −sin(y1) cos(y2) + α sin(2y2), (1.9)

with α being a stability parameter. The system is 2π-periodic in y1 and y2, so let us focus on the central region [−π, π] × [−π, π]. The corresponding phase portrait is shown in Figure 1.8a. It contains a chain of four saddle points which are connected to each other by heteroclinic trajectories, forming what is known as a heteroclinic cycle (Shaw et al., 2012). If this heteroclinic cycle is attracting (α ∈ (0, 0.5)), trajectories that start in its interior tend to get closer and closer to the cycle, but with increasingly long return times (see Figure 1.8c). Hence the trajectory along the cycle has an "infinite" period and no well-defined oscillation emerges. However, let us now have a look at what happens when white noise of intensity D is added to the system:

Ẏ1 = cos(Y1) sin(Y2) + α sin(2Y1) + √(2D) ξ1(t),
Ẏ2 = −sin(Y1) cos(Y2) + α sin(2Y2) + √(2D) ξ2(t), (1.10)

where ξ1,2 are independent Gaussian white noise sources satisfying ⟨ξi(t)ξj(t′)⟩ = δ(t − t′)δij. Sample trajectories are shown in Figures 1.8b and 1.8d. We add reflecting boundary conditions on the domain −π/2 ≤ y1, y2 ≤ π/2 in order to "trap" the trajectory within the heteroclinic cycle. What we observe in Figure 1.8b is that the trajectory resembles what can be regarded as noisy limit-cycle behaviour. As in the deterministic case, the phase point tends to approach the heteroclinic cycle, but now the noise keeps "kicking" it away from it so that the oscillations are sustained (see Figure 1.8d). Thus, noise induces finite-period limit-cycle behaviour. Furthermore, the noise intensity D determines this mean period of oscillation along the cycle. The mean period of oscillation does not emerge from an underlying deterministic limit-cycle dynamics but from the sensitivity of the system to perturbations in the vicinity of the saddle points.
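A trajectory like the one in Figure 1.8b can be produced with a simple Euler-Maruyama scheme. The following sketch (illustrative, not the code used in the thesis; function names are my own, parameter values follow Figure 1.8) integrates Eq. (1.10) and folds the coordinates back into [−π/2, π/2]² to implement the reflecting boundaries:

```python
import numpy as np

def reflect(y, b):
    """Fold a coordinate that left [-b, b] back into the interval
    (reflecting boundary condition)."""
    if y > b:
        return 2.0 * b - y
    if y < -b:
        return -2.0 * b - y
    return y

def simulate_heteroclinic(alpha=0.1, D=0.01, dt=1e-3, T=100.0, seed=1):
    """Euler-Maruyama integration of the noisy heteroclinic system, Eq. (1.10),
    with reflecting boundaries on [-pi/2, pi/2]^2 trapping the trajectory
    inside the heteroclinic cycle."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    b = np.pi / 2
    s = np.sqrt(2.0 * D * dt)   # standard deviation of one noise increment
    traj = np.empty((n, 2))
    y1, y2 = 0.5, 0.0           # initial condition inside the cycle (illustrative)
    for i in range(n):
        f1 = np.cos(y1) * np.sin(y2) + alpha * np.sin(2.0 * y1)
        f2 = -np.sin(y1) * np.cos(y2) + alpha * np.sin(2.0 * y2)
        y1 = reflect(y1 + f1 * dt + s * rng.standard_normal(), b)
        y2 = reflect(y2 + f2 * dt + s * rng.standard_normal(), b)
        traj[i] = y1, y2
    return traj

traj = simulate_heteroclinic()
```

Plotting `traj[:, 0]` against time reproduces the qualitative picture of Figure 1.8d: sustained, noise-induced oscillations that slow down near the saddles.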
A similar phenomenon of selection of time scales is found in homoclinic attractors perturbed by weak noise (Stone and Holmes, 1990) and in 2D systems close to a saddle-node bifurcation (Sigeti and Horsthemke, 1989).

1.4. Models of noise

Theoretical studies often assume that the noise ξ(t) present in the system is temporally uncorrelated (white noise) (van Kampen, 2007), i.e.

⟨ξ(t)ξ(t′)⟩ = δ(t − t′). (1.11)

This translates in the frequency domain into a flat power spectrum Sξξ(f) = 1, i.e. the noise contains the same power at all frequencies. The use of white noise simplifies the analytical description of the system (Gardiner, 2009) and it is a good approximation in certain cases. However, noise in nature always has some non-zero finite correlation time (such that the autocorrelation function acquires some non-trivial temporal structure) and one needs to consider instead colored (temporally correlated) noise, whose power spectrum is not flat. Two examples of (Gaussian) colored noise are explored in this thesis: low-pass filtered noise generated by an Ornstein-Uhlenbeck process (OUP) and harmonic noise (HN). Examples of their power spectra
Figure 1.8: Noise induces oscillations in a system (b,d) which does not show limit-cycle behaviour in the deterministic case (a,c). (a) Phase portrait of the deterministic system governed by Eq. (1.9). A trajectory of a phase point is shown in blue: it passes near the four distinct saddle points (thick black dots), slowing down progressively as it gets closer to the stable heteroclinic cycle. From (Shaw et al., 2012). (b) Sample trajectory over the phase plane (y1, y2) for the system described by Eq. (1.10) with reflecting boundary conditions; the initial condition y0 and the sense of rotation are indicated. The trajectory wiggles close to the deterministic heteroclinic cycle, showing noise-induced limit-cycle behaviour. (c) Slowing transient y1(t) of the deterministic system. (d) Time evolution y1(t) of the noise-induced oscillation. Parameters: α = 0.1 (all panels), D = 0.01 in (b) and (d).

are shown in Figure 1.9 together with that of white noise. The study of colored-noise driven systems is theoretically challenging (Hänggi and Jung, 1994).
Figure 1.9: Power spectrum S(ω) of white noise (black) vs two instances of colored noise: narrow-band fluctuations (blue) and low-pass noise (red).

1.4.1. Ornstein-Uhlenbeck process

The Ornstein-Uhlenbeck process (OUP) (Uhlenbeck and Ornstein, 1930) is the Gaussian, zero-mean stochastic process v(t) defined by the equation

τc v̇ = −v + √(2σ²τc) ξ(t),

where τc is the correlation time of the process, σ² is the variance and ξ(t) is Gaussian white noise as defined in Section 1.3.1. The intensity of the noise is related to both the correlation time and the variance, and reads DOU = σ²τc. The OUP was originally introduced to describe the velocity v of a 1D Brownian particle and it is one of the milestones of statistical mechanics. The essential mathematical features of the OUP are that it is described by a Langevin equation which is linear in v (a property that can easily be generalised to extend the OUP to higher dimensions, see e.g. (Risken, 1984)) and that the coefficient D characterizing the strength of the noise does not depend on v, i.e. the noise is additive. As a consequence of these mathematical properties, the OUP is essentially the only process which is stationary, Gaussian and Markovian, as stated by Doob's theorem (van Kampen, 2007). Moreover, its autocorrelation function C(τ) and power spectrum S(f) are well known and read

Cηη(τ) = σ² exp(−|τ|/τc), (1.12)

and

Sηη(f) = 2σ²τc / [1 + (2πfτc)²]. (1.13)

In other words, the OUP displays exponential temporal correlations with timescale τc. By the Wiener-Khinchin theorem, Eq. (1.2), the power spectrum is therefore a Lorentzian function centered at f = 0. It is also convenient to define a cut-off frequency fcut-off as the frequency at which the power spectrum has decayed to half of its value at f = 0, i.e. S(fcut-off) = S(f = 0)/2. This leads to the simple expression
fcut-off = 1/(2πτc),

which quantifies a characteristic range of frequencies involved in the process.

1.4.2. Harmonic noise

In the following we call harmonic noise (Schimansky-Geier and Zülicke, 1990) a Gaussian, zero-mean stochastic process x(t) that is governed by the linear dynamical equation

ẍ + γẋ + ω0²x = √(2D) ξ(t), (1.14)

where γ is the friction coefficient, D is the intensity of the driving noise, ω0 is the frequency of the undamped oscillation and ξ(t) is the usual Gaussian white noise. Eq. (1.14) can also be rewritten as a system of Langevin equations

ẋ = y,
ẏ = −γy − ω0²x + √(2D) ξ(t), (1.15)

so that the joint process {x(t), y(t)} is Markovian. The study of white-noise driven damped harmonic oscillators goes back to the early works of Chandrasekhar (1943) and Wang and Uhlenbeck (1945), in which analytical expressions for the underdamped regime, i.e. ω0 > γ/2, were obtained both for the power spectrum (using Rice's method),

Sxx(f) = 2D / { (2πfγ)² + [(2πf)² − ω0²]² }, (1.16)

and for the autocorrelation function,

Cxx(τ) = [D/(ω0²γ)] exp(−(γ/2)|τ|) [ cos(Ωτ) + (γ/(2Ω)) sin(Ω|τ|) ], (1.17)

where Ω = √(ω0² − (γ/2)²) ≥ 0. This last expression is obtained by calculating the Fourier transform of S(f) according to the Wiener-Khinchin theorem, Eq. (1.2), which requires contour integration methods. From Eq. (1.17) we can deduce the variance of the process, σ²HN = Cxx(0) = D/(ω0²γ). In the underdamped regime, harmonic noise is a model of stochastic oscillations (with quality factor given by Eq. (1.5) in the weakly damped limit), the power spectrum exhibiting two peaks at ωp = 2πfp = ±√(ω0² − γ²/2) and a local minimum at f = 0. This contrasts with the Ornstein-Uhlenbeck case, where the power spectrum decays monotonically to 0 with a cut-off frequency determined by the inverse of the correlation time of the noise.

1.5. Aim and outline of the thesis

The theoretical treatment of stochastic oscillations is challenging.
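The harmonic-noise expressions above, Eqs. (1.16) and (1.17), can be cross-checked numerically via the Wiener-Khinchin theorem. The sketch below (illustrative parameter values, not part of the thesis) evaluates both formulas, locates the spectral peak at ωp² = ω0² − γ²/2, and verifies that the Fourier transform of Cxx reproduces Sxx:

```python
import numpy as np

# Cross-check of Eqs. (1.16) and (1.17); parameter values are illustrative
# and chosen in the underdamped regime, omega0 > gamma/2.
D, gamma, omega0 = 0.1, 0.5, 2.0
Omega = np.sqrt(omega0**2 - (gamma / 2)**2)   # frequency of the damped oscillation

def S_xx(f):
    """Power spectrum of harmonic noise, Eq. (1.16)."""
    w = 2 * np.pi * f
    return 2 * D / ((w * gamma)**2 + (w**2 - omega0**2)**2)

def C_xx(tau):
    """Autocorrelation function of harmonic noise, Eq. (1.17)."""
    t = np.abs(tau)
    return (D / (omega0**2 * gamma)) * np.exp(-0.5 * gamma * t) * (
        np.cos(Omega * t) + gamma / (2 * Omega) * np.sin(Omega * t))

# the spectral peak sits at omega_p^2 = omega0^2 - gamma^2/2
f = np.linspace(0.0, 1.0, 20001)
f_peak = f[np.argmax(S_xx(f))]

# numerical Fourier transform of C_xx reproduces S_xx (rectangle rule;
# the integrand has decayed to ~1e-9 of C(0) at tau = 80)
dtau = 1e-3
tau = np.arange(0.0, 80.0, dtau)
S_num = 2.0 * dtau * np.sum(C_xx(tau) * np.cos(2 * np.pi * f_peak * tau))
```

The same cross-check applies to the Ornstein-Uhlenbeck pair, Eqs. (1.12) and (1.13), with the Lorentzian replacing the double-peaked spectrum.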
The aim of this thesis is to develop and apply analytical, semianalytical and numerical methods to describe the power spectrum and the autocorrelation function of such oscillations. Within the enormous landscape of models available, we focus on two specific types of non-linear stochastic oscillators. Each
model allows us to explore the effect of a different aspect of the noise on the output of the system. In Section 2 an integrate-and-fire model driven by temporally correlated noise is studied. Two different types of noise are discussed separately there: low-pass filtered noise, modelled as an Ornstein-Uhlenbeck process; and harmonic noise, which corresponds to the interesting case of a non-linear stochastic oscillator driven by a stochastic oscillation. The effect of the parameters characterizing those driving fluctuations on certain features of the output power spectrum of the spike train will be investigated by numerical simulations and analytical formulae. In Section 3 we study a paradigmatic example of a heteroclinic system that only displays oscillations in the presence of noise, as introduced in Section 1.3.5. The system is investigated through a numerical and a semianalytical technique (the method of Matrix Continued Fractions), from which we obtain results in a relevant noise regime for the steady-state distribution, the steady-state probability current and the power spectrum. Useful measures that characterize the spectral features of the system are extracted from the power spectrum. In the range of noise values where the previous techniques are not efficient, namely the small-noise limit, an analytical approximation for the power spectrum is developed.
2. Integrate-and-fire neuron driven by colored noise

Neurons, the fundamental components of the nervous system, are electrically excitable systems connected to each other in complex networks. Information is encoded and transmitted across these networks in the form of short electrical pulses (of approximately 100 mV in amplitude and a few ms in duration), also called "spikes" or action potentials. These pulses are generated when the electrical potential across the membrane of a neuron (referred to as the membrane potential) exceeds a certain threshold, which can occur even in the absence of any sensory stimulus due to various sources of noise influencing the system (spontaneous firing). While the subthreshold dynamics of the membrane potential is shaped by the inputs that the neuron receives from other neurons, the shape of an action potential is highly stereotyped and does not change as it propagates along the neuron. This suggests that information is encoded not in the form of the pulse itself, but rather in the number and timing of spikes (Rieke et al., 1999). A sequence of such stereotyped events is called a spike train (Gerstner and Kistler, 2002). The biophysical mechanism of generation of an action potential in a single neuron is well captured by conductance-based models such as the Hodgkin and Huxley model and its two-dimensional simplifications, e.g. the FitzHugh-Nagumo model (Izhikevich, 2007; Gerstner and Kistler, 2002). However, these non-linear, higher-dimensional models are very difficult to treat analytically and therefore often not convenient for making predictions about the behaviour of the system. Hence, the problem calls for simpler models that can still capture realistic features of neural behaviour. One of the most successful models in that respect is the integrate-and-fire model (Burkitt, 2006a; Gerstner and Kistler, 2002; Brette, 2015).
This phenomenological model has been useful for understanding certain aspects of single-neuron coding (Vilela and Lindner, 2009), and its simplicity also makes it a very popular choice in numerical and analytical network studies, see e.g. (Brunel and Hakim, 1999; Brunel, 2000; Wieland et al., 2015). A very detailed account of the physiology of neurons can be found, for example, in the book by Kandel et al. (2000). In this section we investigate the effect of temporally correlated noise on the autocorrelation function and the power spectrum of a perfect integrate-and-fire (PIF) neuron. After a brief introduction to the PIF model, we discuss the role of colored noise in neural systems and present an analytical approach introduced by Schwalger et al. (2015) for a PIF neuron driven by weak Gaussian colored noise. We apply this approach to two different types of colored-noise driving: an Ornstein-Uhlenbeck process and harmonic noise. The results are then compared to stochastic simulations.

2.1. Integrate-and-fire models

In integrate-and-fire models the state of the neuron is solely characterized by its membrane potential (denoted here as v), i.e. they are one-dimensional models. In this respect, the spatial extension of the neuron is neglected and one effectively deals with a point neuron. The second fundamental simplifying assumption is based on the fact that the shapes of action potentials are stereotypical and that their timing is what matters, which allows us to focus on the subthreshold dynamics of the membrane potential. Indeed, in integrate-and-fire models the biophysical mechanism of spike generation (related to the activation/inactivation of voltage-gated ion channels) is neglected, so that action potentials are not dynamically generated by the model but rather added ad hoc as formal events at a certain firing time ti to an output
spike train x(t) according to a fire-and-reset rule: whenever v crosses a threshold voltage vT, a spike is generated and the voltage is reset to a value vR. The subthreshold dynamics continues after a certain refractory period τref (τref = 0 throughout this thesis). Such a fire-and-reset rule introduces a very strong non-linearity in the system, but allows for a reduction of the dimensionality of the dynamics. The equation describing a general (noisy) integrate-and-fire model is

v̇ = f(v) + η(t), if v = vT : v → vR,

where η(t) is a stochastic process accounting for sources of noise in the system (see Section 2.2) and f(v) is a function describing the subthreshold dynamics that can be extracted from experimental data by the method of dynamic I-V curves (Badel et al., 2008). The function f(v) determines the particular type of integrate-and-fire model: common choices are f(v) = µ − v (leaky integrate-and-fire neuron; a voltage trace of such a model can be seen in Figure 1.7), f(v) = µ + v² (quadratic integrate-and-fire neuron) and the perfect integrate-and-fire (PIF) model, with f(v) = µ = const,

v̇ = µ + η(t), if v = vT : v → vR, (2.1)

where µ is the mean input current. Each of these models has a certain range of applicability (a comparison of the performance of some stochastic IF models is presented in (Vilela and Lindner, 2009)), and in particular the PIF model is the canonical choice to describe a so-called tonically firing neuron (such as some sensory cells with a high firing rate), in which the mean input current µ is so strong that the voltage dependence of the subthreshold dynamics can be neglected (Schwalger et al., 2013; Bauermeister et al., 2013). In that situation, the firing of the neuron is pacemaker-like and very regular.
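The fire-and-reset rule is straightforward to integrate numerically. As a minimal sketch (illustrative only, not the thesis code; function name and parameter values are my own), the following simulates the leaky case f(v) = µ − v driven by white noise and records the spike times, as in Figure 1.7:

```python
import numpy as np

def simulate_lif(mu=1.5, sigma=0.5, v_T=1.0, v_R=0.0, dt=1e-3, T=50.0, seed=0):
    """Euler-Maruyama integration of a leaky IF neuron, f(v) = mu - v, driven
    by white noise of strength sigma. Fire-and-reset rule: on v >= v_T a spike
    time t_i is recorded and v is reset to v_R (parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    sqdt = np.sqrt(dt)
    v = v_R
    spikes = []
    for i in range(int(T / dt)):
        v += (mu - v) * dt + sigma * sqdt * rng.standard_normal()
        if v >= v_T:
            spikes.append(i * dt)   # formal spike event added to the output train
            v = v_R                 # reset
    return np.array(spikes)

spikes = simulate_lif()
```

Swapping the drift line for `v += mu * dt + ...` or `v += (mu + v * v) * dt + ...` turns the same loop into the perfect or quadratic model, respectively.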
A remarkable feature of the PIF model is that, since the evolution of the membrane voltage between spikes does not depend on the voltage itself, the output firing rate can be shown to be always r0 = µ/(vT − vR), no matter what the temporal correlations of the noise η(t) are (Bauermeister et al., 2013). For the remainder of the chapter we will focus on the PIF model, Eq. (2.1). A detailed review of integrate-and-fire models can be found in (Burkitt, 2006b).

2.2. Colored noise in neural systems

The term η(t) in Eq. (2.1) models the influence of the noise on the dynamics of the neuron. Such a term is necessary since in vitro and in vivo recordings of neural spiking display high variability. This variability is not due to measurement noise, but is inherent to the neural system. In general, there are three main sources of noise influencing the membrane potential (Gerstner and Kistler, 2002): (i) channel noise arising from a finite population of ion channels due to the random nature of their opening/closing events; (ii) synaptic unreliability due to stochastic release of neurotransmitter; (iii) (quasi-)random arrival times of synaptic input. Source (i) is intrinsic to the neuron, whereas (ii) and (iii) are associated with external synaptic input. However, noisy integrate-and-fire neuron models do not clearly distinguish between intrinsic and external sources of noise, due to their representation of the neuron as a point neuron (Burkitt, 2006b), and all contributions are subsumed under the term η(t).
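The rate-invariance property r0 = µ/(vT − vR) is easy to verify in simulation. The sketch below (illustrative, not the thesis code) integrates a PIF neuron driven by an exponentially correlated input and measures the firing rate for a slow and a fast correlation time:

```python
import numpy as np

def pif_rate(mu=1.0, v_T=1.0, v_R=0.0, sigma=0.1, tau_c=10.0,
             dt=0.01, T=2000.0, seed=2):
    """Firing rate of a PIF neuron, Eq. (2.1), driven by an exponentially
    correlated (Ornstein-Uhlenbeck) input; the OU variable is advanced with
    its exact one-step propagator. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau_c)
    kick = sigma * np.sqrt(1.0 - a * a)
    eta, v, n_spikes = 0.0, v_R, 0
    for _ in range(int(T / dt)):
        v += (mu + eta) * dt
        eta = a * eta + kick * rng.standard_normal()
        if v >= v_T:
            v -= v_T - v_R          # fire-and-reset rule (keeps the overshoot)
            n_spikes += 1
    return n_spikes / T

# the measured rate equals mu/(v_T - v_R) regardless of tau_c
r_slow = pif_rate(tau_c=10.0)
r_fast = pif_rate(tau_c=0.1, seed=3)
```

Both rates come out close to µ/(vT − vR) = 1; the correlation time only redistributes the variability of the interspike intervals, not the mean rate.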
Focusing on η(t) as a model of the noisy synaptic input, theoretical studies have frequently assumed that it is Poissonian, i.e. uncorrelated in time. By using a diffusion approximation one can then model the synaptic input as Gaussian white noise, which simplifies considerably the analytical treatment of the problem. However, realistic synaptic inputs have temporal structure, due to a plethora of phenomena: bursting, refractoriness, etc. (Schwalger et al., 2015). Because the white-noise approximation cannot account for such temporal correlations, the study of colored noise (i.e. temporally correlated noise, see Section 1.3.4) in neural systems is currently an active topic of research for which analytical results are still required.

2.3. Analytical approach to colored noise

The analytical treatment of dynamical systems driven by colored noise presents various complications (Hänggi and Jung, 1994). In particular, the colored-noise driving renders these systems non-Markovian, so that standard techniques such as the Fokker-Planck approach cannot be directly applied. Nevertheless, for certain cases there are still several methods one can resort to, such as Markovian embedding (a comprehensive presentation can be found in (Hänggi and Jung, 1994; Luczka, 2005)). A sophisticated approach using a Markovian embedding has recently been put forward by Schwalger et al. (2015) to calculate the interspike-interval and higher-order interval statistics of a PIF neuron driven by weak Gaussian noise, i.e.

v̇ = µ + ση(t), if v(ti) = vT : v → vR,

where σ² is the variance of the input noise and η(t) is a zero-mean, unit-variance Gaussian process with autocorrelation function Cin(τ) and power spectrum Sin(f) = ∫₋∞⁺∞ dτ e^(i2πfτ) Cin(τ). The approach introduced by Schwalger et al.
(2015) can account for many different types of colored-noise driving η(t), as long as its correlation function can be approximated by a sum of damped oscillations (or, in the frequency domain, if the power spectrum can be represented by a sum of Lorentzian functions). This condition is satisfied by a large class of processes, and in particular by the two that are studied in this thesis: an Ornstein-Uhlenbeck process and harmonic noise. Finally, the assumption that the noise must be weak is expressed in terms of a small parameter ε = σ/µ ≪ 1. The main result of (Schwalger et al., 2015) is an explicit formula for the n-th order interval¹ densities Pn(t),

Pn(t) = r0 / (2√(4πε²h³(t))) · exp[−(r0t − n)²/(4ε²h(t))] · { [(n − r0t)g(t) + 2h(t)]² / (2h(t)) − ε²[g²(t) − 2h(t)Cin(t)] }, (2.2)

where r0 = µ/(vT − vR) is the mean firing rate of a PIF neuron and g(t) and h(t) are given by

g(t) = r0 ∫₀ᵗ dt′ Cin(t′) (2.3)

and

¹ n-th order intervals are sums of n subsequent interspike intervals. Details on these and other statistics of neural output can be found in (Gabbiani and Koch, 1998).
h(t) = r0 ∫₀ᵗ dt′ g(t′). (2.4)

In other words, Eq. (2.2) gives the n-th order interval densities of the output spike train from knowledge of only the normalized autocorrelation function of the weak input noise, Cin(t), and some integrals over it, g(t) and h(t), which can be calculated analytically if Cin(τ) is simple enough². The density Pn(t) can be related to the autocorrelation function of the output point process by using (Cox and Lewis, 1966; Gabbiani and Koch, 1998)

Cout(τ) = r0 [ δ(τ) + Σₙ₌₁^∞ Pn(|τ|) − r0 ]. (2.5)

After substituting Eq. (2.2) into the above equation, an exact expression for the autocorrelation function of a PIF neuron driven by a weak Gaussian process with arbitrary temporal structure is obtained.

Ornstein-Uhlenbeck noise. Substituting Eq. (1.12) into Eq. (2.3) and Eq. (2.4) we find

gOU(t) = r0τc [1 − exp(−t/τc)],
hOU(t) = r0²τc² [ t/τc + exp(−t/τc) − 1 ], (2.6)

which are required to compute the autocorrelation function of the output spike train of a PIF neuron driven by colored noise, as described above.

Harmonic noise. Using Eq. (1.17), we can proceed as in the Ornstein-Uhlenbeck case and calculate the functions

gHN(t) = r0/(Ω² + (γ/2)²) · { γ + exp(−(γ/2)t) [ (sin(Ωt)/Ω)(Ω² − (γ/2)²) − γ cos(Ωt) ] }, (2.7)

and

hHN(t) = r0²/(Ω² + (γ/2)²) · { γt + [Ω² − (3/4)γ²]/[Ω² + (γ/2)²] − exp(−(γ/2)t)/[Ω² + (γ/2)²] · [ (Ω² − (3/4)γ²) cos(Ωt) + ((3/2)γΩ² − (γ/2)³) sin(Ωt)/Ω ] }. (2.8)

2.4. Results for a PIF model driven by colored noise

For convenience we have chosen µ = 1, vT = 1 and vR = 0, which leads to a firing rate of r0 = 1. The accuracy of the analytical approximations is controlled by the small parameter ε = σ/µ, where σ is the standard deviation of the input noise and its square, the variance σ², is a parameter in our simulations.

² Note that Cin(τ) is the autocorrelation function of a unit-variance stochastic process, i.e. Cin(0) = 1. One must take such a normalization into account when calculating Eq. (2.4) and Eq. (2.3).
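The closed forms in Eqs. (2.6)-(2.8) can be checked against direct numerical integration of Eqs. (2.3) and (2.4). The following sketch (illustrative parameter values; Ω is written as W) does so for both the Ornstein-Uhlenbeck and the harmonic-noise input:

```python
import numpy as np

def cumtrapz0(y, dt):
    """Cumulative trapezoidal integral of y(t), equal to 0 at t = 0."""
    out = np.empty_like(y)
    out[0] = 0.0
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * dt
    return out

r0, dt = 1.0, 1e-3
t = np.arange(0.0, 20.0, dt)

# Ornstein-Uhlenbeck input: C_in(t) = exp(-t/tau_c)
tau_c = 10.0
C_ou = np.exp(-t / tau_c)
g_ou = r0 * tau_c * (1.0 - np.exp(-t / tau_c))                       # Eq. (2.6)
h_ou = r0**2 * tau_c**2 * (t / tau_c + np.exp(-t / tau_c) - 1.0)     # Eq. (2.6)

# harmonic-noise input: C_in(t) = exp(-gamma t/2)[cos(W t) + gamma/(2W) sin(W t)]
gamma, W = 0.5, 2.0                 # W stands for Omega; values are illustrative
A = W**2 + (gamma / 2)**2
E = np.exp(-0.5 * gamma * t)
C_hn = E * (np.cos(W * t) + gamma / (2 * W) * np.sin(W * t))
g_hn = r0 / A * (gamma + E * ((W**2 - (gamma / 2)**2) * np.sin(W * t) / W
                              - gamma * np.cos(W * t)))              # Eq. (2.7)
h_hn = r0**2 / A * (gamma * t + (W**2 - 0.75 * gamma**2) / A
                    - E / A * ((W**2 - 0.75 * gamma**2) * np.cos(W * t)
                               + (1.5 * gamma * W**2 - (gamma / 2)**3)
                               * np.sin(W * t) / W))                 # Eq. (2.8)
```

Applying `cumtrapz0` once to C_in reproduces g(t)/r0, and applying it to g(t) reproduces h(t)/r0, in both cases to within the trapezoidal discretization error.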
For each type of noise driving the system, an approximation to the analytical autocorrelation function has been calculated by truncating the infinite sum in Eq. (2.5) after N terms. The result has then been (numerically) Fourier transformed in order to obtain a semianalytical approximation for the power spectrum. A truncation parameter of N = 100 is sufficient to reproduce accurately the relevant part of the autocorrelation function and the first peak of the power spectrum for all the cases explored here. The low frequencies, on the contrary, are related to the long-time behaviour of the autocorrelation function, which is affected by the truncation. We therefore choose a high truncation parameter, N = 10000, such that we can compare the power spectrum over the whole frequency range available from the stochastic simulations.

2.4.1. PIF neuron driven by Ornstein-Uhlenbeck noise

Most of the analytical results found in the literature concerning colored-noise driving in neurons involve exponentially correlated noise (Schwalger et al., 2015), which can be generated by an OUP at the cost of adding one degree of freedom to the dynamics (Markovian embedding). However, analytical tractability is not the only reason to choose exponentially correlated noise to drive the system: it turns out that in many cases filtered synaptic dynamics and slow intrinsic channel noise can be well approximated by an OUP with an adequate correlation time (Fisch et al., 2012). The dynamics of the PIF neuron driven by an OUP is governed by the following two-dimensional set of stochastic differential equations:

v̇(t) = µ + η(t), if v(ti) = vT : v → vR,
η̇(t) = −η(t)/τc + √(2σ²OU/τc) ξ(t). (2.9)

A sample trajectory is shown in Figure 2.1a. Large values of the input noise lead to an increased probability of firing, i.e. spikes are more closely spaced in time.
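Simulating Eq. (2.9) requires sampling the OU input; for this, the exact one-step propagator avoids time-discretization bias altogether. A minimal sketch (illustrative, not the thesis code):

```python
import numpy as np

def ou_process(sigma2=1.0, tau_c=1.0, dt=0.01, T=2000.0, seed=4):
    """Sample path of an OU process using the exact one-step propagator
    eta_{k+1} = a eta_k + sqrt(sigma2 (1 - a^2)) xi_k, a = exp(-dt/tau_c),
    which is free of discretization bias (parameter values are illustrative)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    a = np.exp(-dt / tau_c)
    kicks = np.sqrt(sigma2 * (1.0 - a * a)) * rng.standard_normal(n)
    eta = np.empty(n)
    eta[0] = np.sqrt(sigma2) * rng.standard_normal()   # stationary start
    for k in range(1, n):
        eta[k] = a * eta[k - 1] + kicks[k]
    return eta

eta = ou_process()
var = eta.var()                              # should be close to sigma2
lag = 100                                    # lag of one correlation time tau_c
c1 = np.mean(eta[:-lag] * eta[lag:]) / var   # should be close to exp(-1), Eq. (1.12)
```

Feeding `eta` into the Euler update of the voltage equation in Eq. (2.9) yields the sample trajectories of Figure 2.1a.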
Figure 2.1: Illustration of the PIF neuron driven by (a) an OUP and (b) harmonic noise: in each case the upper panel shows a membrane voltage trace v(t) of a PIF model that yields a spike whenever it hits the threshold vT, and the lower panel shows a sample trajectory of the input noise (η(t) and x(t), respectively). Note the increased firing probability at higher values of the driving noise processes. Here the size of the spikes is arbitrarily set for illustration purposes.
Here the important parameters of the system are the correlation time τc, which tells us how slow the fluctuations of the input noise are; the variance of the noise, σ²; and the firing rate of the PIF neuron, r0, which has been set to 1, as explained above. What follows is a comparison between numerical simulations of Eq. (2.9) and (semi)analytical results obtained by implementing the formulas presented in Section 2.3. The effect of τc and σ² on the autocorrelation function and the power spectrum is explored. The coherence of the oscillations is quantified by the quality factor Q, which is measured on the first peak of the output power spectrum from the semianalytical results.

Effect of the variance σ²OU. Here the correlation time is fixed (and therefore also the cut-off frequency) to an intermediate value τc = 10 and several simulations are performed with increasing σ²OU. The aim is to observe at which point the analytical formulas cease to provide a good approximation to the autocorrelation function and the power spectrum obtained from the numerical simulations. Results for the autocorrelation function and the power spectrum are summarized in Figure 2.2. For relatively small σ²OU the system shows rather periodic firing, and peaks in the power spectrum at the firing rate r0 and its higher harmonics are observed.

Figure 2.2: Autocorrelation function (a) and power spectrum (b) of a PIF neuron driven by an OUP for different values of the variance of the OUP, σ²OU (0.001, 0.01, 0.1 and 1). The correlation time of the OUP has been set to τc = 10. Results from numerical simulations of Eq. (2.9) (dots) are compared to (semi)analytical results (solid lines) from the approach outlined in Section 2.3. The infinite sum is cut off after N = 10000 terms.
Other parameters: µ = 1.0 and vT − vR = 1.0, leading to r0 = 1.0.

Figure 2.2b shows that the analytical approximation breaks down for σ²OU = 1, which is in any case expected, since in principle it is only valid for weak noise. As the variance (and thus the noise intensity DOU = σ²OUτc) increases, more and more power is added in the low-frequency range, where the OUP contains most of its power. It is remarkable that the Lorentzian structure of the input noise is preserved in that range, an observation that was also made in (Middleton et al., 2003). At higher frequencies, increasing the variance broadens the peaks in the power spectrum
until it completely destroys those at the higher harmonics of the firing rate r0 = 1. The quality factor Q of the stochastic oscillations decreases as σ²OU is increased, as seen in Figure 2.4a. Notably, the decrease seems to follow a power law over the range of noise variances explored.

Effect of the correlation time τc. Here the variance of the noise is fixed to σ²OU = 0.01, a value at which the analytical approximations are still expected to reproduce well the results of the numerical simulations (see above). Several simulations are performed for increasing τc, and the results are summarized in Figure 2.3.

Figure 2.3: Autocorrelation function (a) and power spectrum (b) of a PIF neuron driven by an OUP for different values of the correlation time of the OUP, τc (0.1, 1, 10 and 100). The variance of the OUP has been set to σ²OU = 0.01. Results from numerical simulations of Eq. (2.9) (dots) are compared to (semi)analytical results (solid lines) from the approach outlined in Section 2.3. The infinite sum is cut off after N = 10000 terms. Other parameters: µ = 1.0 and vT − vR = 1.0, leading to r0 = 1.0.

Different τc lead to different cut-off frequencies, which can be observed in the low-frequency range of the power spectrum, where the system preserves the spectral structure of the noise. Notably, increasing τc leads to less regular spike trains only for correlation times smaller than the mean interspike interval of the system (here I = 1/r0 = 1). This becomes evident when we look at the dependence of the quality factor Q on the correlation time in Figure 2.4b.
This "saturation" in the quality of the oscillations can also be noticed in the number of peaks present in the power spectrum: whereas a large noise variance left only one peak in the power spectrum, for long correlation times the power spectrum seems to "saturate" and several peaks are still observed.

2.4.2. PIF neuron driven by harmonic noise

The second type of noise used to drive the PIF neuron is the so-called harmonic noise, which has already been discussed above. This setup corresponds to the interesting case of a non-linear stochastic oscillator, which generates narrow-band noise (in the sense that the power spectrum
of the output contains peaks) but which is also driven by narrow-band noise.

Figure 2.4: Dependence of the quality factor Q of the PIF neuron's output on (a) the variance σ²OU and (b) the correlation time τc of a driving Ornstein-Uhlenbeck noise. The quality factor has been extracted from the (semi)analytical results of Section 2.3. While Q seems to decrease as a power law as the noise variance is increased, for the correlation time a saturation is observed at timescales comparable to the mean interspike interval. Parameters: (a) τc = 10; (b) σ²OU = 0.01; remaining: µ = 1.0 and vT − vR = 1.0, leading to r0 = 1.0.

The combined system is described by the following set of stochastic differential equations:

v̇(t) = µ + x(t), if v(ti) = vT : v → vR,
ẋ(t) = y(t),
ẏ(t) = −γy(t) − ω0²x(t) + √(2D) ξ(t), (2.10)

where ξ(t) is the usual zero-mean white Gaussian noise and the output of the system is a collection of spike times {ti}. A sample realization of the process is shown in Figure 2.1b. The interplay between the two intrinsic frequencies present in the system, namely the mean firing rate of the PIF neuron, r0 = µ/(vT − vR), and the frequency of the damped oscillation of the harmonic noise, Ω = √(ω0² − (γ/2)²), leads to interesting non-linear effects in the power spectrum. In the simulations the relation between r0 and Ω has been parametrized by means of their frequency ratio

w = Ω/(2πr0). (2.11)

Apart from w, other parameters relevant to the simulations are the variance of the harmonic noise, σ²HN = D/(ω0²γ), the quality factor of the noise, Q = Ω/γ, and r0 = 1. For completeness we also provide the relations that allow us to determine ω0, γ and D (the intensity of the white Gaussian driving) given those parameters:

γ = 2πr0w/Q, ω0² = (2πr0w)² [1 + 1/(4Q²)].

From here, D = σ²HN ω0² γ.
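The parameter relations just given can be packaged as a small helper (illustrative, not the thesis code; w is read as Ω/(2πr0), the reading consistent with γ = 2πr0w/Q and Q = Ω/γ). The round trip (w, Q, σ²HN) → (γ, ω0², D) → Ω then reproduces the inputs exactly:

```python
import numpy as np

def hn_parameters(w, Q, sigma2, r0=1.0):
    """Map the simulation parameters (frequency ratio w = Omega/(2 pi r0),
    input quality factor Q = Omega/gamma, variance sigma2 = D/(omega0^2 gamma))
    to the harmonic-noise coefficients gamma, omega0^2 and D of Eq. (2.10)."""
    gamma = 2 * np.pi * r0 * w / Q
    omega0_sq = (2 * np.pi * r0 * w) ** 2 * (1 + 1 / (4 * Q ** 2))
    D = sigma2 * omega0_sq * gamma
    return gamma, omega0_sq, D

# values used in Section 2.4.2: w = 0.4, Q in {1, 20, 50}, sigma2_HN = 0.01
gamma, omega0_sq, D = hn_parameters(0.4, 20.0, 0.01)
Omega = np.sqrt(omega0_sq - (gamma / 2) ** 2)
```

Note that ω0² − (γ/2)² = (2πr0w)² holds identically under this parametrization, so Ω/(2πr0) recovers w without rounding-induced drift.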
Effect of the input quality factor Q. Because the PIF neuron driven by harmonic noise is an instance of a non-linear oscillator driven in turn by stochastic oscillations, we are particularly interested in the effect of the coherence of the input noise, characterised by Q, on the power spectrum and the autocorrelation function of the output spike train. We therefore fix w = 0.4 and σ²HN = 0.01, a reasonably small value for which the analytical expressions are still expected to reproduce the autocorrelation function accurately, as they do in the case of the PIF neuron driven by an OUP. Furthermore, in order to observe complex non-linear effects in the power spectrum, w should not be a simple ratio such as w = 1/2 (Bauermeister et al., 2013). The results are summarised in Figure 2.5. The striking feature of the power spectrum as compared to the OUP driving is that peaks do not occur solely at the firing rate r0 and its higher harmonics: the frequency of the harmonic noise, Ω, is also present, together with sidebands at r0 ± Ω/(2π) and some of their harmonics. Increasing the quality factor of the input reveals more peaks in the power spectrum and reduces the width of the existing ones, indicating an enhanced coherence of the output. Here the analytical expressions also reproduce quite accurately the results from the numerical simulations.

Figure 2.5: Autocorrelation function (a) and power spectrum (b) of a PIF neuron driven by harmonic noise for different values of the input quality factor Q (1, 20 and 50). The variance of the harmonic noise and the frequency ratio have been set to σ²HN = 0.01 and w = 0.4, respectively. Results from numerical simulations of Eq. (2.10) (dots) are compared to (semi)analytical results (solid lines) from the approach outlined in Section 2.3. The infinite sum is cut off after N = 10000 terms.
Other parameters: µ = 1.0 and v_T − v_R = 1.0, leading to r_0 = 1.0.

Comparison with an experimental model

Remarkably, the PIF neuron driven by harmonic noise seems to suit a specific experimental model very well: the peripheral electroreceptors of paddlefish (Wilkens et al., 1997). In these electroreceptors, a population of epithelial cells collectively generates spontaneous stochastic oscillations at around f_e = 25 Hz, which in turn drive a pacemaker-like oscillator in the peripheral terminals of afferent sensory neurons at
approximately twice the frequency (denoted by f_a). The power spectra we presented in Figure 2.5b (an annotated version is shown in Figure 2.6b) already seem to capture the main features of the experimental power spectrum of the afferent oscillations shown in Figure 2.6a, including the sidepeaks due to the non-linear interaction of the two fundamental frequencies, f_a and f_e. Indeed, a similar theoretical model was used in (Bauermeister et al., 2013), where the power spectrum was obtained by numerical simulations of a PIF neuron driven by harmonic noise together with some Ornstein-Uhlenbeck noise, and compared successfully to experimental data (Figure 2.6a). This is an example of how a relatively simple model can account for realistic features of neural sensory systems.

Figure 2.6: Comparison of the power spectrum of the oscillations in the afferent terminals of peripheral electroreceptors of paddlefish (a) with the power spectra for a PIF driven by harmonic noise (b). (a) A power spectrum of a representative paddlefish electroreceptor afferent measured experimentally (grey dots) is compared with a power spectrum obtained by numerical simulations (magenta line) of a PIF neuron driven by a combination of Ornstein-Uhlenbeck and harmonic noise. Adapted from (Bauermeister et al., 2013). (b) Power spectrum of a PIF neuron driven by harmonic noise (see Section 2.4.2) for different Q. Note the peaks at the firing rate of the PIF neuron, r_0, the driving frequency of the noise, Ω/(2π), as well as the sidebands generated due to their non-linear interaction.
3. Noise-induced oscillations in a heteroclinic system

The second system discussed in this thesis is a paradigmatic example of a system that only displays oscillations in the presence of noise. We use the term noisy heteroclinic oscillator to refer to such a system from now on. Its dynamics is governed by the pair of Langevin equations (Thomas and Lindner, 2014)

    ẏ₁ = cos(y₁) sin(y₂) + α sin(2y₁) + √(2D) ξ₁(t),
    ẏ₂ = −sin(y₁) cos(y₂) + α sin(2y₂) + √(2D) ξ₂(t),              (3.1)

together with reflecting boundary conditions on the domain −π/2 ≤ y₁, y₂ ≤ π/2. The processes √(2D) ξ₁,₂(t) are independent white Gaussian noise sources of intensity D satisfying ⟨ξ_i(t) ξ_j(t′)⟩ = δ(t − t′) δ_{i,j}. The parameter α determines the stability of the heteroclinic cycle of the deterministic dynamics. As an illustration of the dynamics of the system, Figure 1.8 shows a sample trajectory for weak-noise driving, where pronounced oscillations appear in the form of irregular clockwise rotations in the (y₁, y₂) plane. Details on the dynamics of the system have already been discussed above, in particular how the stochastic case differs from the deterministic one (D = 0) due to the presence of a noisy finite-period limit cycle when α ∈ (0, 1/2), for which the underlying heteroclinic cycle is stable. The period of oscillation is related to the intensity of the driving white Gaussian noise: the smaller the noise, the longer the period of the limit cycle (Shaw et al., 2012). For larger values of the noise, however, the oscillatory (though noisy) nature of the individual realizations is expected to be destroyed, leading to a loss of coherence of the oscillations and a broadening of the peak in the power spectrum. This noisy heteroclinic oscillator was motivated in Section 1.3.5 as one possible mechanism to obtain stochastic oscillations.
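The dynamics Eq. (3.1) can be explored with a minimal Euler-Maruyama sketch; the time step, the reflection rule at the walls, and the initial condition below are simple illustrative choices, not the integrator actually used in the thesis:

```python
import numpy as np

def simulate_heteroclinic(D=0.01, alpha=0.1, dt=1e-3, T=100.0, seed=0):
    """Euler-Maruyama integration of Eq. (3.1) with reflecting
    boundaries at y = +/- pi/2 (one reflection per step suffices
    for the small steps used here)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    xi = rng.standard_normal((n, 2))
    s = np.sqrt(2.0 * D * dt)
    y = np.empty((n, 2))
    y1, y2 = 0.3, 0.0   # arbitrary point inside the domain
    for k in range(n):
        dy1 = (np.cos(y1) * np.sin(y2) + alpha * np.sin(2 * y1)) * dt + s * xi[k, 0]
        dy2 = (-np.sin(y1) * np.cos(y2) + alpha * np.sin(2 * y2)) * dt + s * xi[k, 1]
        y1, y2 = y1 + dy1, y2 + dy2
        # reflect overshoots back into the domain
        if y1 > np.pi / 2: y1 = np.pi - y1
        if y1 < -np.pi / 2: y1 = -np.pi - y1
        if y2 > np.pi / 2: y2 = np.pi - y2
        if y2 < -np.pi / 2: y2 = -np.pi - y2
        y[k] = (y1, y2)
    return y

traj = simulate_heteroclinic()
```

For weak noise a plot of `traj` reproduces the irregular clockwise rotations described above.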
On top of that, it has some other appealing features, related to the fact that the deterministic dynamics contains (for a certain parameter range) a stable heteroclinic cycle. It turns out that systems containing such cycles are appropriate playgrounds to study the role of saddle points in controlling the timing of rhythmic behaviour (Shaw et al., 2012). This might be very useful to model rhythms in biological systems, which often show the robustness to perturbations characteristic of limit-cycle dynamics together with mechanisms of behavioural or metabolic control. Such mechanisms might lead to extended dwell times in localized regions of phase space, which are typical of trajectories passing close to heteroclinic trajectories connecting different saddle points. Due to these features, stable heteroclinic cycles have been used to model several phenomena, e.g. olfactory processing in insects (Rabinovich et al., 2008) and motor control in a marine mollusk (Varona et al., 2002); a more comprehensive list can be found in (Shaw et al., 2012). Although these aspects are not studied in this thesis, we hope they provide some insight into the versatility of this class of models in describing rhythmic behaviour. At the end of this chapter we derive an approximation for the small noise limit of the noisy heteroclinic oscillator. The study of the effect of small additive noise on systems possessing structurally stable heteroclinic cycles was originally motivated by turbulent layer models (Busse and Heikes, 1980; Stone and Holmes, 1989), where the addition of noise is responsible for physical phenomena such as intermittency and bursting. These works identified the fundamental role of small random perturbations in the neighbourhood of the hyperbolic fixed points of the system, which was further studied in e.g. (Kifer, 1981; Stone and Holmes, 1990; Stone and Armbruster,
1999). More general and rigorous results on the small noise limit of noisy heteroclinic networks are also available (Armbruster et al., 2003; Bakhtin, 2010a; Bakhtin, 2010b). In this chapter we derive a method to obtain the power spectrum of sin(y₁) for the noisy heteroclinic oscillator using the Fokker-Planck formalism. The resulting equations are then solved by two different numerical and semianalytical matrix methods, which are thoroughly discussed and whose performance is compared in terms of accuracy and efficiency. Results for the steady-state probability density, the steady-state probability current and the power spectrum are obtained, and the dependence of some spectral measures on the noise intensity is studied. Finally, we develop an approximation for the power spectrum in the small noise limit.

3.1. General considerations

In this section we briefly summarize the Fokker-Planck formalism and discuss some particularities we encounter when applying it to our model. Moreover, we recall a useful result on the small noise limit of systems containing heteroclinic cycles as presented in (Stone and Holmes, 1990). These are the building blocks for the results derived in the rest of the chapter, namely the calculation of the power spectrum of sin(y₁) using matrix methods for the Fokker-Planck equation and the dichotomous approximation of the power spectrum in the small noise limit.

Fokker-Planck formalism

Our analysis relies heavily on the Fokker-Planck formalism (Risken, 1984), which is justified because the driving noise in the Langevin equations Eq. (3.1) is white and Gaussian. The transition probability density P(y, t + τ|y′, t) for the stochastic process y(t) then satisfies the time-dependent Fokker-Planck equation

    ∂_τ P(y, t + τ|y′, t) = L_FP(y) P(y, t + τ|y′, t),             (3.2)

with initial condition P(y, t|y′, t) = δ(y − y′). The Fokker-Planck operator L_FP is determined from the Langevin equations Eq.
(3.1) and in this particular case reads

    L_FP = ∂_{y₁} (−cos(y₁) sin(y₂) − α sin(2y₁) + D ∂_{y₁})
         + ∂_{y₂} (sin(y₁) cos(y₂) − α sin(2y₂) + D ∂_{y₂}).       (3.3)

Because the Fokker-Planck operator L_FP(y) does not depend on time, we can express P(y, t + τ|y′, t) as (Risken, 1984)

    P(y, t + τ|y′, t) = P₀(y) + Σ_n ϕ_n(y) φ*_n(y′) e^{−λ_n τ},     (3.4)

where ϕ_n(y) and φ*_n(y) are, respectively, the eigenfunctions with eigenvalue λ_n ∈ C of L_FP and of its adjoint L†_FP, i.e.

    L_FP ϕ_n = −λ_n ϕ_n,    L†_FP φ*_n(y) = −λ_n φ*_n(y).           (3.5)

The eigenfunctions ϕ_n(y) and φ*_n(y) must satisfy appropriate boundary conditions. The eigenfunction associated with λ₀ = 0 is the unique stationary distribution P₀(y), such that lim_{τ→∞} P(y, t + τ|y′, t) = P₀(y).
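The structure of the expansion Eq. (3.4) can be illustrated with the simplest example at hand, a two-state Markov process, whose generator plays the role of L_FP (a toy illustration, not part of the thesis's model): the eigenvalue λ₀ = 0 yields the stationary distribution and the remaining eigenvalue sets the relaxation rate.

```python
import numpy as np

# Generator of a two-state Markov process with rates a (1 -> 2)
# and b (2 -> 1); its columns sum to zero, like a discrete L_FP.
a, b = 0.7, 0.3
Lgen = np.array([[-a,  b],
                 [ a, -b]])
lam, V = np.linalg.eig(Lgen)
order = np.argsort(-lam.real)        # put lambda_0 = 0 first
lam, V = lam[order], V[:, order]
# the eigenvector of lambda_0 = 0, normalized, is the stationary law
P0 = V[:, 0].real / V[:, 0].real.sum()
```

Here `lam[1] = -(a + b)` is the analogue of the nonzero eigenvalues −λ_n in Eq. (3.4), and `P0 = (b, a)/(a + b)` is the analogue of P₀(y).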
In Section 3.2 we present a method to determine the power spectrum of the observable f(y₁(t)) = sin(y₁(t)) for the noisy heteroclinic oscillator. This method requires solving Fokker-Planck-like equations. However, the Langevin equations Eq. (3.1) describe a non-potential system, i.e. we cannot find a scalar potential U(y) such that they can be written as ẏ = −∇U(y) + √(2D) ξ, where ξ is a vector whose components are independent white Gaussian noise processes. Solving non-potential systems analytically is not possible in general, and in the majority of cases we must resort to semianalytical and numerical methods. That is indeed what we do in Section 3.3, where the methods used take advantage of the fact that the system Eq. (3.1) with reflecting boundary conditions on the domain Ω = [−π/2, π/2] × [−π/2, π/2] is equivalent to a system governed by the same dynamics Eq. (3.1) but with periodic boundary conditions on Ω′ = [−π, π] × [−π, π], if an appropriate observable f(y₁(t)) is chosen. Under these conditions the eigenfunctions {ϕ_n(y), φ*_n(y)} are 2π-periodic in y₁ and y₂ due to the invariance of L_FP(y) under the transformations (y₁, y₂) → (y₁ + 2πk, y₂) and (y₁, y₂) → (y₁, y₂ + 2πk), k ∈ Z, i.e. independent 2π-translations in y₁ and y₂. In particular, a suitable observable f(y₁) for which the previous statement holds is f(y₁) = sin(y₁): on the one hand, this quantity takes the same values in Ω as in Ω′, i.e. the whole image sin(y₁) ∈ [−1, 1]; on the other hand, sin(y₁) preserves the reflection symmetry of the drift field about the lines y₁ = ±π/2. The combination of these two facts guarantees that trajectories of y₁ reflected at the boundaries of Ω and trajectories of y₁ periodic on Ω′ lead to identical trajectories of sin(y₁) and, hence, to the same statistics. It is in this sense that the two systems are equivalent. Of course, all the previous arguments would also apply to observables which are functions of y₂(t).
Sample trajectories of y₁(t), y₂(t) and sin(y₁(t)) are shown in Figure 3.1. There sin(y₁(t)) appears as a "smoothed out" version of y₁(t), because its non-linearity reduces the amplitude of the fluctuations occurring close to y₁ = ±π/2, i.e. in the neighbourhood of the saddle points, while leaving the transit regions practically unaffected. We also observe that the frequency of oscillation of sin(y₁(t)) is the same as that of y₁(t), which allows us to study the dependence on noise of the spectral measures of the underlying process y₁(t) through those of sin(y₁(t)). Last but not least, the methods we introduce later lead to a particularly simple form for the power spectrum of sin(y₁).

Small noise limit

The key feature of the system is that the presence of even small amounts of additive, temporally uncorrelated noise introduces a well-defined timescale, which does not exist in the deterministic case. This timescale emerges because the magnitude of the random and deterministic components is comparable in the neighbourhood of the saddle points, hence the diffusive action of the noise prevents the trajectories from getting "stuck" there for increasingly long periods of time. Nevertheless, the small perturbation does not significantly modify the structure of the vector field during the "jump" events, i.e. drifts following the heteroclinic connection closely. How the time spent in the neighbourhood of the saddle points (dwell time) depends on the noise intensity is therefore fundamental to characterise the spectral properties of the system. A theoretical analysis along these lines is performed in (Stone and Holmes, 1990), where the authors study these dwell times as first passage times in a finite neighbourhood of a saddle point embedded in a heteroclinic cycle. Linearizing the system at the saddle point and applying the Fokker-Planck formalism, they are able to derive a relation between the mean first
passage time τ_sp and the intensity of the noise in the small noise limit D ≪ δ²,

    τ_sp ≈ (1/λ_u) [ln(δ/√(2D)) + O(1)],                           (3.6)

where λ_u is the unstable eigenvalue of the linearization and δ quantifies the size of the neighbourhood studied. Such a logarithmic dependence has been reported several times in the literature, e.g. (Bakhtin, 2010b; Kifer, 1981).

Figure 3.1: Sample traces of the system's variables y₁(t) (black lines) and y₂(t) (red lines) and of the observable sin(y₁) (blue dots). Parameters: D = 0.01, α = 0.1.

The unstable and stable eigenvalues λ_u, λ_s are obtained by linearizing the deterministic velocity field at one of the saddle points, e.g. y₀ = (−π/2, +π/2), which leads to

    u̇₁ = (1 − 2α) u₁ ≡ λ_u u₁,
    u̇₂ = −(1 + 2α) u₂ ≡ −λ_s u₂,

where u = (u₁, u₂) = y − y₀. The unstable eigenvalue is λ_u = 1 − 2α, whereas the absolute value of the stable eigenvalue is λ_s = 1 + 2α; this turns out to be valid for the other saddle points as well. The fact that λ_s > λ_u for all α leading to a stable heteroclinic cycle implies that the distribution of trajectories leaving the neighbourhood of a saddle point (and therefore entering the next one) is centered on the heteroclinic connection (Bakhtin, 2010b). If we divide the phase plane of our system into four square regions, each one associated with a saddle point, we can use Eq. (3.6) to estimate the corresponding mean dwell times τ_{c,i}, i ∈ {1, ..., 4}. The mean period of oscillation T will then be the sum T = Σ_i τ_{c,i} = 4 τ_sp, where the last equality follows from symmetry arguments. In Section 3.5 we combine the results from (Stone and Holmes, 1990) with the theory of two-state processes to obtain an approximation of the power spectrum in the small noise limit.
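The resulting logarithmic growth of the period with decreasing noise can be sketched directly from Eq. (3.6); the neighbourhood size δ is an O(1) constant chosen here purely for illustration:

```python
import numpy as np

def mean_period_estimate(D, alpha, delta=1.0):
    """Leading-order estimate of the mean oscillation period,
    T = 4 * tau_sp with tau_sp ~ (1/lambda_u) ln(delta / sqrt(2 D)),
    from the Stone-Holmes dwell time, Eq. (3.6)."""
    lam_u = 1.0 - 2.0 * alpha          # unstable eigenvalue at the saddle
    return 4.0 / lam_u * np.log(delta / np.sqrt(2.0 * D))

# the period grows by (4 / lambda_u) * ln(10) / 2 per decade of D:
T1 = mean_period_estimate(D=1e-2, alpha=0.1)
T2 = mean_period_estimate(D=1e-4, alpha=0.1)
```

With α = 0.1 one has λ_u = 0.8, so reducing D by two decades (from 10⁻² to 10⁻⁴) lengthens the estimated period by (4/0.8) ln 10 ≈ 11.5.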
3.2. Approach to the power spectrum

Here an approach to calculate the power spectrum of the stochastic process sin(y₁,₂(t)) is presented. It shares some features with the one used in (Risken and Vollmer, 1982) to derive the susceptibility of Brownian motion in a cosine potential. Let us start with some generic manipulations concerning a (non-linear) transformation x = f(y_i), where y_i is one of the components of the two-dimensional stationary stochastic process y(t) with stationary probability distribution P₀(y₁, y₂) ≡ P₀(y). In our setting the evolution of y(t) is described by Eq. (3.1), whereas f(y_i) = sin(y_i). In order to proceed it is convenient to lighten the notation by focusing on one of the two variables, i.e. y_i ≡ y₁. The calculations for y_i ≡ y₂ would be carried out in exactly the same fashion. The autocorrelation function C_{f(y₁)f(y₁)}(τ) ≡ C_xx(τ) = ⟨x(t) x(t + τ)⟩ − ⟨x(t)⟩⟨x(t + τ)⟩ can be expressed as

    C_xx(τ) = ∫dy₁ ∫dy₁′ f(y₁) f(y₁′) P(y₁′, t; y₁, t + τ) − [∫dy₁ f(y₁) P(y₁, t)]²         (3.7)
            = ∫dy₁ ∫dy₁′ f(y₁) f(y₁′) [P(y₁′, t; y₁, t + τ) − P(y₁′, t) P(y₁, t + τ)].       (3.8)

Here P(y₁, t) and P(y₁′, t; y₁, t + τ) are, respectively, one- and two-time marginal distributions of y₁(t) of the joint process y(t) ≡ {y₁(t), y₂(t)}, i.e.

    P(y₁, t) = P(y₁, t + τ) = P₀(y₁) = ∫dy₂ P₀(y)

and

    P(y₁′, t; y₁, t + τ) = ∫dy₂′ ∫dy₂ P(y′, t; y, t + τ).           (3.9)

Moreover, P(y′, t; y, t + τ) can be related to the so-called transition probability density P(y, t + τ|y′, t) by

    P(y′, t; y, t + τ) = P(y, t + τ|y′, t) P₀(y′).                  (3.10)

Note that due to stationarity the dependence on the absolute time t is redundant in all the previous expressions, hence from now on we set t = 0 and indicate only the time delay τ when required. Substituting Eq. (3.9) and Eq. (3.10) into Eq. (3.7) we obtain

    C_xx(τ) = ∫dy₁ dy₁′ dy₂ dy₂′ f(y₁) f(y₁′) P₀(y′) [P(y|y′; τ) − P₀(y)].                  (3.11)

According to the Wiener-Khinchin theorem Eq. (1.2), in order to calculate the power spectrum we have to Fourier transform Eq.
(3.11), i.e.

    S_xx(ω) = ∫_{−∞}^{+∞} dτ e^{iωτ} C_xx(τ) = 2 Re[∫_0^{+∞} dτ e^{iωτ} C_xx(τ)],           (3.12)

where Re[·] denotes the real part of the argument and the last equality follows from the fact that C_xx(τ) is a real and even function. The power spectrum has thus been rewritten in terms of a one-sided Fourier transform of the autocorrelation function.
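The one-sided form of Eq. (3.12) can be checked numerically on a case with a known closed form: C(τ) = e^{−|τ|} has the Lorentzian spectrum S(ω) = 2/(1 + ω²). A minimal sketch (truncation length and grid are arbitrary choices):

```python
import numpy as np

# one-sided transform S(w) = 2 Re int_0^inf e^{i w tau} C(tau) dtau
# for C(tau) = exp(-|tau|), whose spectrum is 2 / (1 + w^2)
tau = np.linspace(0.0, 40.0, 400001)       # truncated integration range
dtau = tau[1] - tau[0]
omega = 1.3
integrand = np.exp(1j * omega * tau) * np.exp(-tau)
one_sided = np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dtau  # trapezoid rule
S_num = 2.0 * np.real(one_sided)
S_exact = 2.0 / (1.0 + omega ** 2)
```

The truncation at τ = 40 is harmless here because C(τ) decays exponentially; for the slowly decaying correlations of the weak-noise oscillator a much longer window would be needed.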
Now we substitute Eq. (3.11) into Eq. (3.12), which leads to

    S_xx(ω) = 2 Re ∫d²y d²y′ f(y₁) f(y₁′) P₀(y′) ∫_0^{+∞} dτ e^{iωτ} [P(y|y′; τ) − P₀(y)]
            = 2 Re ∫d²y d²y′ f(y₁) f(y₁′) P₀(y′) B̃(y|y′; ω),       (3.13)

where d²y ≡ dy₁ dy₂ and B̃(y|y′; ω) ≡ ∫_0^{+∞} dτ e^{iωτ} [P(y|y′; τ) − P₀(y)]. The second term, proportional to P₀(y), removes the δ-singularity in P̃(y|y′; ω) ≡ ∫_0^∞ dτ e^{iωτ} P(y|y′; τ) that occurs at ω = 0 when ⟨f(y₁)⟩ ≠ 0. What Eq. (3.13) tells us is that if we can determine B̃(y|y′; ω) and P₀(y) by any means, the power spectrum will be just a couple of integrations away. The immediate step is therefore to find out which differential equations these functions satisfy. On the one hand, the transition probability density P(y|y′; τ) is known to satisfy the time-dependent Fokker-Planck equation Eq. (3.2) with initial condition P(y|y′; τ = 0) = δ(y − y′). On the other hand, the stationary probability density P₀(y) satisfies the time-independent Fokker-Planck equation,

    ∂_τ P₀(y) = 0 = L_FP(y) P₀(y).                                  (3.14)

To get an equation for B̃(y|y′; ω) we subtract Eq. (3.14) from Eq. (3.2) and perform a one-sided Fourier transform with respect to τ on the result, so that the LHS reads

    ∫_0^∞ dτ e^{iωτ} ∂_τ [P(y|y′; τ) − P₀(y)]
        = [e^{iωτ} (P(y|y′; τ) − P₀(y))]_0^∞ − iω ∫_0^{+∞} dτ e^{iωτ} [P(y|y′; τ) − P₀(y)]
        = 0 − [P(y|y′; τ = 0) − P₀(y)] − iω B̃(y|y′; ω)
        = −[δ(y − y′) − P₀(y)] − iω B̃(y|y′; ω),

where we have used the initial condition P(y|y′; τ = 0) = δ(y − y′) and neglected³ the undetermined term e^{iω∞} [P(y|y′; τ → ∞) − P₀(y)]. On the RHS, in turn, we use that L_FP(y) does not depend on τ, so that

    ∫_0^∞ dτ e^{iωτ} L_FP(y) [P(y|y′; τ) − P₀(y)] = L_FP(y) ∫_0^∞ dτ e^{iωτ} [P(y|y′; τ) − P₀(y)] = L_FP(y) B̃(y|y′; ω).

This leads to the differential equation for B̃(y|y′; ω),

    (L_FP(y) + iω I) B̃(y|y′; ω) = −[δ(y − y′) − P₀(y)],            (3.15)

where I is the identity operator. Thus, if P₀(y) is known (e.g. from solving Eq. (3.14)), B̃(y|y′; ω) can in turn be obtained as the solution to Eq.
(3.15), and we have all the ingredients to get the power spectrum S_xx(ω) through Eq. (3.13).

³ Neglecting this term can be put on more formal grounds by considering a one-sided Laplace transform as a function of a complex parameter s = σ + iω, σ, ω ∈ R, and taking the appropriate limit to a Fourier transform.
While the approach outlined above is completely valid, it turns out to be more convenient to seek an equation for

    H̃(y; ω) = ∫d²y′ f(y₁′) P₀(y′) ∫_0^∞ dτ e^{iωτ} [P(y|y′; τ) − P₀(y)]
             = ∫d²y′ f(y₁′) P₀(y′) B̃(y|y′; ω),                     (3.16)

in terms of which the power spectrum Eq. (3.13) can be expressed as

    S_xx(ω) = 2 Re ∫d²y f(y₁) H̃(y; ω),                             (3.17)

where x = f(y₁) as defined above. It is now easy to obtain an equation for H̃(y; ω) by multiplying Eq. (3.15) by f(y₁′) P₀(y′) and integrating over y′, i.e.

    ∫d²y′ f(y₁′) P₀(y′) (L_FP(y) + iω I) B̃(y|y′; ω) = −∫d²y′ f(y₁′) P₀(y′) [δ(y − y′) − P₀(y)],

which leads to

    (L_FP(y) + iω I) H̃(y; ω) = −P₀(y) [f(y₁) − ⟨f(y₁)⟩],            (3.18)

where we have used Eq. (3.16) and the sifting property of the two-dimensional Dirac delta distribution δ(y − y′) ≡ δ(y₁ − y₁′) δ(y₂ − y₂′). Note that if P₀(y) is already normalized, no additional normalization condition on H̃(y; ω) is necessary. Let us briefly recap what we have achieved with the previous manipulations: the power spectrum S_xx(ω) of a (non-linear) transformation y₁ → x = f(y₁) of one component of the two-dimensional stationary process y(t) has been expressed in Eq. (3.17) in terms of a function H̃(y; ω) which satisfies the inhomogeneous partial differential equation (PDE) Eq. (3.18). The inhomogeneity is essentially the steady-state probability density P₀(y), which can be obtained from the stationary Fokker-Planck equation Eq. (3.14). Let us also note that this formulation of the problem in terms of a PDE for an auxiliary function H̃(y; ω) instead of just P̃(y|y′; ω) has been successfully used in (Gleeson and O'Doherty, 2006) to derive several numerical and asymptotic approximations of correlation functions and spectra, even though the system tackled there is completely different. Hereinafter we focus on the model we are interested in: the noisy heteroclinic oscillator described by Eq. (3.1).
In the next section two different matrix methods are used to determine H̃(y; ω) from Eq. (3.18) for this particular model.

3.3. Solving the equations by matrix methods

3.3.1. Expansion into a complete set of functions

The solution to the two-dimensional, second-order partial differential equations (PDEs)

    L_FP(y) P₀(y) = 0,                                              (3.14 revisited)
    (L_FP(y) + iω I) H̃(y; ω) = −P₀(y) [f(y₁) − ⟨f(y₁)⟩],            (3.18 revisited)
  • 36. 3 NOISE-INDUCED OSCILLATIONS IN A HETEROCLINIC SYSTEM for the noisy heteroclinic oscillator may be found by expanding P0(y) and ˜H(y; ω) into a com- plete set of functions chosen appropriately according to the boundary conditions. As explained in Section 3.1, the system whose dynamics is described by Eq. (3.1) with reflect- ing boundary conditions on the domain Ω = [−π/2, π/2] × [−π/2, π/2] is equivalent, for certain observables (here we use f(y1) = sin(y1)), to a system with dynamics described by the same equations but periodic boundary conditions on Ω = [−π, π] × [−π, π] instead. This suggests an expansion into a basis of complex exponentials with periods Li = 2π, i = 1, 2 in both variables y1 and y2, i.e. the set {ei(mk1y1+lk2y2)}, m, l ∈ Z, with fundamental modes ki = 2π Li = 1. The expansion therefore reads P0(y) = ∞ m=−∞ ∞ l=−∞ cm,lei(k1my1+k2ly2) = ∞ m=−∞ ∞ l=−∞ cm,lei(my1+ly2) . (3.19) Analogously, ˜H(y; ω) can be written as ˜H(y; ω) = ∞ m=−∞ ∞ l=−∞ ˜Hm,l(ω)ei(my1+ly2) . (3.20) By inserting the previous expansions into the corresponding differential equation, one obtains a system of linear equations to which standard numerical matrix methods can be applied, which are described in the last part of this subsection (see Sections 3.3.2 and 3.3.3). We start by plugging the stationary density P0(y) into Eq. (3.14) and then proceed similarly for ˜H(y; ω) in Eq. (3.18). Determination of the coefficients for the stationary solution P0(y) If we insert Eq. (3.19) into Eq. (3.14), we obtain 0 = − 1 4 m,l (m − l)cm−1,l−1 + (m + l)cm+1,l−1 − (m − l)cm+1,l+1 − (m + l)cm−1,l+1 + 2αmcm−2,l + 2αlcm,l−2 − 2αmcm+2,l − 2αlcm,l+2 + 4D(m2 + l2 )cm,l eimy1 eily2 , (3.21) where we have used the definition of LFP(y) given by Eq. (3.3), sin(x) = (eix − e−ix)/(2i) and cos(x) = (eix + e−ix)/2. Redefining the indices of the sums is also necessary for some terms. 
Since the previous relation is valid for all y₁, y₂, the expression within square brackets must vanish and we have

    0 = −(1/4) [(m − l) c_{m−1,l−1} + (m + l) c_{m+1,l−1} − (m − l) c_{m+1,l+1} − (m + l) c_{m−1,l+1}
        + 2αm c_{m−2,l} + 2αl c_{m,l−2} − 2αm c_{m+2,l} − 2αl c_{m,l+2} + 4D(m² + l²) c_{m,l}],   ∀m, l ∈ Z.   (3.22)

This equation defines an infinite, homogeneous system of linear equations. The system exhibits additional symmetries which impose restrictions on the set of coefficients {c_{m,l}}. In particular, reflection through the origin, (y₁, y₂) → (−y₁, −y₂), leaves L_FP invariant, i.e. L_FP(−y) = L_FP(y), as can be readily checked. The eigenfunctions ϕ_n(y) of L_FP have
therefore a definite parity, i.e. they must be even, ϕ_n(−y) = ϕ_n(y), or odd, ϕ_n(−y) = −ϕ_n(y). From the positivity of the steady-state distribution P₀(y) ≡ ϕ₀(y) ≥ 0, we conclude

    P₀(−y) = P₀(y).                                                 (3.23)

The consequences of this relation for the coefficients of the expansion are determined by plugging Eq. (3.19) into Eq. (3.23),

    0 = P₀(y) − P₀(−y) = Σ_{m,l} c_{m,l} e^{i m y₁} e^{i l y₂} − Σ_{m,l} c_{m,l} e^{−i m y₁} e^{−i l y₂}
      = Σ_{m,l} [c_{m,l} − c_{−m,−l}] e^{i m y₁} e^{i l y₂},        (3.24)

where to obtain the second line we have transformed the indices (m, l) → (−m, −l) in the second term. Since Eq. (3.24) must be valid for all y₁, y₂, it follows that c_{−m,−l} = c_{m,l}. Moreover, P₀(y) takes values in R because it is a probability density function, so that the coefficients also satisfy c_{−m,−l} = c*_{m,l}. Putting the two conditions together, we have c_{m,l} = c_{−m,−l} ∈ R. Furthermore, the translations (y₁, y₂) → (y₁ + 2πk, y₂), (y₁, y₂) → (y₁, y₂ + 2πk) and (y₁, y₂) → (y₁ + kπ, y₂ ± kπ), k ∈ Z, also leave L_FP(y) invariant. Because we are only interested in periodic solutions, Bloch's theorem guarantees that the eigenfunctions ϕ_n(y) of L_FP must also be symmetric under these translations. For P₀(y) ≡ ϕ₀(y) this means (apart from the trivial 2π-periodicity in y₁ and y₂ already imposed by the boundary conditions)

    P₀(y₁ + kπ, y₂ ± kπ) = P₀(y₁, y₂),   k ∈ Z,

which leads to the relation

    0 = c_{m,l} [1 − e^{imπ} e^{ilπ}] = c_{m,l} [1 − (−1)^{m+l}].

Hence, c_{m,l} = 0 if m + l = 2k + 1, k ∈ Z, and only "even" coefficients (in the sense that m + l adds up to an even number) survive. The relevance of these symmetry relations satisfied by the coefficients cannot be overstated: they provide a number of sanity checks on the output of our numerical routines, while in other cases they are embedded in the implementation of the numerical method itself (see Section 3.3.3). A summary of the symmetry properties of P₀(y) and its expansion coefficients is provided in Table 3.1.
Finally, we must discuss how to normalize P₀(y) so that it has the properties of a probability density function. First, let us note that the linear system Eq. (3.22) as it stands contains an equation that depends linearly on all the others, namely that for m = l = 0, which reads 0 · c_{0,0} = 0. Since the linear system is homogeneous, the set of solutions {c_{m,l}} is infinite. A particular solution is specified by adding a normalization condition, which we restrict to the central region Ω = [−π/2, π/2] × [−π/2, π/2] bounded by the deterministic heteroclinic cycle, i.e.

    ∫_Ω dy P₀(y) = 1,   Ω = [−π/2, π/2] × [−π/2, π/2].              (3.25)
Symmetry transformation | Condition on P₀(y) | Condition on c_{m,l}
y → −y | P₀(−y) = P₀(y) | c_{−m,−l} = c_{m,l}
(y₁, y₂) → (y₁ + kπ, y₂ ± kπ), k ∈ Z | P₀(y₁ + kπ, y₂ ± kπ) = P₀(y₁, y₂) | c_{m,l} = 0, if m + l = 2k + 1
P₀(y) ∈ R | P₀(y)* = P₀(y) | c_{−m,−l} = c*_{m,l}

Table 3.1: Summary of the conditions imposed by symmetries (other than 2π-periodicity in y₁ and y₂) on the stationary probability density function P₀(y) and its expansion coefficients c_{m,l}.

We need to relate such a condition to the coefficients of the expansion on the larger region Ω′ = [−π, π] × [−π, π], which we can accomplish by substituting Eq. (3.19) into Eq. (3.25) and using the orthogonality relations for the Fourier basis

    ∫_{−π}^{+π} dy e^{−iny} e^{imy} = 2π δ_{m,n},                   (3.26)

so that we obtain

    ∫_{Ω′} dy P₀(y) = ∫_{−π}^{+π} dy₁ ∫_{−π}^{+π} dy₂ P₀(y) = (2π)² c_{0,0}.   (3.27)

Moreover, using P₀(y₁ + kπ, y₂ + kπ) = P₀(y₁, y₂), k ∈ Z, and the reflection symmetries with respect to the axes determined by y₁ = ±π/2 and y₂ = ±π/2 (not discussed here), it is possible to show

    1 = ∫_Ω dy P₀(y) = (1/4) ∫_{Ω′} dy P₀(y).                       (3.28)

From Eq. (3.27) and Eq. (3.28) it follows that

    c_{0,0} = 1/π².                                                 (3.29)

Determination of the coefficients for the auxiliary function H̃(y; ω)

Here we insert the expansions of H̃(y; ω) and P₀(y), Eq. (3.20) and Eq. (3.19) respectively, into Eq. (3.18) with f(y₁) = sin(y₁). The main difference from the derivation for P₀(y) is the fact that the RHS no longer vanishes but is instead a function of P₀(y) and the observable sin(y₁). Using the same arguments as in the derivation for P₀(y) above, we arrive at

    −(1/4) [(m − l) H̃_{m−1,l−1} + (m + l) H̃_{m+1,l−1} − (m − l) H̃_{m+1,l+1} − (m + l) H̃_{m−1,l+1}
        + 2αm H̃_{m−2,l} + 2αl H̃_{m,l−2} − 2αm H̃_{m+2,l} − 2αl H̃_{m,l+2} + (4D(m² + l²) − 4iω) H̃_{m,l}]
        = (1/(2i)) [(2π)² c_{m,l} (c_{1,0} − c_{−1,0}) − (c_{m−1,l} − c_{m+1,l})],   ∀m, l ∈ Z,   (3.30)

where H̃_{m,l}(ω) (whose frequency dependence is omitted in the following for notational convenience) and c_{m,l} are the coefficients of the expansions of H̃(y; ω) and P₀(y), respectively. Note
that the LHS here is the same as the LHS of Eq. (3.21) except for the term −4iω accompanying H̃_{m,l}(ω) inside the brackets, which is expected given that the operators on the LHS of Eq. (3.14) and Eq. (3.18) differ only by a term of the form +iω I. Equation (3.30) defines an infinite, inhomogeneous system of linear equations. As opposed to the case of P₀(y), no additional normalization condition⁴ is required here to fully specify H̃(y; ω), as P₀(y) is already normalized. The symmetries of L_FP(y) also impose certain conditions on H̃(y; ω). We briefly state here what these conditions are and what consequences they have for the coefficients H̃_{m,l} of its expansion. First of all, one can show that H̃(y; ω) = −H̃(−y; ω), which leads to H̃_{m,l} = −H̃_{−m,−l}; moreover, H̃(y₁ + kπ, y₂ + kπ; ω) = −H̃(y₁, y₂; ω), k ∈ Z, resulting in H̃_{m,l} = 0 if m + l = 2k, k ∈ Z. On the other hand, H̃_{m,l} ∈ C in general because H̃(y; ω) ∈ C. These results follow from the definition of H̃(y; ω), Eq. (3.16), and the symmetry properties of the transition density P(y|y′; τ), which can be traced back to those of the eigenfunctions of L_FP and L†_FP. A summary of the symmetry properties of H̃(y; ω) and its expansion coefficients is provided in Table 3.2.

Symmetry transformation | Condition on H̃(y; ω) | Condition on H̃_{m,l}(ω)
y → −y | H̃(y; ω) = −H̃(−y; ω) | H̃_{−m,−l} = −H̃_{m,l}
(y₁, y₂) → (y₁ + kπ, y₂ ± kπ), k ∈ Z | H̃(y₁ + kπ, y₂ ± kπ; ω) = −H̃(y₁, y₂; ω) | H̃_{m,l} = 0, if m + l = 2k

Table 3.2: Summary of the conditions imposed by symmetries (other than 2π-periodicity in y₁ and y₂) on the auxiliary function H̃(y; ω) and its expansion coefficients H̃_{m,l}(ω).

Power spectrum of sin(y₁) in terms of expansion coefficients

Let us finally look at the power spectrum S_xx(ω) in terms of the coefficients of the expansion of H̃(y; ω). By plugging Eq. (3.20) into Eq. (3.17) and using the orthogonality relations for the Fourier basis, Eq. (3.26), we obtain the relation

    S_xx(ω) = 2(2π) Re[Σ_m H̃_{m,0} ∫dy₁ f(y₁) e^{i m y₁}].
Choosing our observable to be f(y₁) = sin(y₁) = (e^{iy₁} − e^{−iy₁})/(2i) leads to

    S_xx(ω) = (2π)² Re[−i (H̃_{−1,0} − H̃_{1,0})].

The previous equation can be further simplified using H̃_{m,l} = −H̃_{−m,−l}. Hence, our final expression for the power spectrum reads

    S_xx(ω) = 2(2π)² Re[i H̃_{1,0}(ω)] = −8π² Im[H̃_{1,0}(ω)],        (3.31)

⁴ The case ω = 0 requires special treatment. In order to simplify the presentation, we consider that H̃(y; ω) (and, consequently, S(ω)) are only evaluated at ω > 0.
where Im[·] denotes the imaginary part of the argument. Remarkably, after this involved procedure the power spectrum of sin(y₁) has been expressed in terms of a single coefficient of the expansion of H̃(y; ω). In order to evaluate Eq. (3.31), the linear system Eq. (3.30) must be solved at each ω. Two different numerical methods to do so are described in the remainder of this section: the first one we call the full matrix approach, whereas the second one is the method of Matrix Continued Fractions (MCF). While the method of MCF requires a fair amount of extra preanalytical work, it is theoretically more efficient than the full matrix approach. Efficiency is of crucial importance given that the linear system has to be solved multiple times, once at each ω. Moreover, finer and finer discretizations of the frequency interval of interest will be required to resolve the sharp peaks in the power spectrum in the low-noise regime, where the oscillations become more coherent. Thus, a major question of interest is how the two methods actually compare in terms of accuracy and efficiency for our system, and whether the extra preanalytical work required by the method of MCF is worth the effort.

3.3.2. Solving the full linear system

The first numerical method used to solve the systems of linear equations Eq. (3.22) (for the coefficients c_{m,l} of the expansion of P₀(y)) and Eq. (3.30) (for the coefficients H̃_{m,l} of the expansion of H̃(y; ω)) involves no extra work: first, a specific ordering of the elements of the basis of complex exponentials {e^{imy₁} e^{ily₂}} is chosen; next, the coefficients c_{m,l} and H̃_{m,l}(ω) are arranged into single column vectors c and h_ω, respectively, according to the chosen ordering; finally, the corresponding matrices of coefficients, M and A_ω, are set up for each system.
In practice, this also requires using a truncated set of complex exponentials, {e^{imy_1} e^{ily_2}}, −L ≤ m, l ≤ L, so that we end up with a finite system of linear equations. Let us make the previous statements more precise. The ordering of the basis functions is chosen such that c has the form

c^{(L)} = (c_{−L,−L}, · · · , c_{−L,L}, · · · , c_{0,−L}, · · · , c_{0,L}, · · · , c_{L,−L}, · · · , c_{L,L})^T,

where T denotes transposition and we have made explicit the dependence of the size of c on L by writing c^{(L)}. An analogous form holds for h_ω (from now on we use only c in the discussion, with the understanding that everything related to the arrangement of the coefficients applies to h_ω as well). The key point is to realize that each coefficient c_{m,l}, associated with a basis function e^{imy_1} e^{ily_2}, now corresponds to an entry of the column vector c^{(L)}, i.e.

c^{(L)}_{(m+L)(2L+1)+(l+L)} = c_{m,l},    −L ≤ m, l ≤ L.

Once the ordering of the basis has been established, one can set the elements of the matrices of coefficients M and A_ω according to Eq. (3.22) and Eq. (3.30), as well as the inhomogeneity f on the RHS of Eq. (3.30). Explicit expressions for these matrices are not included here but are given in Appendix A. We thus have to solve the following linear systems in matrix form:

M^{(L)}(D, α) c^{(L)} = 0,    (3.32)
A^{(L)}_ω(D, α; ω) h^{(L)}_ω = f^{(L)},    (3.33)
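The index bookkeeping described above can be sketched as follows. The helper names are ours, for illustration only; they flatten the double index (m, l) into the 0-based position of c_{m,l} in c^{(L)} according to the chosen ordering.

```python
# Flatten the double index (m, l), -L <= m, l <= L, into the 0-based position
# of the coefficient c_{m,l} in the column vector c^(L), following the ordering
# (c_{-L,-L}, ..., c_{-L,L}, ..., c_{L,-L}, ..., c_{L,L})^T.

def flat_index(m, l, L):
    """0-based position of the coefficient c_{m,l} in c^(L)."""
    assert -L <= m <= L and -L <= l <= L
    return (m + L) * (2 * L + 1) + (l + L)

def double_index(k, L):
    """Inverse map: recover (m, l) from the flat position k."""
    q, r = divmod(k, 2 * L + 1)
    return q - L, r - L
```

With this map, filling the entries of M^{(L)} and A^{(L)}_ω reduces to translating each pair of coupled double indices into a (row, column) pair.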
3.3 Solving the equations by matrix methods

where we have emphasized the dependence of the entries of M and A_ω on the noise intensity D, the stability parameter α and the frequency ω at which we evaluate the power spectrum. Note that we only need to solve Eq. (3.32) once, so that we can use c to construct f. Eq. (3.33) is then solved for h_ω at a set of discrete frequencies {ω_i}, i ∈ {1, . . . , N}, to obtain an approximation to the power spectrum S(ω) in a given frequency window.

The truncation parameter L determines the size of the vectors and matrices introduced above. In particular, the dimension of c^{(L)} and h^{(L)}_ω is (2L + 1)^2, while that of M^{(L)} and A^{(L)}_ω is (2L + 1)^2 × (2L + 1)^2. Thus, even relatively small values of L can lead to very large matrices, which poses serious numerical problems: on the one hand, if the matrix is large, storing all its elements can consume a significant amount of memory; on the other hand, as the size of the coefficient matrix grows, the number of operations required to numerically solve the system of equations increases dramatically. Using efficient sparse matrix methods, which is justified since our matrices contain very few non-zero entries, helps to partially overcome these problems. In particular, the high dimensionality of the problem becomes a fundamental issue in the low-noise regime of our system, where the probability distributions display very abrupt changes on small spatial scales. To accurately describe such peaked distributions, one needs to include modes e^{imy_1} e^{ily_2} with higher spatial frequencies |k| = √(m^2 + l^2) in the expansions of P_0(y) and H̃(y; ω), which is equivalent to increasing L (see Figure 3.2). Thus, we experience serious limitations when applying this method in the weak-noise regime, even when using sparse methods.
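A minimal sketch of the sparse workflow just motivated: build the coefficient matrix in coordinate (COO) form, convert to CSR, and call scipy's sparse direct solver. The entries below are placeholders standing in for the actual matrix elements of A_ω listed in Appendix A.

```python
# Sketch only: a placeholder banded matrix of the same size (2L+1)^2 as the
# truncated system, assembled sparsely and solved without forming an inverse.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

L = 10
dim = (2 * L + 1) ** 2                  # (2L+1)^2, size of the truncated system

rows, cols, vals = [], [], []
for k in range(dim):                    # placeholder banded structure
    rows.append(k); cols.append(k); vals.append(4.0)
    if k + 1 < dim:
        rows.append(k); cols.append(k + 1); vals.append(-1.0)
        rows.append(k + 1); cols.append(k); vals.append(-1.0)
A = sp.csr_matrix((vals, (rows, cols)), shape=(dim, dim), dtype=complex)

f = np.zeros(dim, dtype=complex)
f[dim // 2] = 1.0                       # placeholder inhomogeneity

h = spla.spsolve(A, f)                  # direct sparse solve, no explicit inverse
```

Only the non-zero entries are stored, so the memory cost scales with the number of couplings rather than with (2L+1)^4.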
The numerical implementation has been carried out in the Python programming language, for which extensive linear algebra libraries are available, such as those in the numpy and scipy packages. The use of sparse matrices and of sparse diagonalization and linear solver methods [5], implemented in the scipy.sparse.linalg library, has significantly reduced the computational time and the memory demands of this method.

From the discussion above it is not clear why we had to use (sparse) diagonalization methods at some point. In fact, this is not strictly necessary, since the present method only requires solving two linear systems. However, recall that Eq. (3.32) is simply a truncated matrix version of the stationary Fokker-Planck equation L_FP P_0(y) = 0, so that M^{(L)} is nothing but the expression of L_FP(y) in the basis of complex exponentials. Hence, solving for P_0(y) (respectively, c^{(L)}) is equivalent to determining the eigenvector associated with the eigenvalue λ = 0 of L_FP (respectively, M^{(L)}), provided we renormalize the eigenvector such that the normalization condition Eq. (3.29) is satisfied. Calculating c^{(L)} in this way turns out to be convenient, since it allows us to simultaneously determine the first non-zero eigenvalues of L_FP (i.e. the least negative ones), which provide insight into the oscillatory properties of the system. The price to pay for obtaining the extra eigenvalues is a longer computation time.

Finally, the resulting power spectrum S(ω) should clearly be independent of the truncation parameter L. In our implementation, we repeat the above procedure for different values of L until S(ω) no longer changes (over the set of discrete frequencies {ω_i}) by more than 1% upon increasing L. Our stopping criterion reads

ε = max_i |S^{(L)}(ω_i) − S^{(L')}(ω_i)| / S^{(L')}(ω_i) ≤ 0.01,    (3.34)

where S^{(L)}(ω_i) is the power spectrum at ω_i obtained with truncation parameter L, and L' is the increased truncation parameter it is compared against.
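The stopping criterion of Eq. (3.34) can be sketched as follows; the function names are ours. The spectra obtained with two truncation parameters L and L' are compared on the common frequency grid {ω_i}, and the iteration stops once the maximal relative change is at most 1%.

```python
# Sketch of the truncation test of Eq. (3.34).
import numpy as np

def truncation_error(S_L, S_Lprime):
    """epsilon = max_i |S^(L)(w_i) - S^(L')(w_i)| / S^(L')(w_i)."""
    S_L = np.asarray(S_L, dtype=float)
    S_Lprime = np.asarray(S_Lprime, dtype=float)
    return np.max(np.abs(S_L - S_Lprime) / S_Lprime)

def converged(S_L, S_Lprime, tol=0.01):
    """True if increasing the truncation changed the spectrum by <= tol."""
    return truncation_error(S_L, S_Lprime) <= tol
```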
[5] For reasons of efficiency, it is recommended to avoid numerically inverting matrices when possible (Heath, 2002) and to use linear solvers instead.
[Figure 3.2: Truncation parameter L of the full matrix approach as a function of the inverse noise intensity D^{−1}. The truncation parameter is chosen such that the accuracy satisfies ε ≤ 0.01 according to Eq. (3.34).]

Let us summarise for clarity the rather involved protocol described above:

1. Set up the sparse matrices M^{(L)}(D, α) and A^{(L)}_ω(D, α; ω) (see entries in Appendix A).
2. Solve Eq. (3.32) for c^{(L)} using a sparse diagonalization method.
3. Use c^{(L)} to construct f^{(L)} on the RHS of Eq. (3.33).
4. Solve Eq. (3.33) for h^{(L)}_{ω_i} at {ω_i}, i ∈ {1, . . . , N}.
5. Obtain S^{(L)}(ω_i) from h^{(L)}_{ω_i} using Eq. (3.31).
6. Check the precision using Eq. (3.34); if the test is not passed, increase L and repeat from step 1.

Despite the significant optimization gained by using sparse matrix methods, this approach remains computationally demanding in the weak-noise regime, both because of the large size of the matrices and because of the finer discretization required to resolve the sharp peaks. A more efficient (though not necessarily more accurate) method to perform the same task, the method of Matrix Continued Fractions, is presented next.

3.3.3. Solving by the matrix continued-fraction method

The method of Matrix Continued Fractions (MCF) is discussed at length in Chapters 9 and 11 of (Risken, 1984), whose presentation we partially follow. The MCF method takes advantage of the structure of certain systems of linear equations in order to solve them more efficiently, effectively reducing the size of the matrices involved in the computations. In particular, it is a method to solve so-called tridiagonal vector recurrence relations

Q^−_n v_{n−1} + Q^0_n v_n + Q^+_n v_{n+1} = f_n,    (3.35)
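To make the structure of Eq. (3.35) concrete, a finite (truncated) block-tridiagonal system can be solved directly by block forward elimination and back substitution. The dense sketch below is our own illustration of that structure, not the MCF algorithm itself, which exploits it more efficiently.

```python
# Solve Qm[n] v[n-1] + Q0[n] v[n] + Qp[n] v[n+1] = f[n], n = 0..N-1, with the
# truncation v[-1] = v[N] = 0, by block forward elimination / back substitution.
import numpy as np

def solve_block_tridiag(Qm, Q0, Qp, f):
    N = len(Q0)
    C = [None] * N                       # modified upper blocks
    g = [None] * N                       # modified right-hand sides
    Dn = Q0[0]
    C[0] = np.linalg.solve(Dn, Qp[0])
    g[0] = np.linalg.solve(Dn, f[0])
    for n in range(1, N):                # forward elimination
        Dn = Q0[n] - Qm[n] @ C[n - 1]
        if n < N - 1:
            C[n] = np.linalg.solve(Dn, Qp[n])
        g[n] = np.linalg.solve(Dn, f[n] - Qm[n] @ g[n - 1])
    v = [None] * N                       # back substitution
    v[N - 1] = g[N - 1]
    for n in range(N - 2, -1, -1):
        v[n] = g[n] - C[n] @ v[n + 1]
    return v
```

Each elimination step only ever manipulates blocks of the size of a single v_n, which is the size reduction the MCF method formalizes.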
which involves the matrices Q^±_n, Q^0_n and the vectors v_n; the relations can be one-sided (n ≥ 0) or two-sided (n ∈ Z). Tridiagonal vector recurrence relations appear frequently when expanding the solutions of partial differential equations (such as the Fokker-Planck equation) into complete sets of functions. The question is thus how to cast the relations Eq. (3.22) and Eq. (3.30), satisfied by the coefficients of P_0(y) and H̃(y; ω), into tridiagonal vector recurrence relations. This usually requires some amount of analytical work, which is why the method of matrix continued fractions is often said to be semianalytical. In the following we outline how to obtain such relations (a more systematic approach is presented in Appendix B) and describe the different methods of solution, which differ for homogeneous (f_n = 0) and inhomogeneous recurrence relations.

Homogeneous recurrence relation for {c_{m,l}}

Let us recall Eq. (3.22), which can be rewritten as

0 = −(1/4) [ D^−_1(m, l) (c_{m−2,l}, c_{m−1,l−1}, c_{m,l−2})^T + D^0_1(m, l) (c_{m−1,l+1}, c_{m,l}, c_{m+1,l−1})^T + D^+_1(m, l) (c_{m,l+2}, c_{m+1,l+1}, c_{m+2,l})^T ]    (3.36)
  ≡ −(1/4) [ D^−_1(m, l) c_{m+l−2} + D^0_1(m, l) c_{m+l} + D^+_1(m, l) c_{m+l+2} ],    m, l ∈ Z,    (3.37)

where D^±_1(m, l) and D^0_1(m, l) are the row vectors

D^−_1(m, l) = (2αm, m − l, 2αl),
D^0_1(m, l) = (−(m + l), 4D(m^2 + l^2), m + l),
D^+_1(m, l) = (−2αl, −(m − l), −2αm).

This form is very illustrative, since it suggests grouping the coefficients c_{m,l} with m + l = constant into vectors: note how c_{m+l−2}, c_{m+l} and c_{m+l+2} in Eq. (3.36) contain only coefficients whose indices add up to m + l − 2, m + l and m + l + 2, respectively. Indeed, it turns out that by extending c_{m+l} to

c_{2n} = ( . . . , c_{n−l,n+l}, . . . , c_{n−1,n+1}, c_{n,n}, c_{n+1,n−1}, . . . , c_{n+l,n−l}, . . . )^T,    (3.38)

it is possible to cast our starting relation Eq. (3.36) into a homogeneous, tridiagonal vector recurrence relation between c_{2(n−1)}, c_{2n} and c_{2(n+1)}:
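The grouping just described can be illustrated with a small helper (ours, for illustration only): component k of the vector c_{2n} is the coefficient c_{n+k, n−k}, so every entry shares the index sum m + l = 2n.

```python
# Assemble (c_{2n})_k = c_{n+k, n-k} for k = -K..K from a dict {(m, l): value};
# coefficients not present in the dict are taken to be zero.
def c_vector(coeffs, n, K):
    return [coeffs.get((n + k, n - k), 0.0) for k in range(-K, K + 1)]
```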
Q^−_{2n} c_{2(n−1)} + Q^0_{2n} c_{2n} + Q^+_{2n} c_{2(n+1)} = 0,    n ∈ Z.    (3.39)

Explicit expressions for the entries of Q^±_{2n} and Q^0_{2n}, which depend on the parameters D and α, are given in Appendix B. The k-th component of the vector c_{2n} displayed in Eq. (3.38) can also be written as (c_{2n})_k = c_{n+k,n−k}, k ∈ Z. Let us emphasize two key features of this relation: on the one hand, the vectors c_{2n} are labelled by the sum m + l of the indices of the coefficients c_{m,l} they contain; on the other hand, all “odd” vectors {c_{2k+1}} vanish identically, since they contain only “odd” coefficients c_{m,l} = 0 with m + l = 2k + 1, k ∈ Z. What allows for such a useful rearrangement is the underlying symmetry L_FP(y_1 + kπ, y_2 ± kπ) = L_FP(y_1, y_2), k ∈ Z, which has been discussed above.

The homogeneous, tridiagonal vector recurrence relation Eq. (3.39) can now be solved by the MCF method. The trick is the following: we first introduce a “ladder” matrix S^+_{2n}, defined by

c_{2(n+1)} = S^+_{2n} c_{2n},    (3.40)

that connects “even” coefficients. The normalization condition and the symmetries of the Fokker-Planck equation determine the entries of c_0; hence the knowledge of the sequence of matrices {S^+_{2n}}, n ≥ 0, is sufficient to determine {c_{2n}}, n ≥ 0. In fact, even though Eq. (3.39) is in principle two-sided, we can use the reflection symmetry leading to c_{−m,−l} = c_{m,l} to relate c_{2n} and c_{−2n} as follows:

c_{−2n} ≡ ( . . . , c_{−n−l,−n+l}, . . . , c_{−n,−n}, . . . , c_{−n+l,−n−l}, . . . )^T = ( . . . , c_{n+l,n−l}, . . . , c_{n,n}, . . . , c_{n−l,n+l}, . . . )^T = U c_{2n},

where the transformation matrix U can be identified by inspection as

    ( 0 · · · 0 1 )
U = ( ⋮       1 0 )
    ( 0   ⋰     ⋮ )    (3.41)
    ( 1 0 · · · 0 ),

i.e. a rotated identity matrix that flips upside down the components of the vectors upon which it acts.
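The matrix U of Eq. (3.41) is simply the exchange matrix, i.e. the identity with its columns reversed; acting on a vector it reverses the order of the components, which is exactly how c_{−2n} = U c_{2n} encodes the symmetry c_{−m,−l} = c_{m,l}. A one-line construction:

```python
# The exchange (anti-diagonal identity) matrix of Eq. (3.41).
import numpy as np

def exchange_matrix(n):
    return np.fliplr(np.eye(n))
```

Note that U is an involution, U^2 = 1, consistent with applying the reflection twice.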
Thus, given c_0 and {S^+_{2n}}, n ≥ 0, the whole sequence {c_{2n}}, n ∈ Z, can be determined. The matrices S^+_{2n} can be obtained by inserting Eq. (3.40) into the recurrence relation, which yields

Q^−_{2n} c_{2(n−1)} + Q^0_{2n} S^+_{2(n−1)} c_{2(n−1)} + Q^+_{2n} S^+_{2n} S^+_{2(n−1)} c_{2(n−1)} = 0.
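Since this relation must hold for arbitrary c_{2(n−1)}, the prefactor itself must vanish, which gives the downward recursion S^+_{2(n−1)} = −(Q^0_{2n} + Q^+_{2n} S^+_{2n})^{−1} Q^−_{2n}, iterated from a truncation S^+_{2N} = 0. A sketch of this standard MCF step (Qm, Q0, Qp are hypothetical callables returning the blocks of Eq. (3.39) for a given n):

```python
# Downward matrix-continued-fraction recursion for the ladder matrices,
# starting from the truncation S^+_{2N} = 0.
import numpy as np

def ladder_matrices(Qm, Q0, Qp, N):
    """Return {n: S^+_{2n}} for n = 0..N."""
    b = Q0(N).shape[0]
    S = {N: np.zeros((b, b))}
    for n in range(N, 0, -1):
        # S^+_{2(n-1)} = -(Q^0_{2n} + Q^+_{2n} S^+_{2n})^{-1} Q^-_{2n}
        S[n - 1] = -np.linalg.solve(Q0(n) + Qp(n) @ S[n], Qm(n))
    return S
```

Each step only inverts (solves with) a matrix of the block size, rather than with the full truncated system, which is the efficiency gain of the MCF method.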